Researchers at the Ishikawa Watanabe Laboratory have created magic (or at least a really cool projection mapping demo). Using some really expensive and fancy equipment, they can projection map onto moving objects at extremely high speeds. We’ve seen projection mapping onto moving objects with Box, the Miley Cyrus tour and FaceHacking/Omote, but nothing nearly this fast. Go ahead and re-watch the Omote video. The actors/display surfaces look like they are moving through molasses.
The main issue with real-time projection mapping is getting everything to work fast enough. To projection map an object in real time, the whole pipeline has to run through the following steps (a rough sketch of this loop follows the list):
1. The camera (or depth camera) observes the scene.
2. The camera sends this data to the computer.
3. The computer processes this data to track the object.
4. The computer renders an image to display on the object.
5. This image is sent to the projector.
6. The projector displays the image.
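To make the latency problem concrete, here is a minimal sketch of that loop in Python. The class and function names are purely illustrative stand-ins for real camera, tracking, rendering, and projector components, not the lab's actual software.

```python
import time

# Hypothetical stand-ins for real hardware. An actual system would replace
# these with a camera SDK, a tracker, a renderer, and a projector API.
class Camera:
    def grab_frame(self):
        return {"timestamp": time.time()}       # step 1: observe the scene

class Tracker:
    def locate_object(self, frame):
        return (0.0, 0.0)                       # step 3: estimate object position

class Renderer:
    def draw(self, position):
        return f"image centered at {position}"  # step 4: render content for that position

class Projector:
    def show(self, image):
        pass                                    # step 6: display the image

def mapping_loop(camera, tracker, renderer, projector):
    """One iteration of the real-time projection-mapping pipeline.
    Every step adds latency, and the object keeps moving the whole time."""
    frame = camera.grab_frame()              # steps 1-2: capture + transfer
    position = tracker.locate_object(frame)  # step 3: tracking
    image = renderer.draw(position)          # step 4: rendering
    projector.show(image)                    # steps 5-6: transfer + display

if __name__ == "__main__":
    mapping_loop(Camera(), Tracker(), Renderer(), Projector())
```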
As an example, take your hand and wave it up and down 1 ft, so that you complete a cycle in 1 second. Your hand was just moving at 24 inches/sec. If you wanted to project an image onto your moving hand, and have the projection be less than 1/2 inch off, then you would need the entire process of steps 1-6 to take less than 1/48 of a second, or ~21 ms. Unfortunately, web cameras can have a latency of greater than 100 ms. The video projector adds even more latency: with an 80 Hz projector you get a frame every 12.5 ms, but projectors frequently buffer 3-4 frames in memory to do processing on the image, resulting in up to 50 ms of latency. Add in your data-transfer time and the computation for tracking and rendering, and you could easily see 200 ms of latency. So, when you are waving your hand, your image would be 4.8 inches off. Ouch!
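Here is the same back-of-the-envelope arithmetic as a quick Python sketch, using only the numbers from the paragraph above:

```python
# Back-of-the-envelope latency budget for the hand-waving example above.
hand_speed_in_per_s = 24.0   # 2 ft of travel per 1 s cycle = 24 in/s
max_error_in = 0.5           # allowed misalignment of the projection

# Maximum end-to-end latency that keeps the error under 1/2 inch:
budget_s = max_error_in / hand_speed_in_per_s
print(f"latency budget: {budget_s * 1000:.1f} ms")   # ~20.8 ms

# Error you would actually see with a typical 200 ms pipeline:
typical_latency_s = 0.200
print(f"error at 200 ms: {hand_speed_in_per_s * typical_latency_s:.1f} in")  # 4.8 in
```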
So how does Lumipen work so fast? First, they use a super expensive high-speed (1000 fps) camera to observe the scene and track the sphere. Second, instead of changing the projected image itself really fast, they use a high-speed mirror called a Saccade Mirror (seen below) to change where the projected image lands really fast. The Saccade Mirror has < 1 ms of latency, so the projected image virtually sticks to the physical object. Lumipen 2.0 uses a retro-reflective material as the background, which makes the foreground objects easy to track.
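To get a feel for why the retro-reflective background helps, here is a rough, purely illustrative sketch of the tracking-and-steering idea in Python. None of this is the lab's actual code: the threshold, the proportional gain, and the pixel-to-angle mapping are all made-up assumptions.

```python
import numpy as np

def track_foreground(frame, background_threshold=200):
    """With a retro-reflective background, background pixels come back very
    bright, so the foreground object is simply the dark blob. Returns its
    centroid in pixel coordinates, or None if nothing is in view."""
    foreground = frame < background_threshold   # dark pixels = the object
    ys, xs = np.nonzero(foreground)
    if xs.size == 0:
        return None
    return xs.mean(), ys.mean()

def steer_mirror(centroid, frame_shape, gain=0.05):
    """Convert the pixel offset from the image center into pan/tilt angles for
    the fast mirror. 'gain' is a made-up proportional constant; a real system
    would calibrate this mapping carefully."""
    cx, cy = frame_shape[1] / 2, frame_shape[0] / 2
    x, y = centroid
    return gain * (x - cx), gain * (y - cy)

# At 1000 fps this loop body has roughly 1 ms to run, so it has to stay trivial.
frame = np.full((480, 640), 255, dtype=np.uint8)   # fake all-bright frame
frame[200:240, 300:340] = 30                       # fake dark object
centroid = track_foreground(frame)
if centroid is not None:
    pan, tilt = steer_mirror(centroid, frame.shape)
    print(pan, tilt)
```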
[Images: the Saccade Mirror]
This is actually the second incarnation of the Lumipen project. See the video below for more on Lumipen 1.0.
Video: http://www.youtube.com/watch?v=ZuSUHuSceYc
Check out more of their work here.