Speed-of-light 'nano-camera' produces 3D translucent objects

Inexpensive ‘nano-camera’ can operate at the speed of light
MIT students (left to right) Ayush Bhandari, Refael Whyte and Achuta Kadambi pose next to their "nano-camera" that can capture translucent objects, such as a glass vase, in 3-D. Credit: BRYCE VICKMARK

A $500 "nano-camera" that can operate at the speed of light has been developed by researchers in the MIT Media Lab.

The three-dimensional camera, which was presented last week at Siggraph Asia in Hong Kong, could be used in medical imaging and collision-avoidance detectors for cars, and to improve the accuracy of motion tracking and gesture-recognition devices used in interactive gaming.

The camera is based on "Time of Flight" technology like that used in Microsoft's recently launched second-generation Kinect device, in which the location of objects is calculated by how long it takes a light signal to reflect off a surface and return to the sensor. However, unlike existing devices based on this technology, the new camera is not fooled by rain, fog, or even translucent objects, says co-author Achuta Kadambi, a graduate student at MIT.

"Using the current state of the art, such as the new Kinect, you cannot capture translucent objects in 3-D," Kadambi says. "That is because the light that bounces off the transparent object and the background smear into one pixel on the camera. Using our technique you can generate 3-D models of translucent or near-transparent objects."

In a conventional Time of Flight camera, a light signal is fired at a scene, where it bounces off an object and returns to strike the pixel. Since the speed of light is known, it is then simple for the camera to calculate the distance the signal has travelled and therefore the depth of the object it has been reflected from.
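The arithmetic behind this is straightforward: the signal covers the camera-to-object distance twice, so depth is half the round-trip travel time multiplied by the speed of light. A minimal sketch, with purely illustrative numbers:

```python
C = 299_792_458.0  # speed of light, m/s

def depth_from_round_trip(t_seconds: float) -> float:
    """Depth of a reflecting surface from the round-trip travel time.

    The signal travels camera -> object -> camera, so halve the distance.
    """
    return C * t_seconds / 2.0

# A 10-nanosecond round trip corresponds to roughly 1.5 m of depth,
# which is why nanosecond timing resolves everyday scene geometry.
print(depth_from_round_trip(10e-9))
```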

Unfortunately though, changing environmental conditions, semitransparent surfaces, edges, or motion all create multiple reflections that mix with the original signal and return to the camera, making it difficult to determine which is the correct measurement.

Instead, the new device uses an encoding technique commonly used in the telecommunications industry to calculate the distance a signal has travelled, says Ramesh Raskar, an associate professor of media arts and sciences and leader of the Camera Culture group within the Media Lab, who developed the method alongside Kadambi, Refael Whyte, Ayush Bhandari, and Christopher Barsi at MIT and Adrian Dorrington and Lee Streeter from the University of Waikato in New Zealand.

"We use a new method that allows us to encode information in time," Raskar says. "So when the data comes back, we can do calculations that are very common in the telecommunications world, to estimate different distances from the single signal."
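The team's actual coding scheme is not spelled out in the article, but the telecommunications idea it borrows can be illustrated with a standard low-sidelobe sequence. In this sketch, a Barker-13 code stands in for the camera's illumination coding (an assumption, not the team's real code): two reflections at different delays smear together in one pixel, and cross-correlating the mixed signal against the known code recovers both delays.

```python
import numpy as np

# Barker-13: a classic sequence whose autocorrelation sidelobes never
# exceed 1, used here as a stand-in for the camera's illumination code.
code = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

# Two reflections -- e.g. a translucent surface and the background behind
# it -- return at different delays and mix in a single pixel.
n = 64
received = np.zeros(n)
for delay, amplitude in [(20, 1.0), (26, 0.6)]:
    received[delay:delay + len(code)] += amplitude * code

# Cross-correlating the mixed signal with the known code produces a sharp
# peak at each path delay, separating the overlapping returns.
corr = np.correlate(received, code, mode="valid")
delays = np.flatnonzero(corr > 0.5 * corr.max())
print(delays.tolist())  # both delays recovered: [20, 26]
```

The key point mirrors the quote above: because the transmitted waveform is known, several distances can be estimated from a single mixed signal rather than one averaged distance.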

The idea is similar to existing techniques that clear blurring in photographs, says Bhandari, a graduate student in the Media Lab. "People with shaky hands tend to take blurry photographs with their cellphones because several shifted versions of the scene smear together," Bhandari says. "By placing some assumptions on the model—for example that much of this blurring was caused by a jittery hand—the image can be unsmeared to produce a sharper picture."

The new model, which the team has dubbed nanophotography, unsmears the individual optical paths.

In 2011 Raskar's group unveiled a trillion-frame-per-second camera capable of capturing a single pulse of light as it travelled through a scene. The camera does this by probing the scene with a femtosecond impulse of light, then uses fast but expensive laboratory-grade optical equipment to take an image each time. However, this "femto-camera" costs around $500,000 to build.

In contrast, the new "nano-camera" probes the scene with a continuous-wave signal that oscillates at nanosecond periods. This allows the team to use inexpensive hardware—off-the-shelf light-emitting diodes (LEDs) can strobe at nanosecond periods, for example—meaning the camera can reach a time resolution within one order of magnitude of femtophotography while costing just $500.
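In a continuous-wave system like this, depth is typically recovered from the phase shift of the returning modulated signal rather than from a direct time stamp. A minimal sketch under assumed, illustrative parameters (a 100 MHz modulation, i.e. a 10 ns period; the article does not state the actual frequency):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def cw_tof_depth(phase_rad: float, mod_freq_hz: float) -> float:
    """Depth from the phase shift of a continuous-wave signal.

    The round trip delays the modulation by 2*pi*f * (2d/c), so
    d = c * phase / (4*pi*f). Depth is unambiguous only within c / (2f).
    """
    return C * phase_rad / (4 * math.pi * mod_freq_hz)

# Illustrative numbers: at 100 MHz, a quarter-cycle phase shift
# corresponds to about 37.5 cm of depth.
print(cw_tof_depth(math.pi / 2, 100e6))
```

The unambiguous-range caveat in the docstring is why such cameras often combine several modulation frequencies; that design choice is general continuous-wave practice, not a claim about this specific device.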

"By solving the multipath problem, essentially just by changing the code, we are able to unmix the light paths and therefore visualize light moving across the scene," Kadambi says. "So we are able to get similar results to the $500,000 camera, albeit of slightly lower quality, for just $500."

Conventional cameras see an average of the light arriving at the sensor, much like the human eye, says James Davis, an associate professor of computer science at the University of California at Santa Cruz. In contrast, the researchers in Raskar's laboratory are investigating what happens when they take a camera fast enough to see that some light makes it from the "flash" back to the camera sooner, and apply sophisticated computation to the resulting data, Davis says.

"Normally the computer scientists who could invent the processing on this data can't build the devices, and the people who can build the devices cannot really do the computation," he says. "This combination of skills and techniques is really unique in the work going on at MIT right now."

What's more, the basic technology needed for the team's approach is very similar to that already being shipped in devices such as the new version of Kinect, Davis says. "So it's going to go from expensive to cheap thanks to video games, and that should shorten the time before people start wondering what it can be used for," he says. "And by the time that happens, the MIT group will have a whole toolbox of methods available for people to use to realize those dreams."

This story is republished courtesy of MIT News (web.mit.edu/newsoffice/), a popular site that covers news about MIT research, innovation and teaching.

Citation: Speed-of-light 'nano-camera' produces 3D translucent objects (2013, November 26) retrieved 20 October 2019 from https://phys.org/news/2013-11-speed-of-light-nano-camera-3d-translucent.html

User comments

Nov 26, 2013
just use infrared light

Nov 26, 2013
Expiorer mumbled or blurted:
"just use infrared light"
Really? What a nice idea.

Isn't that the technique already used, or are you claiming any visible-light flashes are too short to see?

Or maybe you were thinking the LED mentioned in the phys.org article couldn't be IR?

IR and UV LEDs are common and cheap; I have several in my lab.

Did Expiorer mean something else, perhaps? If so, please elaborate so we can be precise about the source of your comment (and learn something new; I love being an intellectual sponge).

Nov 27, 2013
Wow, a camera that works at the speed of light!

In other news: a stereo that works at the speed of sound!

Nov 27, 2013
To expand on my previous comment: the entire title of this otherwise rather interesting article is completely inaccurate, misleading, and mostly plain wrong!

- there is nothing in this article about anything moving at the speed of light. The only remarkable thing related to what is colloquially referred to as speed is the camera's framerate, which is in fact a frequency, not a speed - much less the speed of light!
- the term 'nano-camera' begs for an explanation - since it doesn't describe a physical property of the 'camera', it is misleading without one!
- it doesn't 'produce' translucent objects, it accurately scans them - that is the exact opposite

Nov 29, 2013
I'd like to publicly congratulate and thank the team of students and researchers who developed this camera. They've been able to do something at 1/1000th of the previous cost, which is an accomplishment. Hopefully this technology will develop into something that improves our lives.

I can't quite think of anything along those lines, but I know there are minds out there that might.

I'd be interested in that, more so than discussions of semantics regarding the choices of words made by those summarizing the research or the editors who don't have the breadth and depth of knowledge of readers here.
