4-D camera could improve robot vision, virtual reality and self-driving cars

August 7, 2017
Two 138-degree light field panoramas (top and center) and a depth estimate of the second panorama (bottom). Credit: Stanford Computational Imaging Lab and Photonic Systems Integration Laboratory at UC San Diego

Engineers at Stanford University and the University of California San Diego have developed a camera that generates four-dimensional images and captures a 138-degree field of view. The new camera—the first single-lens, wide-field-of-view light field camera—could generate information-rich images and video frames that enable robots to better navigate the world and understand certain aspects of their environment, such as object distance and surface texture.

The researchers also see applications in autonomous vehicles and in augmented and virtual reality. They presented the camera at the computer vision conference CVPR 2017 in July.

"We want to consider what would be the right camera for a robot that drives or delivers packages by air. We're great at making cameras for humans but do robots need to see the way humans do? Probably not," said Donald Dansereau, a postdoctoral fellow in electrical engineering at Stanford and the first author of the paper.

The project is a collaboration between the labs of electrical engineering professors Gordon Wetzstein at Stanford and Joseph Ford at UC San Diego.

UC San Diego researchers designed a spherical lens that provides the camera with an extremely wide field of view, encompassing nearly a third of the circle around the camera. Ford's group had previously developed the spherical lenses under the DARPA "SCENICC" (Soldier CENtric Imaging with Computational Cameras) program to build a compact video camera that captures 360-degree images in high resolution, with 125 megapixels in each video frame. In that project, the video camera used fiber optic bundles to couple the spherical images to conventional flat focal planes, delivering high performance but at high cost.

The new camera uses a version of the spherical lenses that eliminates the fiber bundles through a combination of lenslets and digital signal processing. Combining the optics design and system integration hardware expertise of Ford's lab and the signal processing and algorithmic expertise of Wetzstein's lab resulted in a digital solution that not only leads to the creation of these extra-wide images but enhances them.

The new camera also relies on a technology developed at Stanford called light field photography, which is what adds a fourth dimension to this camera—it captures the two-axis direction of the light hitting the lens and combines that information with the 2-D image. Another noteworthy feature of light field photography is that it allows users to refocus images after they are taken, because the images include information about the position and direction of the light. Robots could use this technology to see through rain and other things that could obscure their vision.
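The after-the-fact refocusing described above can be sketched with the standard shift-and-sum computation used in light field photography: each (u, v) sub-aperture view is shifted in proportion to its offset from the aperture center, then all views are averaged. This is a minimal illustration of the generic technique, not the researchers' actual pipeline; the array layout and the `alpha` refocus parameter are assumptions for the sketch.

```python
import numpy as np

def refocus(lightfield, alpha):
    """Synthetic refocusing by shift-and-sum.

    lightfield: 4-D array of shape (U, V, H, W), holding one
        sub-aperture image per (u, v) lens position.
    alpha: refocus parameter; 0 keeps the original focal plane,
        other values move the synthetic focal plane nearer or farther.
    """
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Shift each sub-aperture view in proportion to its
            # offset from the aperture center, then accumulate.
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

# Tiny synthetic example: a constant light field refocuses to itself.
lf = np.ones((3, 3, 8, 8))
img = refocus(lf, alpha=1.0)
```

Because the full 4-D light field is stored, `alpha` can be swept at playback time to produce a focal stack from a single exposure, which is what makes post-capture refocusing possible.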

"One of the things you realize when you work with an omnidirectional camera is that it's impossible to focus in every direction at once—something is always close to the camera, while other things are far away," Ford said. "Light field imaging allows the captured video to be refocused during replay, as well as single-aperture depth mapping of the scene. These capabilities open up all kinds of applications in VR and robotics."
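Ford's point about single-aperture depth mapping can be illustrated the same way: sweep the refocus parameter and, at each pixel, keep the parameter where the shifted sub-aperture views agree best (a depth-from-focus heuristic, with low variance across views indicating good focus). This is a hedged sketch of the general idea under assumed array conventions, not the published method.

```python
import numpy as np

def depth_from_lightfield(lightfield, alphas):
    """Per-pixel depth proxy: for each candidate refocus parameter,
    measure how well the shifted sub-aperture views agree (low
    variance = in focus) and keep the best-matching alpha."""
    U, V, H, W = lightfield.shape
    best_var = np.full((H, W), np.inf)
    best_alpha = np.zeros((H, W))
    for alpha in alphas:
        views = []
        for u in range(U):
            for v in range(V):
                du = int(round(alpha * (u - U // 2)))
                dv = int(round(alpha * (v - V // 2)))
                views.append(np.roll(lightfield[u, v], (du, dv),
                                     axis=(0, 1)))
        # Variance across the aligned views; small where the
        # candidate alpha matches the true scene depth.
        var = np.var(np.stack(views), axis=0)
        mask = var < best_var
        best_var[mask] = var[mask]
        best_alpha[mask] = alpha
    return best_alpha
```

The winning `alpha` at each pixel is a stand-in for depth; real systems convert it to metric distance using the calibrated camera geometry.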

"It could enable various types of artificially intelligent technology to understand how far away objects are, whether they're moving and what they're made of," Wetzstein said. "This system could be helpful in any situation where you have limited space and you want the computer to understand the entire world around it."

And while this camera can work like a conventional camera at far distances, it is also designed to improve close-up images. Examples where it would be particularly useful include robots that have to navigate through small areas, landing drones and self-driving cars. As part of an augmented or virtual reality system, its depth information could result in more seamless renderings of real scenes and support better integration between those scenes and virtual components.

The camera is currently at the proof-of-concept stage, and the team is planning to create a compact prototype to test on a robot.

More information: Technical paper: www.computationalimaging.org/w … 04/LFMonocentric.pdf
