Stanford researchers developing 3-D camera with 12,616 lenses

Mar 19, 2008
Image: The testing platform for the multi-aperture image sensor chip.

The camera you own has one main lens and produces a flat, two-dimensional photograph, whether you hold it in your hand or view it on your computer screen. On the other hand, a camera with two lenses (or two cameras placed apart from each other) can take more interesting 3-D photos.

But what if your digital camera saw the world through thousands of tiny lenses, each a miniature camera unto itself? You'd get a 2-D photo, but you'd also get something potentially more valuable: an electronic "depth map" containing the distance from the camera to every object in the picture, a kind of super 3-D.

Stanford electronics researchers, led by electrical engineering Professor Abbas El Gamal, are developing such a camera, built around their "multi-aperture image sensor." They've shrunk the pixels on the sensor to 0.7 microns, several times smaller than the pixels in standard digital cameras. They've grouped the pixels in arrays of 256 pixels each, and they're preparing to place a tiny lens atop each array.

"It's like having a lot of cameras on a single chip," said Keith Fife, a graduate student working with El Gamal and another electrical engineering professor, H.-S. Philip Wong. In fact, if their prototype 3-megapixel chip had all its micro lenses in place, they would add up to 12,616 "cameras."

Point such a camera at someone's face, and it would, in addition to taking a photo, precisely record the distances to the subject's eyes, nose, ears, chin, etc. One obvious potential use of the technology: facial recognition for security purposes.

But there are a number of other possibilities for a depth-information camera: biological imaging, 3-D printing, creation of 3-D objects or people to inhabit virtual worlds, or 3-D modeling of buildings.

The technology is expected to produce a photo in which almost everything, near or far, is in focus. But it would be possible to selectively defocus parts of the photo after the fact, using editing software on a computer.

Knowing the exact distance to an object might give robots better spatial vision than humans and allow them to perform delicate tasks now beyond their abilities. "People are coming up with many things they might do with this," Fife said. The three researchers published a paper on their work in the February edition of the IEEE ISSCC Digest of Technical Papers.

Their multi-aperture camera would look and feel like an ordinary camera, or even a smaller cell phone camera. The cell phone aspect is important, Fife said, given that "the majority of the cameras in the world are now on phones."

Here's how it works:

The main lens (also known as the objective lens) of an ordinary digital camera focuses its image directly on the camera's image sensor, which records the photo. The objective lens of the multi-aperture camera, on the other hand, focuses its image about 40 microns (a micron is a millionth of a meter) above the image sensor arrays. As a result, any point in the photo is captured by at least four of the chip's mini-cameras, producing overlapping views, each from a slightly different perspective, just as the left eye of a human sees things differently than the right eye.
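The researchers' reconstruction algorithm isn't described here, but the underlying principle is the one familiar from stereo vision: a nearby point shifts more between neighboring sub-images than a distant one, and that shift (the "disparity") can be converted into a distance. Below is a minimal sketch of the idea in Python; the function name and every numeric value are illustrative assumptions, not the chip's actual parameters or method.

def depth_from_disparity(disparity_px, baseline_um=50.0, focal_um=40.0, pixel_um=0.7):
    # Classic stereo relation: depth = focal_length * baseline / disparity.
    # baseline_um is an assumed spacing between two neighboring micro lenses;
    # pixel_um matches the article's 0.7-micron pixel pitch.
    disparity_um = disparity_px * pixel_um
    if disparity_um == 0:
        return float("inf")  # no shift between views: point is effectively at infinity
    return focal_um * baseline_um / disparity_um

# A 4-pixel shift between two neighboring sub-images, under these toy numbers:
print(depth_from_disparity(4))  # about 714 microns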

The outcome is a detailed depth map, invisible in the photograph itself but electronically stored along with it. It's a virtual model of the scene, ready for manipulation by computation. "You can choose to do things with that image that you weren't able to do with the regular 2-D image," Fife said. "You can say, 'I want to see only the objects at this distance,' and suddenly they'll appear for you. And you can wipe away everything else."
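A small sketch of that kind of post-capture selection, assuming only that the camera hands back an RGB image and a per-pixel depth map as arrays (the shapes, units and threshold below are made up for illustration, not part of the researchers' software):

import numpy as np

image = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in RGB photo
depth = np.random.uniform(0.5, 10.0, (480, 640))                  # stand-in depth map, metres

# "Show only the objects at this distance": keep pixels within a band around
# 2 metres and wipe everything else to black.
target, tolerance = 2.0, 0.25
mask = np.abs(depth - target) < tolerance
isolated = np.where(mask[..., None], image, 0).astype(np.uint8)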

Or the sensor could be deployed naked, with no objective lens at all. Placed very close to an object, each micro lens would take its own photo of the scene, no main lens required. It has been suggested that a very small probe could be placed against the brain of a laboratory mouse, for example, to detect the location of neural activity.

Other researchers are pursuing similar depth-map goals via different approaches. Some use intelligent software to inspect ordinary 2-D photos for the edges, shadows or focus differences from which the distances of objects can be inferred. Others have tried cameras with multiple lenses, or prisms mounted in front of a single camera lens. One approach employs lasers; another attempts to stitch together photos taken from different angles, while yet another involves video shot from a moving camera.

But El Gamal, Fife and Wong believe their multi-aperture sensor has some key advantages. It's small and doesn't require lasers, bulky camera gear, multiple photos or complex calibration. And it has excellent color quality. Each of the 256 pixels in a given array detects the same color. In an ordinary digital camera, red pixels may be arranged next to green pixels, leading to undesirable "crosstalk" between the pixels that degrades color.

The sensor also can take advantage of smaller pixels in a way that an ordinary digital camera cannot, El Gamal said, because camera lenses are nearing the optical limit of the smallest spot they can resolve. Using a pixel smaller than that spot will not produce a better photo. But with the multi-aperture sensor, smaller pixels produce even more depth information, he said.
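For context (a standard optics rule of thumb, not a number from the researchers): the diffraction-limited spot diameter is roughly 2.44 × λ × N for wavelength λ and lens f-number N, so green light (λ ≈ 0.55 micron) through an f/2.8 lens gives a smallest resolvable spot of about 2.44 × 0.55 × 2.8 ≈ 3.8 microns, several times larger than the sensor's 0.7-micron pixels.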

The technology also may aid the quest for the huge photos possible with a gigapixel camera—that's 140 times as many pixels as today's typical 7-megapixel cameras. The first benefit of the Stanford technology is straightforward: Smaller pixels mean more pixels can be crowded onto the chip.

The second benefit involves chip architecture. With a billion pixels on one chip, some of them are sure to go bad, leaving dead spots, El Gamal said. But the overlapping views provided by the multi-aperture sensor provide backups when pixels fail.
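A minimal illustration of how that redundancy might be used, assuming each scene point is sampled by several overlapping sub-images and that dead pixels can be flagged; the values and the simple averaging rule are assumptions, not the researchers' reconstruction pipeline:

import numpy as np

samples = np.array([212.0, 208.0, 0.0, 210.0])   # one reading per overlapping sub-image
alive   = np.array([True,  True,  False, True])  # the third pixel is dead

# Reconstruct the value from the surviving samples only.
value = samples[alive].mean() if alive.any() else 0.0
print(value)  # 210.0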

The researchers are now working out the manufacturing details of fabricating the micro-optics onto a camera chip.

The finished product may cost less than existing digital cameras, the researchers say, because the quality of a camera's main lens will no longer be of paramount importance. "We believe that you can reduce the complexity of the main lens by shifting the complexity to the semiconductor," Fife said.

Source: Stanford University

User comments: 7

earls
3 / 5 (2) Mar 19, 2008
I wish there were photo results to check out. I guess you'd need a 3D engine or something similar to view them though.

Nice C/P job. ;)
gopher65
2 / 5 (2) Mar 19, 2008
I think the initial idea is more of a topographical map of an object earls. For things like better Biometric Identification, and easy creation of 3D computer models from physical models. But you're right, I don't see any reason why those maps couldn't be converted into 3D models that a game-like graphics engine could run.
a_n_k_u_r
5 / 5 (3) Mar 20, 2008
If the idea is to have 3D picture, probably 2 or 3 lenses would have sufficed -- I expect that they too would capture distance from each point. Why are 12,616 lenses required? Now, of course, there must be some reason that these guys had. But that needs to be brought out in the article.
cybrbeast
5 / 5 (1) Mar 20, 2008
This already works with nine lenses. Everything in focus
http://www.youtub...ZGaw7rWY
gopher65
2 / 5 (1) Mar 20, 2008
As I said, it's for things that require very precise distance measurements, like biometric identification. You *could* use this for normal 3D imaging, but it seems like overkill to use a camera like this when lesser models would do.
GBogumil
not rated yet Mar 21, 2008
one issue with porting to a 3d engine is that each photo is from only one perspective.. so when you move around you don't always have the information from that perspective
Falcon
not rated yet Oct 03, 2008
Quoting a_n_k_u_r: "If the idea is to have 3D picture, probably 2 or 3 lenses would have sufficed -- I expect that they too would capture distance from each point. Why are 12,616 lenses required? Now, of course, there must be some reason that these guys had. But that needs to be brought out in the article."
Could it be used as a kind of sonar in space?
