A faster single-pixel camera: New technique greatly reduces the number of exposures necessary for 'lensless imaging'

March 30, 2017 by Larry Hardesty
Researchers from the MIT Media Lab have developed a new technique that makes image acquisition using compressed sensing 50 times as efficient. In the case of the single-pixel camera, it could get the number of exposures down from thousands to dozens. Examples of this compressive ultrafast imaging technique are shown in the bottom rows. Credit: Massachusetts Institute of Technology

Compressed sensing is an exciting new computational technique for extracting large amounts of information from a signal. In one high-profile demonstration, for instance, researchers at Rice University built a camera that could produce 2-D images using only a single light sensor rather than the millions of light sensors found in a commodity camera.

But using compressed sensing for image acquisition is inefficient: That "single-pixel camera" needed thousands of exposures to produce a reasonably clear image. Reporting their results in the journal IEEE Transactions on Computational Imaging, researchers from the MIT Media Lab now describe a new technique that makes image acquisition using compressed sensing 50 times as efficient. In the case of the single-pixel camera, it could get the number of exposures down from thousands to dozens.

One intriguing aspect of compressed-sensing imaging systems is that, unlike conventional cameras, they don't require lenses. That could make them useful in harsh environments or in applications that use wavelengths outside the visible spectrum. Getting rid of the lens opens new prospects for the design of imaging systems.

"Formerly, imaging required a lens, and the lens would map pixels in space to in an array, with everything precisely structured and engineered," says Guy Satat, a graduate student at the Media Lab and first author on the new paper.  "With computational imaging, we began to ask: Is a lens necessary?  Does the sensor have to be a structured array? How many pixels should the sensor have? Is a single pixel sufficient? These questions essentially break down the fundamental idea of what a camera is.  The fact that only a single pixel is required and a lens is no longer necessary relaxes major design constraints, and enables the development of novel imaging systems. Using ultrafast sensing makes the measurement significantly more efficient."  

Recursive applications

One of Satat's coauthors on the new paper is his thesis advisor, associate professor of media arts and sciences Ramesh Raskar. Like many projects from Raskar's group, the new compressed-sensing technique depends on time-of-flight imaging, in which a short burst of light is projected into a scene, and ultrafast sensors measure how long the light takes to reflect back.

The new technique depends on time-of-flight imaging, but, somewhat circularly, one of its potential applications is improving the performance of time-of-flight cameras. It could thus have implications for a number of other projects from Raskar's group, such as a camera that can see around corners and visible-light imaging systems for medical diagnosis and vehicular navigation.

Many prototype systems from Raskar's Camera Culture group at the Media Lab have used time-of-flight cameras called streak cameras, which are expensive and difficult to use: They capture only one row of image pixels at a time. But the past few years have seen the advent of commercial time-of-flight cameras called SPADs, for single-photon avalanche diodes.

Though not nearly as fast as streak cameras, SPADs are still fast enough for many time-of-flight applications, and they can capture a full 2-D image in a single exposure. Furthermore, their sensors are built using manufacturing techniques common in the computer chip industry, so they should be cost-effective to mass produce.

With SPADs, the electronics required to drive each sensor pixel take up so much space that the pixels end up far apart from each other on the sensor chip. In a conventional camera, that wide spacing limits the resolution. But with compressed sensing, the wide spacing actually increases resolution.

Getting some distance

The reason the single-pixel camera can make do with one light sensor is that the light that strikes it is patterned. One way to pattern light is to put a filter, kind of like a randomized black-and-white checkerboard, in front of the flash illuminating the scene. Another way is to bounce the returning light off of an array of tiny micromirrors, some of which are aimed at the sensor and some of which aren't.

The sensor makes only a single measurement—the cumulative intensity of the incoming light. But if it repeats the measurement enough times, and if the light has a different pattern each time, software can deduce the intensities of the light reflected from individual points in the scene.
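
That deduction is a sparse-recovery problem of the kind compressed sensing solves. The sketch below, in Python, shows the idea on synthetic data; the 64-pixel scene, the random 0/1 masks, and the ISTA solver are illustrative assumptions, not the specifics of the Rice camera or the MIT paper.

```python
import numpy as np

# Minimal single-pixel-camera sketch (illustrative assumptions:
# a 64-pixel scene, random 0/1 light masks, ISTA as the solver).
rng = np.random.default_rng(0)
n = 64          # scene pixels to recover
m = 32          # patterned exposures -- fewer than pixels

# Sparse ground truth: a few bright points on a dark background.
x_true = np.zeros(n)
x_true[rng.choice(n, size=5, replace=False)] = 1.0

# Each exposure: a random mask patterns the light, and the single
# sensor records one number, the cumulative reflected intensity.
A = rng.integers(0, 2, size=(m, n)).astype(float)
y = A @ x_true

# ISTA (iterative shrinkage-thresholding) for the sparse inverse
# problem: minimize 0.5*||A x - y||^2 + lam*||x||_1.
lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(3000):
    x = x - step * (A.T @ (A @ x - y))                       # gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # shrinkage

print("reconstruction error:", round(float(np.linalg.norm(x - x_true)), 3))
```

In this toy setup, half as many exposures as pixels recover the sparse scene closely; a single sensor scanning the scene point by point would instead need one exposure per pixel.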

The single-pixel camera was a media-friendly demonstration, but in fact, compressed sensing works better the more pixels the sensor has. And the farther apart the pixels are, the less redundancy there is in the measurements they make, much the way you see more of the visual scene before you if you take two steps to your right rather than one. And, of course, the more measurements the sensor performs, the higher the resolution of the reconstructed image.

Economies of scale

Time-of-flight imaging essentially turns one measurement—with one light pattern—into dozens of measurements, separated by trillionths of seconds. Moreover, each measurement corresponds with only a subset of pixels in the final image—those depicting objects at the same distance. That means there's less information to decode in each measurement.
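
A rough simulation makes the point; the depth binning and the numbers below are assumptions for illustration, not the measurement model in the paper:

```python
import numpy as np

# Time-of-flight sketch (illustrative assumptions, not the authors'
# model): one patterned flash yields a separate reading per
# arrival-time bin, and each bin "sees" only the scene pixels at
# the corresponding distance.
rng = np.random.default_rng(1)
n, n_bins = 64, 8

depth_bin = rng.integers(0, n_bins, size=n)      # depth of each scene pixel
x_true = rng.random(n)                           # unknown reflectances
mask = rng.integers(0, 2, size=n).astype(float)  # one light pattern

# A single exposure now produces n_bins numbers instead of one:
# bin b sums the masked intensities of the pixels at depth b.
y = np.array([np.sum(mask[depth_bin == b] * x_true[depth_bin == b])
              for b in range(n_bins)])

# Each reading constrains only ~n/n_bins unknowns, so far fewer
# patterned exposures are needed overall.
print("unknowns per time bin:",
      [int(np.sum(depth_bin == b)) for b in range(n_bins)])
print("readings from one exposure:", y.round(2))
```

Each time bin thus behaves like a smaller compressed-sensing problem over only the pixels at that depth, which is where the efficiency gain over a purely spatial single-pixel measurement comes from.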

In their paper, Satat, Raskar, and Matthew Tancik, an MIT graduate student in electrical engineering and computer science, present a theoretical analysis of compressed sensing that uses time-of-flight information. Their analysis shows how efficiently the technique can extract information about a visual scene, at different resolutions and with different numbers of sensors and distances between them.

They also describe a procedure for computing light patterns that minimizes the number of exposures. And, using synthetic data, they compare the performance of their reconstruction algorithm to that of existing compressed-sensing algorithms. But in ongoing work, they are developing a prototype of the system so that they can test their algorithm on real data.

More information: Guy Satat et al., "Lensless Imaging with Compressive Ultrafast Sensing," IEEE Transactions on Computational Imaging (2017). DOI: 10.1109/TCI.2017.2684624
