Computational method improves the resolution of time-of-flight depth sensors 1,000-fold

A comparison of the cascaded GHz approach with Kinect-style approaches, visually represented on a key. From left to right: the original image, a Kinect-style approach, a GHz approach, and a stronger GHz approach. Credit: Massachusetts Institute of Technology

For the past 10 years, the Camera Culture group at MIT's Media Lab has been developing innovative imaging systems—from a camera that can see around corners to one that can read text in closed books—by using "time of flight," an approach that gauges distance by measuring the time it takes light projected into a scene to bounce back to a sensor.

In a new paper appearing in IEEE Access, members of the Camera Culture group present a new approach to time-of-flight imaging that increases its depth resolution 1,000-fold. That's the type of resolution that could make self-driving cars practical.

The new approach could also enable accurate distance measurements through fog, which has proven to be a major obstacle to the development of self-driving cars.

At a range of 2 meters, existing time-of-flight systems have a depth resolution of about a centimeter. That's good enough for the assisted-parking and collision-detection systems on today's cars.

But as Achuta Kadambi, a joint PhD student in electrical engineering and computer science and media arts and sciences and first author on the paper, explains, "As you increase the range, your resolution goes down exponentially. Let's say you have a long-range scenario, and you want your car to detect an object further away so it can make a fast update decision. You may have started at 1 centimeter, but now you're back down to [a resolution of] a foot or even 5 feet. And if you make a mistake, it could lead to loss of life."

At distances of 2 meters, the MIT researchers' system, by contrast, has a depth resolution of 3 micrometers. Kadambi also conducted tests in which he sent a signal through 500 meters of optical fiber with regularly spaced filters along its length, to simulate the power falloff incurred over longer distances, before feeding it to his system. Those tests suggest that at a range of 500 meters, the MIT system should still achieve a depth resolution of only a centimeter.

Kadambi is joined on the paper by his thesis advisor, Ramesh Raskar, an associate professor of media arts and sciences and head of the Camera Culture group.

Slow uptake

With time-of-flight imaging, a short burst of light is fired into a scene, and a camera measures the time it takes to return, which indicates the distance of the object that reflected it. The longer the light burst, the more ambiguous the measurement of how far it's traveled. So light-burst length is one of the factors that determines system resolution.

The other factor, however, is detection rate. Modulators, which turn a light beam off and on, can switch a billion times a second, but today's detectors can make only about 100 million measurements a second. Detection rate is what limits existing time-of-flight systems to centimeter-scale resolution.
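
As a back-of-the-envelope illustration of the numbers involved (chosen for the example, not taken from the paper), round-trip time converts to distance as d = ct/2, and timing uncertainty converts to depth uncertainty the same way. A minimal Python sketch:

```python
# Back-of-the-envelope time-of-flight arithmetic (illustrative only; real
# systems estimate arrival times far more cleverly than a single sample).
C = 299_792_458.0  # speed of light, m/s

def distance_from_round_trip(t_seconds):
    """Distance to the reflector implied by a measured round-trip time."""
    return C * t_seconds / 2.0

def depth_uncertainty(timing_uncertainty_s):
    """Depth uncertainty implied by a given timing uncertainty."""
    return C * timing_uncertainty_s / 2.0

print(distance_from_round_trip(13.3e-9))  # ~2 m target: a 13.3 ns round trip
print(depth_uncertainty(10e-9))           # 10 ns between samples -> 1.5 m
print(depth_uncertainty(67e-12))          # ~1 cm depth needs ~67 ps timing
```

Taken at face value, centimeter-scale depth requires timing known to tens of picoseconds, which is why the gap between gigahertz modulators and 100-megahertz detectors matters.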

There is, however, another imaging technique that enables higher resolution, Kadambi says. That technique is interferometry, in which a light beam is split in two, and half of it is kept circulating locally while the other half—the "sample beam"—is fired into a visual scene. The reflected sample beam is recombined with the locally circulated light, and the difference in phase between the two beams—the relative alignment of the troughs and crests of their electromagnetic waves—yields a very precise measure of the distance the sample beam has traveled.
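
To get a feel for why optical phase is such a fine ruler, here is a minimal sketch assuming a 1,550-nanometer laser and a one-degree phase reading; both numbers are illustrative, not parameters of the MIT system. A phase difference Δφ corresponds to a path-length difference of (Δφ/2π)·λ.

```python
import math

# Illustrative only: path-length change implied by an optical phase reading.
# The 1550 nm wavelength and 1-degree phase resolution are assumptions.
WAVELENGTH_M = 1550e-9  # a common telecom laser wavelength

def path_difference(phase_rad, wavelength_m=WAVELENGTH_M):
    """Path-length difference corresponding to a measured phase difference."""
    return (phase_rad / (2.0 * math.pi)) * wavelength_m

print(path_difference(math.radians(1.0)))  # ~4.3e-9 m: nanometer sensitivity
```

Even a coarse one-degree phase reading resolves a few nanometers of path change, which is why interferometry can reach micrometer-scale depth where pulse timing cannot.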

But interferometry requires careful synchronization of the two light beams. "You could never put interferometry on a car because it's so sensitive to vibrations," Kadambi says. "We're using some ideas from interferometry and some of the ideas from LIDAR, and we're really combining the two here."

On the beat

They're also, he explains, using some ideas from acoustics. Anyone who's performed in a musical ensemble is familiar with the phenomenon of "beating." If two singers, say, are slightly out of tune—one producing a pitch at 440 hertz and the other at 437 hertz—the interplay of their voices will produce another tone, whose frequency is the difference between those of the notes they're singing—in this case, 3 hertz.
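
A quick numerical check of that arithmetic, purely for illustration:

```python
import numpy as np

# Toy check of acoustic beating with the article's numbers: 440 Hz and 437 Hz.
# The summed signal fades in and out at the difference frequency, 3 Hz.
def two_tone(t):
    return np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 437 * t)

print("beat frequency:", abs(440 - 437), "Hz")

# At t = 1/6 s the two tones are half a cycle apart and cancel; a sixth of a
# second later they are back in step and reinforce.
t_quiet = np.linspace(1/6 - 0.002, 1/6 + 0.002, 200)
t_loud = np.linspace(1/3 - 0.002, 1/3 + 0.002, 200)
print(np.max(np.abs(two_tone(t_quiet))))  # ~0.04: nearly silent
print(np.max(np.abs(two_tone(t_loud))))   # ~2.0: full volume
```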

The same is true with light pulses. If a time-of-flight imaging system is firing light into a scene at the rate of a billion pulses a second, and the returning light is combined with light pulsing 999,999,999 times a second, the result will be a light signal pulsing once a second—a rate easily detectable with a commodity video camera. And that slow "beat" will contain all the phase information necessary to gauge distance.

But rather than try to synchronize two high-frequency light signals—as interferometry systems must—Kadambi and Raskar simply modulate the returning signal, using the same technology that produced it in the first place. That is, they pulse the already pulsed light. The result is the same, but the approach is much more practical for automotive systems.
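
Below is a minimal toy simulation of that cascaded-modulation idea. The frequencies are scaled far below the gigahertz rates the group actually uses so it runs instantly, and every number in it is an assumption for illustration rather than a parameter from the paper: the returning signal, delayed by the round trip, is multiplied by a slightly detuned modulation, and the phase of the resulting slow beat encodes the distance.

```python
import numpy as np

C = 299_792_458.0            # speed of light, m/s

# Toy heterodyne sketch with scaled-down, illustrative frequencies.
f_tx = 10_000.0              # modulation frequency of the outgoing light
f_ref = 9_999.0              # slightly detuned re-modulation frequency
f_beat = f_tx - f_ref        # 1 Hz beat, easy for a slow detector to follow

fs = 200_000                 # simulation sample rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)   # one full beat period

true_distance = 3_000.0      # meters, within the unambiguous range c / (2 * f_tx)
tau = 2.0 * true_distance / C     # round-trip delay

returned = np.cos(2 * np.pi * f_tx * (t - tau))      # delayed returning modulation
remixed = returned * np.cos(2 * np.pi * f_ref * t)   # re-modulate at the detuned rate

# The product contains a slow term at f_beat whose phase is 2*pi*f_tx*tau.
# Correlating against quadrature references at f_beat extracts that phase;
# the fast sum-frequency term averages away over the one-second window.
i_comp = np.mean(remixed * np.cos(2 * np.pi * f_beat * t))
q_comp = np.mean(remixed * np.sin(2 * np.pi * f_beat * t))
beat_phase = np.arctan2(q_comp, i_comp)

recovered = (beat_phase / (2 * np.pi)) * C / (2 * f_tx)
print(f"true distance:      {true_distance:.1f} m")
print(f"recovered distance: {recovered % (C / (2 * f_tx)):.1f} m")
```

The only point of the sketch is that the slow beat's phase preserves the round-trip delay, so a detector far too slow to follow the fast modulation can still read out timing set by it; phase wrapping and the real optics are not modeled here.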

"The fusion of the optical coherence and electronic coherence is very unique," Raskar says. "We're modulating the light at a few gigahertz, so it's like turning a flashlight on and off millions of times per second. But we're changing that electronically, not optically. The combination of the two is really where you get the power for this system."

Through the fog

Gigahertz optical systems are naturally better at compensating for fog than lower-frequency systems. Fog is problematic for time-of-flight systems because it scatters light: It deflects the returning light signals so that they arrive late and at odd angles. Trying to isolate a true signal in all that noise is too computationally challenging to do on the fly.

With low-frequency systems, scattering causes a slight shift in phase, one that simply muddies the signal that reaches the detector. But with high-frequency systems, the phase shift is much larger relative to the frequency of the signal. Scattered light signals arriving over different paths will actually cancel each other out: The troughs of one wave will align with the crests of another. Theoretical analyses performed at the University of Wisconsin and Columbia University suggest that this cancellation will be widespread enough to make identifying a true signal much easier.
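
A toy model makes the cancellation argument concrete (this is not the Wisconsin or Columbia analysis, and the fog path lengths are an assumption): sum unit-strength scattered returns whose extra path lengths range from 0.3 to 3 meters, once at a low modulation frequency and once at 1 GHz.

```python
import numpy as np

C = 299_792_458.0
rng = np.random.default_rng(0)

def scattered_sum(mod_freq_hz, n_paths=10_000):
    """Magnitude of the averaged phasor from many scattered returns whose
    extra path lengths are drawn uniformly from 0.3-3 m (assumed fog model)."""
    extra_path = rng.uniform(0.3, 3.0, n_paths)        # meters of detour
    phases = 2 * np.pi * mod_freq_hz * extra_path / C  # extra modulation phase
    return np.abs(np.sum(np.exp(1j * phases))) / n_paths

# At 10 MHz the detours barely shift the phase, so scattered light piles up
# into a strong spurious signal; at 1 GHz the phases wrap many times over and
# the scattered contributions largely cancel, leaving the true return easier
# to pick out.
print("10 MHz:", scattered_sum(10e6))  # ~0.99: scattered light adds up
print(" 1 GHz:", scattered_sum(1e9))   # ~0.01: scattered light cancels
```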



More information: Rethinking machine vision time of flight with GHz heterodyning web.media.mit.edu/~achoo/beat/ … ambi_beat_lowres.pdf


User comments

Dec 21, 2017
Very nice work. I can definitely see this being handled with a single chip solution which will also make these detectors pretty inexpensive.

Dec 21, 2017
What happens when another such lidar is also pulsing light into the scene?

How about twenty?

Dec 22, 2017
@Eikka: And what happens when a hundred photographers flash the same subject? Absolutely nothing; some photos get burned, but everyone will have the photo they need to publish.

Dec 29, 2017
"And what happens when a hundred photographers flash the same subject? Absolutely nothing"


Not the same thing, as cameras are not measuring the time of flight of the flash. Cameras are passive sensors - not active radars that use self-correlation to arrive at the result.

"Absolutely nothing; some photos get burned, but everyone will have the photo they need to publish"


If a hundred photographers happen to flash the same subject at the same time, they all get an over-exposed image. Nevertheless, if some get the picture and some don't, that doesn't mean a correct picture is available to each of them. If these photographers were cars, some would crash, and everyone would end up in the pile anyway.

Obviously, the signal resolution must degrade with added noise in the measurement.

Dec 29, 2017
In a practical situation, if you have a DSLR, it will have a rolling shutter curtain, and the flash is timed to the moment when the curtain is all the way open. Common cellphone cameras and other consumer cameras have CMOS sensors which operate in a similar way.

If other people are flashing randomly, you not only get an over-exposed frame, you also get shadows of the shutter curtain superimposed on the picture from those flashes that happen when the curtain was just opening or closing.

The effect looks like this:
http://cache.love...Time.jpg

It's ruined. You can't photoshop that away.

Dec 29, 2017
Whether this makes it onto my future car or not, I think this will have a lot of other applications.
