Researchers use quantum-inspired approach to increase LiDAR resolution
Researchers have shown that a quantum-inspired technique can be used to perform LiDAR imaging with a much higher depth resolution than is possible with conventional approaches. LiDAR, which uses laser pulses to acquire 3D information about a scene or object, is usually best suited for imaging large objects such as topographical features or built structures due to its limited depth resolution.
"Although LiDAR can be used to image the overall shape of a person, it typically doesn't capture finer details such as facial features," said research team leader Ashley Lyons from the University of Glasgow in the United Kingdom. "By adding extra depth resolution, our approach could capture enough detail to not only see facial features but even someone's fingerprints."
In Optics Express, Lyons and first author Robbie Murray describe the new technique, which they call imaging two-photon interference LiDAR. They show that it can distinguish reflective surfaces less than 2 millimeters apart and create high-resolution 3D images with micron-scale resolution.
"This work could lead to much higher resolution 3D imaging than is possible now, which could be useful for facial recognition and tracking applications that involve small features," said Lyons. "For practical use, conventional LiDAR could be used to get a rough idea of where an object might be and then the object could be carefully measured with our method."
Using classically entangled light
The new technique uses "quantum inspired" interferometry, which extracts information from the way that two light beams interfere with each other. Entangled pairs of photons—or quantum light—are often used for this type of interferometry, but approaches based on photon entanglement tend to perform poorly in situations with high levels of light loss, which is almost always the case for LiDAR. To overcome this problem, the researchers applied what they've learned from quantum sensing to classical (non-quantum) light.
"With quantum entangled photons, only so many photon pairs per unit time can be generated before the setup becomes very technically demanding," said Lyons. "These problems don't exist with classical light, and it is possible to get around the high losses by turning up the laser power."
When two indistinguishable photons arrive at a beam splitter at the same time, they always bunch together and leave through the same output port, a quantum effect known as Hong-Ou-Mandel interference. Classical light shows the same behavior, but to a lesser degree: most of the time, classical photons exit in the same direction. The researchers used this property of classical light to very precisely time the arrival of one photon by looking at when two photons arrive simultaneously at the detectors.
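The bunching behavior can be captured by the standard Gaussian two-photon interference dip: the coincidence rate at the beam splitter's two outputs drops as the relative delay between the photons approaches zero, all the way to zero for indistinguishable single photons but only to half the background for classical light. A minimal numerical sketch of this model (the 100 fs coherence time and other parameters are illustrative, not values from the paper):

```python
import numpy as np

def coincidence_rate(delay_fs, visibility, coherence_fs=100.0):
    """Coincidence probability at the two beam-splitter outputs as a
    function of relative photon delay (Gaussian dip model).
    visibility = 1.0 for indistinguishable photons, 0.5 for classical light."""
    return 0.5 * (1.0 - visibility * np.exp(-(delay_fs / coherence_fs) ** 2))

delays = np.linspace(-500, 500, 2001)  # relative delay, femtoseconds
quantum = coincidence_rate(delays, visibility=1.0)    # dips to 0
classical = coincidence_rate(delays, visibility=0.5)  # dips to 0.25

# The dip minimum marks zero relative delay, i.e. simultaneous arrival;
# scanning a reference delay against the return beam times the photon.
zero_delay = delays[np.argmin(classical)]
print(zero_delay)  # 0.0 (the dip is centered at zero delay)
```

Locating the center of this dip is what provides the femtosecond-scale timing; the narrower the light's coherence time, the sharper the dip and the finer the timing.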
Enhancing depth resolution
"The time information gives us the ability to perform depth ranging by sending one of those photons out onto the 3D scene and then timing how long it takes for that photon to come back," said Lyons. "Thus, two-photon interference LiDAR works much like conventional LiDAR but allows us to more precisely time how long it takes for that photon to reach the detector, which directly translates into greater depth resolution."
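The link between timing precision and depth resolution is the round-trip time of flight: the photon covers the range twice, so depth = c·Δt/2. A quick back-of-the-envelope sketch (the 100 ps detector jitter is a typical illustrative figure, not a value from the paper; the 7-micron figure is the resolution the researchers report):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def depth_resolution(timing_precision_s):
    """Depth resolution from round-trip timing: light travels the
    range out and back, so depth = c * dt / 2."""
    return C * timing_precision_s / 2.0

# ~100 ps timing jitter (common for single-photon detectors)
# gives centimetre-scale depth resolution:
print(depth_resolution(100e-12))  # ~0.015 m

# Conversely, the reported 7-micron resolution implies an effective
# timing precision of roughly 47 femtoseconds:
dt = 2 * 7e-6 / C
print(dt)  # ~4.7e-14 s
```

This is why interferometric timing matters: shaving timing uncertainty from picoseconds to tens of femtoseconds moves the depth resolution from centimeters to microns.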
The researchers demonstrated the high depth resolution of two-photon interference LiDAR by using it to detect the two reflective surfaces of a piece of glass about 2 millimeters thick. Traditional LiDAR cannot distinguish surfaces this close together, but the new method clearly resolved both. They also used the method to create a detailed 3D map of a 20-pence coin with 7-micron depth resolution, showing that it could capture the level of detail necessary to differentiate key facial features or tell individuals apart.
Two-photon interference LiDAR also works very well at the single-photon level, which could enhance more complex imaging approaches used for non-line-of-sight imaging or imaging through highly scattering media.
Currently, acquiring the images takes a long time because it requires scanning across all three spatial dimensions. The researchers are working to make this process faster by reducing the amount of scanning necessary to acquire 3D information.