Trillion-frame-per-second video

Dec 13, 2011 by Larry Hardesty
Media Lab postdoc Andreas Velten, left, and Associate Professor Ramesh Raskar with the experimental setup they used to produce slow-motion video of light scattering through a plastic bottle. Photo: M. Scott Brauer

By using optical equipment in a totally unexpected way, MIT researchers have created an imaging system that makes light look slow.

MIT researchers have created a new imaging system that can acquire visual data at a rate of one trillion exposures per second. That’s fast enough to produce a slow-motion video of a burst of light traveling the length of a one-liter bottle, bouncing off the cap and reflecting back to the bottle’s bottom.

Media Lab postdoc Andreas Velten, one of the system’s developers, calls it the “ultimate” in slow motion: “There’s nothing in the universe that looks fast to this camera,” he says.

The system relies on a recent technology called a streak camera, deployed in a totally unexpected way. The aperture of the streak camera is a narrow slit. Particles of light — photons — enter the camera through the slit and pass through an electric field that deflects them in a direction perpendicular to the slit. Because the electric field is changing very rapidly, it deflects late-arriving photons more than it does early-arriving ones.

The image produced by the camera is thus two-dimensional, but only one of the dimensions — the one corresponding to the direction of the slit — is spatial. The other dimension, corresponding to the degree of deflection, is time. The image thus represents the time of arrival of photons passing through a one-dimensional slice of space.
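The time-to-space mapping described above can be sketched in a few lines of Python. This is an illustrative model, not the researchers' code; the pixel counts and sweep duration are assumed values:

```python
# Illustrative sketch: how a streak camera turns photon arrival time into
# a second image dimension. A linearly ramping deflection displaces
# later-arriving photons further, so one slit exposure yields an image
# whose axes are (slit position x arrival time).

import numpy as np

SLIT_PIXELS = 32   # spatial resolution along the slit (assumed value)
TIME_BINS = 100    # temporal resolution of the sweep (assumed value)
SWEEP_NS = 1.0     # duration of the deflection ramp, in nanoseconds

def streak_image(photons):
    """photons: list of (slit_position, arrival_time_ns) tuples.
    Returns a 2D histogram: rows = slit position, cols = time bin."""
    image = np.zeros((SLIT_PIXELS, TIME_BINS))
    for pos, t_ns in photons:
        # Deflection is proportional to arrival time within the sweep,
        # so the time bin doubles as the second image coordinate.
        time_bin = int(t_ns / SWEEP_NS * TIME_BINS)
        if 0 <= time_bin < TIME_BINS:
            image[pos, time_bin] += 1
    return image

# Two photons at the same slit position but different arrival times
# land in different columns of the streak image.
img = streak_image([(5, 0.10), (5, 0.90)])
```

Note that a single such exposure records only one spatial line of the scene, which is exactly the limitation the article describes next.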

Video: Melanie Gonick

The camera was intended for use in experiments where light passes through or is emitted by a chemical sample. Since chemists are chiefly interested in the wavelengths of light that a sample absorbs, or in how the intensity of the emitted light changes over time, the fact that the camera registers only one spatial dimension is irrelevant.

But it’s a serious drawback in a video camera. To produce their super-slow-mo videos, Velten, Media Lab Associate Professor Ramesh Raskar and Moungi Bawendi, the Lester Wolfe Professor of Chemistry, must perform the same experiment — such as passing a light pulse through a bottle — over and over, continually repositioning the streak camera to gradually build up a two-dimensional image. Synchronizing the camera and the laser that generates the pulse, so that the timing of every exposure is the same, requires a battery of sophisticated optical equipment and exquisite mechanical control. It takes only a nanosecond — a billionth of a second — for light to scatter through a bottle, but it takes about an hour to collect all the data necessary for the final video. For that reason, Raskar calls the new system “the world’s slowest fastest camera.”

Doing the math

After an hour, the researchers accumulate hundreds of thousands of data sets, each of which plots the one-dimensional positions of photons against their times of arrival. Raskar, Velten and other members of Raskar’s Camera Culture group at the Media Lab developed algorithms that can stitch that raw data into a set of sequential two-dimensional images.
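The stitching step above can be sketched as a toy model. The array shapes and data layout here are assumptions for illustration, not the Camera Culture group's actual algorithm: each repetition of the experiment records one scanline of the scene as a (slit × time) streak record, and regrouping the records by time bin yields one 2D frame per bin.

```python
# Hedged sketch of the stitching idea: stack the per-position streak
# records, then slice along the time axis to get sequential 2D frames.

import numpy as np

N_SCANLINES = 4   # camera positions (hundreds of thousands of records
                  # in practice; kept tiny here for illustration)
SLIT_PIXELS = 8
TIME_BINS = 16

# One streak record per camera position: shape (slit, time).
records = [np.random.rand(SLIT_PIXELS, TIME_BINS) for _ in range(N_SCANLINES)]

def assemble_frames(records):
    """Regroup per-position streak records into per-time-bin frames."""
    data = np.stack(records)         # (position, slit, time)
    return np.moveaxis(data, 2, 0)   # (time, position, slit): one frame per bin

frames = assemble_frames(records)
```

Playing the frames back in time-bin order is what produces the final slow-motion video.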

One of the things that distinguishes the researchers' new system from earlier high-speed imaging systems is that it can capture light 'scattering' below the surfaces of solid objects, such as the tomato depicted here. Image: Di Wu and Andreas Velten, MIT Media Lab

The streak camera and the laser that generates the light pulses — both cutting-edge devices with a cumulative price tag of $250,000 — were provided by Bawendi, a pioneer in research on quantum dots: tiny, light-emitting clusters of semiconductor particles that have potential applications in quantum computing, video-display technology, biological imaging, solar cells and a host of other areas.

The trillion-frame-per-second imaging system, which the researchers have presented both at the Optical Society's Computational Optical Sensing and Imaging conference and at Siggraph, is a spinoff of another Camera Culture project, a camera that can see around corners. That camera works by bouncing light off a reflective surface — say, the wall opposite a doorway — and measuring the time it takes different photons to return. But while both systems use ultrashort bursts of laser light and streak cameras, the arrangement of their other optical components and their reconstruction algorithms are tailored to their disparate tasks.
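The core measurement behind the around-the-corner camera — converting photon return times into distances — reduces to simple time-of-flight arithmetic. The following is an illustrative sketch, not the group's reconstruction algorithm:

```python
# Time-of-flight sketch: a photon's round-trip travel time gives the
# path length to a reflecting surface (illustrative only).

C_M_PER_NS = 0.299792458  # speed of light, meters per nanosecond

def path_length_m(round_trip_ns):
    """One-way path length for a photon that returned after round_trip_ns."""
    return C_M_PER_NS * round_trip_ns / 2.0

# A photon returning after ~6.67 ns traveled about 1 meter each way,
# which is why nanosecond-scale timing resolves room-scale geometry.
d = path_length_m(6.67)
```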

Because the ultrafast-imaging system requires multiple passes to produce its videos, it can’t record events that aren’t exactly repeatable. Any practical applications will probably involve cases where the way in which light scatters — or bounces around as it strikes different surfaces — is itself a source of useful information. Those cases may, however, include analyses of the physical structure of both manufactured materials and biological tissues — “like ultrasound with light,” as Raskar puts it.

As a longtime camera researcher, Raskar also sees a potential application in the development of better camera flashes. “An ultimate dream is, how do you create studio-like lighting from a compact flash? How can I take a portable camera that has a tiny flash and create the illusion that I have all these umbrellas, and sport lights, and so on?” asks Raskar, the NEC Career Development Associate Professor of Media Arts and Sciences. “With our ultrafast imaging, we can actually analyze how the photons are traveling through the world. And then we can recreate a new photo by creating the illusion that the photons started somewhere else.”

“It’s very interesting work. I am very impressed,” says Nils Abramson, a professor of applied holography at Sweden’s Royal Institute of Technology. In the late 1970s, Abramson pioneered a technique called light-in-flight holography, which ultimately proved able to capture images of light waves at a rate of 100 billion frames per second.

But as Abramson points out, his technique requires so-called coherent light, meaning that the troughs and crests of the light waves that produce the image have to line up with each other. “If you happen to destroy the coherence when the light is passing through different objects, then it doesn’t work,” Abramson says. “So I think it’s much better if you can use ordinary light, which Ramesh does.”

Indeed, Velten says, “As photons bounce around in the scene or inside objects, they lose coherence. Only an incoherent detection method like ours can see those photons.” And those photons, Velten says, could let researchers “learn more about the material properties of the objects, about what is under their surface and about the layout of the scene. Because we can see those photons, we could use them to look inside objects — for example, for medical imaging, or to identify materials.”

“I’m surprised that the method I’ve been using has not been more popular,” Abramson adds. “I’ve felt rather alone. I’m very glad that someone else is doing something similar. Because I think there are many interesting things to find when you can do this sort of study of the light itself.”


User comments: 30

MarkyMark
5 / 5 (6) Dec 13, 2011
Actually seeing the light waves....... So cool.
antialias_physorg
2.3 / 5 (3) Dec 13, 2011
This is awesome - and the applications are endless.
From rangefinding to characterization of chemical processes to visualizations of protein configurations.

Very impressive.

A way to get an even more super slo-mo might be to pass the light through a cold cloud of atoms to slow it down before it reaches the electric field at the slit.
Nanobanano
3.2 / 5 (5) Dec 13, 2011
This is pretty amazing.

Must require insane amounts of memory though, like even half of a billionth of a second would be 500 frames.

It's incredible how you can watch the wave-fronts of the light expanding as they move.

Certainly a win for the wave theory of light.
Bowler_4007
2 / 5 (4) Dec 13, 2011
How is this possible?

I thought light speed would have limited the fps to under 300 million.

Gotta admit though it is cool.
bugmenot23
1 / 5 (1) Dec 13, 2011
please don't tell me they really made this cool movie without removing the coca-cola label on the bottle....
Nanobanano
2.8 / 5 (5) Dec 13, 2011
How is this possible?

I thought light speed would have limited the fps to under 300 million.

Gotta admit though it is cool.


Light speed wouldn't limit the number of frames.

That's probably actually unlimited, except by the technology's ability to store the data in memory.

Photons can be leaving the source object at different fractions of a trillionth of a second, or reflect from an imperfection or a different surface a fraction of a trillionth of a second later, so they could arrive at a hypothetical camera offset by a ten-trillionth or a hundred-trillionth of a second.

Frames per second is probably only limited hypothetically at around 10^-44 seconds.

Of course, the problem would be designing something capable of detecting that, which would clearly require the absolute limits of nano-technology.
tai_eastpole_ca
not rated yet Dec 13, 2011
Why are we seeing the light? Has it been scattered by dust (or air!) along the path? It's kind of weird that the article reads as if (though it doesn't say) the photons are giving off photons :) -- if we are "seeing photons," how?

With visible laser light, the usual answer is "the light is scattered from the beam path by dust or water droplets" but there's no indication of that here.
Jack5
5 / 5 (6) Dec 13, 2011
What you're actually seeing there is not the wave nature of light at all. The repeated bands in the images are actually the pulse train emitted by the laser, and each bright band is made of a very large number of photons. Some of these photons are scattered towards the camera by the object being illuminated, which is how the image is made.
rawa1
1.6 / 5 (5) Dec 13, 2011
Why are we seeing the light?
My problem rather is that the travelling wavefront is at a different place than where we observe it, because the travel of the photons from the scattering point to the camera takes considerable time. I don't know what this camera is trying to illustrate, but it's definitely not the physical image we would see at an attosecond framing rate.
Wicked
3 / 5 (2) Dec 13, 2011
They could process the video faster if they put the light through a prism and had a few hundred thousand cameras to record it. Also a greater number of photoelectric receptors would allow for a higher speed video.
that_guy
4.3 / 5 (3) Dec 13, 2011
This makes me geekgasm so hard.

I think this was a kind of proof of concept, which is why there's a 1D stream that needs to be built up in layers.

I think the next step is to create a slit array in order to create full video rather than build it up over multiple shots.

Due to the human factor (adjusting the camera over and over again) and the time factor (conditions can change between shots, possibly in unexpected ways), there is a high propensity for artifacts and, in some cases, skewed data due to misalignment.

That said, it's absolutely frikkin awesome.

antialias_physorg
5 / 5 (2) Dec 13, 2011
Must require insane amounts of memory though, like even half of a billionth of a second would be 500 frames.

the way I understand it they generate 500 points per pulse and then rotate the mirrors for the next shot. So effectively they are taking the equivalent of 500 images over the entire sequence. That's not too much in terms of data. The reconstruction, however, is probably a bitch.

Why are we seeing the light?

Bright light gets scattered in water to some degree. What you are seeing are the scattered photons at each point the pulse travels through the water.
(More precisely: The individual images are the summation over the scattered photons of many pulses at a certain time after the pulse is fired)
Wicked
1 / 5 (3) Dec 13, 2011
Frames per second is probably only limited hypothetically at around 10^-44 seconds.

Of course, the problem would be designing something capable of detecting that, which would clearly require the absolute limits of nano-technology.


Even with nano-sized photoelectric receptors, you would have to build a camera the size of the solar system or bigger to detect changes on a Planck time-scale. The camera would have to be 1.8*10^31 times bigger than the one used in this experiment if you used same sized photoelectric receptors.
Wicked
1 / 5 (1) Dec 13, 2011
Now that I think about it, the size of the camera only allows you to record longer videos. The video shows light moving all the way across the bottle. In order to show light moving the same distance at Planck time resolution you would need an astronomically large camera, but if you only wanted to record a video that was a fraction of the length of this one you could do it with a normal sized camera and just vary the electric field more rapidly.
lomed
not rated yet Dec 13, 2011
Particles of light — photons — enter the camera through the slit and pass through an electric field that deflects them in a direction perpendicular to the slit.
Forgive my nitpicking, but light (photons) does not have an electric charge, so it cannot be deflected by electric or magnetic fields. The wikipedia page "Streak camera" indicates that the instrument first converts the energy of the photons into the movement of large numbers of electrons, which can be influenced as indicated by an electric field.
Myno
not rated yet Dec 13, 2011
"Particles of light — photons — enter the camera through the slit and pass through an electric field that deflects them in a direction perpendicular to the slit."

One assumes there is some photonic crystal or some such that responds to the electric field in a manner that achieves the deflection, because photons don't bend in electric fields.
Vendicar_Decarian
1 / 5 (3) Dec 13, 2011
How then do you explain that electrons - which are nothing but a packet of electromagnetic field - manage to detect photons?

"Forgive my nitpicking, but light (photons) does not have an electric charge, so it cannot be deflected by electric or magnetic fields." - lomed

You are forgiven.
SeeShells
not rated yet Dec 13, 2011
I used a Hamamatsu Photonics streak camera in the design of an imaging system that would grab an image of the packets flying by in the Superconducting Super Collider back in the '90s. This was used to assure that the packet was correct before accelerating the rest of the way in the larger ring. Love this technology and love the way this team pushed it to the limit. My hat is off to the team at MIT! Nice work!
CubedBert
not rated yet Dec 14, 2011
This is quite intriguing. I would like to see light pass through a prism in this manner.
CHollman82
1 / 5 (2) Dec 14, 2011
How is this possible?

I thought light speed would have limited the fps to under 300 million.

Gotta admit though it is cool.


They interleave the data, not only between multiple cameras, but between multiple pulses.
CHollman82
1 / 5 (2) Dec 14, 2011
It's really cool and all, but operating at the trillionth-of-a-second (picosecond) scale is nothing new. The instruments I work on collect optical information from an optical fiber and an avalanche photodiode at a resolution of tens of picoseconds, and this is in a handheld, battery-operated instrument.
plasticpower
4 / 5 (4) Dec 14, 2011
I don't think you guys read the article correctly. It doesn't record an insane amount of frames. In fact it takes over an hour of recording to get that one shot of light through a bottle. Rather than taking a ton of images over an infinitely small amount of time, the camera is instead synched with the laser pulse, and the actual video you see isn't a slow-mo shot of the same photons, but a number of shots of different photons. Uhm.. a better way to explain this would be if you imagine a stream of water from a faucet in the dark with a pulsing strobe light - you can make it appear as if the water is falling in slow-mo, and you wouldn't need a camera for that. Like so:
http://www.youtub...lQTmx1LE

Still, this actually has a ton of very important applications, and quite an engineering (and imagineering) feat!
Shelgeyr
1 / 5 (2) Dec 14, 2011
Particles of light — photons — enter the camera through the slit and pass through an electric field that deflects them in a direction perpendicular to the slit. Because the electric field is changing very rapidly, it deflects late-arriving photons more than it does early-arriving ones.


That's pretty cool. Wonderful implementation of the technology. Given that it works so well here (loved the video), we should probably reexamine the need (or IMHO lack thereof) of the concept of "gravitational lensing".

I think what we've got here is a fantastic practical application of a fundamental physical law (one that I believe EU theory relies on as well, if I'm not mistaken) that if extrapolated to a large enough scale indicates that what we call "gravitational lensing" is probably nothing of the sort.
that_guy
5 / 5 (1) Dec 14, 2011
@shelgyr - As it currently stands, this experiment is independent of gravitational lensing. It uses a completely different process, and a different set of equations.

This experiment was possible BECAUSE our current physics theory is reliable. Also, in principle it's something we've been doing for a while with some xray diffraction techniques. This is just a very clever application of it.

gravitational lensing works on a different principle, and is independently supported by many other observations - including observations of our own sun. Experiments have shown that our line of sight around the sun bends ever so slightly so that we can see a tiny amount more around the sun than we would be able to without gravitational lensing.

Also, the theories involved in grav lensing have been proven in many other ways as well.
lomed
not rated yet Dec 14, 2011
How then do you explain that electrons - which are nothing but a packet of electromagnetic field - manage to detect photons?
Electrons have charge, so they are affected by electric and magnetic fields. Since light (at least classically) is made up of electric and magnetic fields electrons can interact with light. Classically, any interaction of photons with other photons would require a charged intermediary (the electrons in the atoms of a non-linear optical crystal for example). Even non-classically, the only other way that occurs to me for them to interact is by gravity.
CHollman82
2.3 / 5 (3) Dec 14, 2011
I don't think you guys read the article correctly. It doesn't record an insane amount of frames. In fact it takes over an hour of recording to get that one shot of light through a bottle. Rather than taking a ton of images over an infinitely small amount of time, the camera is instead synched with the laser pulse, and the actual video you see isn't a slow-mo shot of the same photons, but a number of shots of different photons. Uhm.. a better way to explain this would be if you imagine a stream of water from a faucet in the dark with a pulsing strobe light - you can make it appear as if the water is falling in slow-mo, and you wouldn't need a camera for that. Like so:
http://www.youtub...lQTmx1LE

Still, this actually has a ton of very important applications, and quite an engineering (and imagineering) feat!


Yes, like I said, this is done by interleaving data sets...
that_guy
not rated yet Dec 15, 2011
Frames per second is probably only limited hypothetically at around 10^-44 seconds.

Of course, the problem would be designing something capable of detecting that, which would clearly require the absolute limits of nano-technology.


Even with nano-sized photoelectric receptors, you would have to build a camera the size of the solar system or bigger to detect changes on a Planck time-scale. The camera would have to be 1.8*10^31 times bigger than the one used in this experiment if you used same sized photoelectric receptors.

I wonder if this guy read the article before commenting?

Let me help some of you who are not understanding this concept.

This system does not work like a typical camera. It converts a stream of photons into a string of electrical signals. The timing is determined by the order of arrival. The position is determined by the properties of each signal.
Vendicar_Decarian
1 / 5 (2) Dec 17, 2011
I am concerned about the reflections seen along the interior surface of the bottle.

They should not be there.
purringrumba
not rated yet Dec 19, 2011
I don't think you guys read the article correctly.


Yeah, the responses to this article are a good sampling of how many posters actually understand the scientific content of the articles in general.
SpTecnico82
not rated yet Dec 20, 2011
Very cool, but just one question: why didn't they remove the red label from the bottle?