Computing with a wave of the hand (w/ Video)

Dec 11, 2009 by Larry Hardesty
Media Lab researchers demonstrate a laboratory mockup of a thin-screen LCD display with built-in optical sensors. Photo: Matthew Hirsch, Douglas Lanman, Ramesh Raskar, Henry Holtzman

(PhysOrg.com) -- The iPhone’s familiar touch screen uses capacitive sensing, in which a nearby finger alters the electrical field between electrodes in the screen. A competing approach, which uses embedded optical sensors to track the movement of the user’s fingers, is just now coming to market. But researchers at MIT’s Media Lab have already figured out how to use such sensors to turn displays into giant lensless cameras. On Dec. 19 at Siggraph Asia -- a recent spinoff of Siggraph, the premier graphics research conference -- the MIT team is presenting the first application of its work: a display that lets users manipulate on-screen images using hand gestures.

Many other researchers have been working on such gestural interfaces, which would, for example, allow computer users to drag windows around a screen simply by pointing at them and moving their fingers, or to rotate a virtual object with a flick of the wrist. Some large-scale gestural interfaces have already been commercialized, such as those developed by the Media Lab’s Hiroshi Ishii, whose work was the basis for the system that Tom Cruise’s character uses in the movie Minority Report.

But “those usually involve having a roomful of expensive cameras or wearing tracking tags on your fingers,” says Matthew Hirsch, a PhD candidate at the Media Lab who, along with Media Lab professors Ramesh Raskar and Henry Holtzman and visiting researcher Douglas Lanman, developed the new display. Some experimental systems — such as Microsoft’s Project Natal — instead use small cameras embedded in a display to capture gestural information. But because the cameras are offset from the center of the screen, they don’t work well at short distances, and they can’t provide a seamless transition from gestural to touch-based interactions. Cameras set far enough behind the screen can provide that transition, as they do in Microsoft’s SecondLight, but they add to the display’s thickness and require costly hardware to render the screen alternately transparent and opaque. “The goal with this is to be able to incorporate the gestural display into a thin LCD device” — like a cell phone — “and to be able to do it without wearing gloves or anything like that,” Hirsch says.

The Media Lab system requires an array of liquid crystals, as in an ordinary LCD, with an array of optical sensors right behind it. The liquid crystals serve, in a sense, as a lens, displaying a black-and-white pattern that lets light through to the sensors. But that pattern alternates so rapidly with whatever the LCD is otherwise displaying — the list of apps on a smart phone, for instance, or the virtual world of a video game — that the viewer never notices it.
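
No code for the system has been published, so the following is only a rough sketch of the time-multiplexing idea, with every name invented for illustration: display frames and capture-mask frames alternate, and the sensor layer integrates light only while the mask is up.

```python
from typing import Iterable, Iterator, Tuple
import numpy as np

def interleave(ui_frames: Iterable[np.ndarray],
               capture_mask: np.ndarray) -> Iterator[Tuple[np.ndarray, bool]]:
    """Alternate ordinary UI frames with the black-and-white capture mask.

    The boolean tells the sensor layer when to integrate light: only
    during mask frames. At a high enough refresh rate the brief mask
    frames average out visually, so the viewer never notices them.
    """
    for ui_frame in ui_frames:
        yield ui_frame, False     # display frame: show the UI, sensors idle
        yield capture_mask, True  # capture frame: show the code, sensors sample
```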

The simplest way to explain how the system works, Lanman says, is to imagine that, instead of an LCD, an array of pinholes is placed in front of the sensors. Light passing through each pinhole will strike a small block of sensors, producing a low-resolution image. Since each pinhole image is taken from a slightly different position, all the images together provide a good deal of depth information about whatever lies before the screen. An array of liquid crystals could simulate a sheet of pinholes simply by displaying a pattern in which, say, the central pixel in each 19-by-19 block of pixels is white (transparent) while all the others are black.
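
In code, the pinhole mask and the per-pinhole sub-images are easy to express. The 19-by-19 block size comes from the article; the panel dimensions below are assumptions made purely for the sketch.

```python
import numpy as np

BLOCK = 19  # block size from the article

def pinhole_mask(blocks_high=32, blocks_wide=54):
    """Mask that opens only the central pixel of each 19x19 block
    (1 = white/transparent, 0 = black/opaque)."""
    mask = np.zeros((blocks_high * BLOCK, blocks_wide * BLOCK), dtype=np.uint8)
    mask[BLOCK // 2::BLOCK, BLOCK // 2::BLOCK] = 1
    return mask

def pinhole_views(sensor_image):
    """Cut the sensor readout into one low-resolution image per pinhole.

    Each 19x19 patch sees the scene from a slightly shifted viewpoint;
    comparing the patches is what yields the depth information.
    """
    h, w = sensor_image.shape
    return (sensor_image
            .reshape(h // BLOCK, BLOCK, w // BLOCK, BLOCK)
            .transpose(0, 2, 1, 3))  # -> (block_row, block_col, 19, 19)
```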

The problem with pinholes, Lanman explains, is that they allow very little light to reach the sensors, so they require exposure times that are too long to be practical. So the LCD instead displays a pattern in which each 19-by-19 block is subdivided into a regular arrangement of black and white rectangles of different sizes. Since roughly half of each block is white (transparent), the blocks pass much more light than pinholes would.
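
The article does not name the exact pattern, so as an assumption for illustration, this sketch uses a MURA code, a standard coded-aperture design that happens to be defined for prime sizes such as 19 and leaves roughly half of each block transparent.

```python
import numpy as np

P = 19  # MURA codes exist for prime sizes; 19 matches the article's block size

def mura_tile(p=P):
    """One p-by-p MURA tile: 1 = transparent, 0 = opaque."""
    residues = {(k * k) % p for k in range(1, p)}  # quadratic residues mod p
    c = np.array([1 if i in residues else -1 for i in range(p)])
    tile = np.zeros((p, p), dtype=np.uint8)
    for i in range(p):
        for j in range(p):
            if i == 0:
                tile[i, j] = 0
            elif j == 0:
                tile[i, j] = 1
            elif c[i] * c[j] == 1:
                tile[i, j] = 1
    return tile

def tiled_mask(blocks_high=32, blocks_wide=54):
    """Repeat the code across the panel, one copy per 19x19 block."""
    return np.tile(mura_tile(), (blocks_high, blocks_wide))
```

About half the cells in each tile are open, which is why such a mask needs far shorter exposures than a pinhole array that opens one pixel in 361.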

The 19-by-19 blocks are all adjacent to each other, however, so the images they pass to the sensors overlap in a confusing jumble. But the pattern of black-and-white squares allows the system to computationally disentangle the images, capturing the same depth information that a pinhole array would, but capturing it much more quickly.
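
To make the "disentangling" concrete: because the mask pattern is known, its blurring effect can be inverted. Continuing with the MURA assumption from the sketch above, the reconstruction for a single block is a periodic cross-correlation with a matched decoding pattern; the real system must also handle the overlap between neighboring blocks, which this one-block sketch ignores.

```python
import numpy as np

def mura_decoding_pattern(tile):
    """Matched decoder G for a MURA tile A: +1 where A is open, -1 where
    it is opaque, with the standard exception at the origin, chosen so
    that A correlated with G approximates a delta function."""
    g = np.where(tile == 1, 1.0, -1.0)
    g[0, 0] = 1.0
    return g

def decode_block(measured, tile):
    """Estimate what a pinhole would have seen from one block's jumbled
    sensor readout, via FFT-based periodic cross-correlation."""
    R = np.fft.fft2(measured)
    G = np.fft.fft2(mura_decoding_pattern(tile))
    return np.real(np.fft.ifft2(R * np.conj(G))) / measured.size
```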

LCDs with built-in optical sensors are so new that the Media Lab researchers haven’t been able to procure any yet, but they mocked up a display in the lab to test their approach. Like some existing touch screen systems, the mockup uses a camera some distance from the screen to record the images that pass through the blocks of black-and-white squares. But it provides a way to determine whether the algorithms that control the system would work in a real-world setting. In experiments in the lab, the researchers showed that they could manipulate on-screen objects using hand gestures and move seamlessly between gestural control and ordinary touch screen interactions.

Of the current crop of experimental gestural interfaces, “I like this one because it’s really integrated into the display,” says Paul Debevec, director of the Graphics Laboratory at the University of Southern California's Institute for Creative Technologies, whose doctoral thesis led to the innovative visual effects in the movie The Matrix. “Everyone needs to have a display anyway. And it is much better than just figuring out just where the fingertips are or a kind of motion-capture situation. It’s really a full three-dimensional image of the person’s hand that’s in front of the display.”

Indeed, the researchers are already exploring the possibility of using the new system to turn the display into a high-resolution camera. Instead of capturing low-resolution three-dimensional images, a different pattern of black-and-white squares could capture a two-dimensional image at a specific focal depth. Since the resolution of that image would be proportional to the number of sensors embedded in the screen, it could be much higher than that of the images captured by a conventional webcam.

Darkening all but the central pixel in a 19-by-19 block turns an array of liquid crystals into a pinhole camera; but a pattern of black-and-white rectangles of different sizes passes much more light while providing a way to computationally disentangle overlapping images. Diagrams: Matthew Hirsch, Douglas Lanman, Ramesh Raskar, Henry Holtzman

Raskar, who directs the Media Lab’s Camera Culture Group, stresses that the work has even broader implications than simply converting displays into cameras. In the history of computation, he says, “intelligence moved from the mainframe to the desktop to the mobile device, and now it’s moving into the screen.” The idea that “every pixel has a computer behind it,” he says, offers opportunities to reimagine how humans and computers interact.

“It’s kind of the hallmark of a lot of Ramesh’s work,” says Debevec. “He comes up with crazy cameras with the guts hanging out of them and strange arrangements of different mechanics in something that at first you’re wondering, ‘Well, why would you do that?’ No one quite does things the way that he does because no one else thinks the way he does. Then you start to understand it and you realize that there’s actually a very interesting new thing happening.”

Provided by Massachusetts Institute of Technology


User comments: 7


Mercury_01
Dec 11, 2009
What's wrong with a mouse? I don't even have to move my hand to use it.
danman5000
Dec 11, 2009
Also, a full gesture system would do away with carpal tunnel syndrome, assuming the gestures are a more natural motion for the hands and aren't themselves too repetitive.
pauljpease
Dec 11, 2009
Now they just need to image our eyes so that we can navigate our software, e.g. activate an icon, merely by looking at it. Much easier than hand waving. Have you tried to hold your hand in front of your screen for a minute or two? It gets tiring. The mouse is superior because your arm can rest on your desk. This hand gesturing might not be useful for your day job...
antialias_physorg
Dec 11, 2009
But it's hard to tell when the eyes are trying to activate or grab an item and when they're merely glancing around the desktop.

Gestures would be much more intuitive for manipulating data - especially in 3D.
flaredone
Dec 12, 2009
.. Have you tried to hold your hand in front of your screen for a minute or two? It gets tiring. ...
In other words, this technology is useless as prevention for carpal tunnel syndrome, but it could be great for remote control of presentations, where a mouse becomes too difficult to handle.
Mercury_01
Dec 12, 2009
I could use it while DJing, where it can be somewhat difficult to constantly transition between manipulating records and mixers and operating a mouse. But at home, give me a decent mouse any day.
Ohmaar
Dec 14, 2009
I have to wonder if MIT isn't simply working out the technical details for Apple, since Apple filed for a patent on this process back in 2007.

http://bit.ly/6G4oyj