New motion tracking technology is extremely precise and inexpensive, with minimal lag

Oct 07, 2013
Lumitrack enables high-speed, high-precision tracking by projecting a barcode-like pattern, or m-sequence, over an area. A sensor can determine its position based on the unique portion of the m-sequence pattern it can detect. Credit: Carnegie Mellon University

Researchers at Carnegie Mellon University and Disney Research Pittsburgh have devised a motion tracking technology that could eliminate much of the annoying lag that occurs in existing video game systems that use motion tracking, while also being extremely precise and highly affordable.

Called Lumitrack, the technology has two components—projectors and sensors. A structured pattern, which looks something like a very large barcode, is projected over the area to be tracked. Sensor units, either near the projector or on the person or object being tracked, can then quickly and precisely locate movements anywhere in that area.

"What Lumitrack brings to the table is, first, low latency," said Robert Xiao, a Ph.D. student in Carnegie Mellon's Human-Computer Interaction Institute (HCII). "Motion tracking has added a compelling dimension to popular game systems, but there's always a lag between the player's movements and the movements of the avatar in the game. Lumitrack is substantially faster than these consumer systems, with near real-time response."

Xiao said Lumitrack also is extremely precise, with sub-millimeter accuracy. Moreover, this performance is achieved at low cost. The sensors require little power and would be inexpensive to assemble in volume. The components could even be integrated into mobile devices, such as smartphones.

Xiao and his collaborators will present their findings at UIST 2013, the Association for Computing Machinery's Symposium on User Interface Software and Technology, Oct. 8-11 in St. Andrews, Scotland. Scott Hudson, professor of HCII, and Chris Harrison, a recent Ph.D. graduate of the HCII who will be joining the faculty next year, are co-authors, as are Disney Research Pittsburgh's Ivan Poupyrev, director of the Interactions Group, and Karl Willis.

Many approaches exist for tracking human motion, including expensive, highly precise systems used to create computer-generated imagery (CGI) for films. Though Lumitrack's developers have targeted games as an initial application, the technology's combination of low latency, high precision and low cost makes it suitable for many applications, including CGI and human-robot interaction.

"We think the core technology is potentially transformative and that you could think of many more things to do with it besides games," Poupyrev said.

A key to Lumitrack is the structured pattern that is projected over the tracking area. Called a binary m-sequence, the pattern of bars encodes a string of bits in which every seven-bit subsequence appears only once. A simple optical sensor can thus quickly determine where it is from whichever part of the sequence it sees. When two m-sequences are projected at right angles to each other, the sensor can determine its position in two dimensions; when multiple sensors are used, 3D motion tracking is possible.
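To make that concrete, here is a minimal Python sketch (not the team's implementation; the LFSR polynomial and the seven-bit window size are assumptions based on the description above) of how such a pattern lets a sensor localize itself. A maximal-length shift-register sequence of period 2^7 - 1 = 127 has the property that every seven-bit window occurs exactly once, so reading any seven consecutive bits pins down a unique position:

    # Minimal sketch: position lookup in a binary m-sequence.
    # Assumed primitive polynomial: x^7 + x + 1, giving the
    # recurrence s[k+7] = s[k+1] XOR s[k] (not taken from the paper).
    def m_sequence(n=7):
        """Generate one full period (2^n - 1 bits) of a maximal-length
        sequence with a Fibonacci LFSR."""
        state = [1] * n                      # any nonzero seed works
        seq = []
        for _ in range(2 ** n - 1):
            seq.append(state[0])
            state = state[1:] + [state[0] ^ state[1]]
        return seq

    seq = m_sequence()

    # Each cyclic seven-bit window maps to exactly one offset in the pattern.
    wrapped = seq + seq[:6]                  # wrap around for cyclic windows
    position = {tuple(wrapped[i:i + 7]): i for i in range(len(seq))}
    assert len(position) == len(seq)         # every window really is unique

    # A sensor that observes, say, bits 40..46 of the projected pattern
    # recovers its position with a single table lookup:
    observed = tuple(wrapped[40:47])
    print(position[observed])                # -> 40

Running the same lookup against a second pattern projected at right angles would supply the other coordinate, which is the two-dimensional scheme described above.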

More information: chrisharrison.net/index.php/Research/Lumitrack

User comments: 2

grondilu
5 / 5 (1) Oct 07, 2013
Looks like a perfect fit for VR!
KBK
1 / 5 (2) Oct 08, 2013
To simplify:
It's a projector and sensor system, so the signal needs to get back to the PC, alter the engine output, send the alteration to the GPU, then render via the given projector, and then be sensed by the eye. The loop time is very likely no better than the tracker system put together by Oculus (the Oculus Rift VR headset).

Maybe there's something I'm missing here, but I see (first look, on paper) no real decrease in latency compared to the Oculus design and implementation. The Oculus sensor has the advantage of reading motion directly, in a closed-loop, self-contained, programmed six-axis multi-sensor system. When positional tracking makes its way to the Rift, it will simply be a functional, self-contained, relatively mobile system.

A system like this could provide a type of positional augmentation (re-centering, near-DC drift fix) and stability, but I don't see much beyond that (in the case of something like VR).

Besides, the intent/application appears to be different.