Body-mounted cameras turn motion capture inside out

Aug 08, 2011

Traditional motion capture techniques use cameras to meticulously record the movements of actors inside studios, enabling those movements to be translated into digital models. But by turning the cameras around — mounting almost two dozen, outward-facing cameras on the actors themselves — scientists at Disney Research, Pittsburgh (DRP), and Carnegie Mellon University (CMU) have shown that motion capture can occur almost anywhere — in natural environments, over large areas and outdoors.

Motion capture makes possible scenes such as those in "Pirates of the Caribbean: Dead Man's Chest," where the movements of actor Bill Nighy were translated into a digitally created Davy Jones with octopus-like tentacles forming his beard. But body-mounted cameras enable capture of motions, such as running outside or swinging on monkey bars, that would be difficult — if not impossible — otherwise, said Takaaki Shiratori, a post-doctoral associate at DRP.

"This could be the future of motion capture," said Shiratori, who will make a presentation about the new technique today (Aug. 8) at SIGGRAPH 2011, the International Conference on Computer Graphics and Interactive Techniques in Vancouver. As video cameras become ever smaller and cheaper, "I think anyone will be able to do motion capture in the not-so-distant future," he said.

Other researchers on the project include Jessica Hodgins, DRP director and a CMU professor of robotics and computer science; Hyun Soo Park, a Ph.D. student in mechanical engineering at CMU; Leonid Sigal, DRP researcher; and Yaser Sheikh, assistant research professor in CMU's Robotics Institute.

The wearable system makes it possible to reconstruct the relative and global motions of an actor thanks to a process called structure from motion (SfM). Takeo Kanade, a CMU professor of computer science and robotics and a pioneer in computer vision, developed SfM 20 years ago as a means of determining the three-dimensional structure of an object by analyzing the images from a camera as it moves around the object, or as the object moves past the camera.
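At its core, SfM recovers 3D geometry from 2D projections. As an illustrative sketch — not the researchers' code — the following shows the standard linear triangulation step: given one point observed by two cameras with known projection matrices, its 3D position is the null vector of a small linear system.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Recover a 3D point from its projections in two views via the
    linear (DLT) method, a basic operation in structure from motion.
    P1, P2: 3x4 camera projection matrices; x1, x2: 2D image points."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The 3D point is the singular vector for the smallest singular value of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean coordinates

# Two hypothetical cameras observing the point (1, 2, 5):
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])           # camera at the origin
P2 = np.hstack([np.eye(3), np.array([[-1.0, 0, 0]]).T])  # shifted 1 unit in x
X_true = np.array([1.0, 2.0, 5.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate_point(P1, P2, x1, x2))  # approximately [1. 2. 5.]
```

In a real SfM pipeline this triangulation is wrapped in feature matching and camera-pose estimation across many views; the two-camera case above only shows the geometric heart of the method.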

In this application, SfM is not used primarily to analyze objects in a person's surroundings, but to estimate the pose of the cameras on the person. Researchers used Velcro to mount 20 lightweight cameras on the limbs and trunk of each subject. Each camera was calibrated with respect to a reference structure. Each person then performed a range-of-motion exercise that allowed the system to automatically build a digital skeleton and estimate the positions of the cameras with respect to that skeleton.
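The paper's calibration procedure is not reproduced here, but a standard building block for registering one set of 3D points to another — such as camera positions against a reference structure — is rigid alignment via the Kabsch algorithm. A minimal sketch, with made-up example data:

```python
import numpy as np

def rigid_align(A, B):
    """Kabsch algorithm: estimate rotation R and translation t such that
    B is approximately A @ R.T + t, given corresponding (N, 3) point sets."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)            # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])           # guard against reflections
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t

# Recover a known 30-degree rotation about the z-axis plus a translation:
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, 2.0, 3.0])
A = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]])
B = A @ R_true.T + t_true
R, t = rigid_align(A, B)
```

With noise-free correspondences, `R` and `t` match the ground truth to machine precision; real calibration data would make this a least-squares fit over noisy measurements.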

SfM is used to estimate the rough position and orientation of the limbs as the actor moves through an environment, and to collect sparse 3D information about the environment that can provide context for the captured motion. The rough limb positions and orientations serve as an initial guess for a refinement step that optimizes the configuration of the body and its location in the environment, yielding the final motion capture.
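The refinement stage is described only at a high level; as a toy illustration of the idea — not the paper's actual solver — one can polish a rough 3D estimate by minimizing reprojection error, here with simple numerical gradient descent on a single point seen by two hypothetical cameras:

```python
import numpy as np

def reprojection_error(X, cameras, observations):
    """Sum of squared image-plane errors from projecting 3D point X into each camera."""
    err = 0.0
    for P, x_obs in zip(cameras, observations):
        proj = P @ np.append(X, 1.0)
        err += np.sum((proj[:2] / proj[2] - x_obs) ** 2)
    return err

def refine(X0, cameras, observations, lr=0.5, steps=300, eps=1e-6):
    """Polish a rough estimate via gradient descent with numerical gradients."""
    X = np.asarray(X0, dtype=float).copy()
    for _ in range(steps):
        g = np.zeros(3)
        for i in range(3):
            d = np.zeros(3)
            d[i] = eps
            g[i] = (reprojection_error(X + d, cameras, observations)
                    - reprojection_error(X - d, cameras, observations)) / (2 * eps)
        X -= lr * g
    return X

# Two cameras observing a point whose true position is (1, 2, 5):
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0, 0.0, 0.0]]).T])
cameras = [P1, P2]
observations = [np.array([0.2, 0.4]), np.array([0.0, 0.4])]
X_rough = np.array([1.3, 1.8, 5.4])   # noisy initial guess, as if from SfM
X_refined = refine(X_rough, cameras, observations)
```

The real system optimizes an entire articulated skeleton over every frame rather than one point, which is part of why processing is so expensive, but the objective — making the body configuration agree with what all the cameras saw — is the same.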

The quality of motion capture from body-mounted cameras does not yet match the fidelity of traditional motion capture, Shiratori said, but it should improve as the resolution of small video cameras increases.

The technique requires a significant amount of computational power; a minute of motion capture now can require an entire day to process. Future work will include efforts to find computational shortcuts, such as performing many of the steps simultaneously through parallel processing.


More information: drp.disneyresearch.com/projects/mocap/


User comments: 4


Eikka
not rated yet Aug 08, 2011
Mounting the cameras seems difficult because they're bound to move about and point in different directions, or sag and shift on the person wearing them.

Techno1
not rated yet Aug 08, 2011
My God... this is an absolutely stupid way of doing this...

What they should do is use a combination of arrays of cameras mounted throughout the room in a spherical array, combined with a combination of several LIDAR and RADAR devices for 3-d imaging.

This is what gaming consoles are doing. I imagine using several motion devices from gaming consoles in a network along multiple axes to map the "Player" in 3-d.

By using a combination of several lidars and cameras on all axes, you could map the person and their movements in REAL 3-d from all angles simultaneously, getting shape, distance, color, and texture all at the same time.
poof
1 / 5 (1) Aug 08, 2011
Now if you mount an array of 35mm cameras to the actor, you can totally simulate what it would be like to walk on mars.
Nark2
not rated yet Aug 09, 2011
Couldn't you just mount a bunch of high-freq radio emitters on a person, then use 3 receivers on the ground to triangulate each emitter's position each frame? Would that be a cheap way of doing it?
