Team develops new model for animated faces and bodies

Aug 06, 2012

Computer graphic artists who produce computer-animated movies and games spend a great deal of time creating subtle movements such as facial expressions, body gestures and the draping of clothes. A new way of modeling these dynamic objects, developed by researchers at Carnegie Mellon University, Disney Research, Pittsburgh, and the LUMS School of Science and Engineering in Pakistan, could greatly simplify this painstaking work.

Graphics software usually represents dynamic objects, such as an expressive face, as a sequence of shapes, with each composed of a set of points in space. Another way to model an expressive face is to chart each point on the face as it shifts location over time. Each method has its advantages, but the sheer number of possible variations is tremendous, which results in models that are large and difficult to manage.

The Pittsburgh researchers, however, found they could create a model that simultaneously takes into account both space and time — a bilinear spatiotemporal basis model. Though this approach might sound more complex, the researchers found the opposite to be true. The method enabled them to create a much more compact, powerful and easy-to-manage model. For example, they showed that they could reproduce a dynamic sequence, with millimeter precision, after discarding 99 percent of the original data points.
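The idea behind a bilinear basis can be sketched in a few lines of NumPy. The snippet below is a hypothetical illustration, not the authors' implementation: it approximates a synthetic frames-by-points motion matrix with a small temporal basis and a small spatial basis (both taken from the SVD here), so the whole sequence is described by a tiny grid of bilinear coefficients. All sizes and variable names are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
F, P = 200, 300                      # frames, flattened point coordinates

# Synthetic smooth motion: a few slow sinusoids mixed across the points.
t = np.linspace(0, 1, F)[:, None]
S = np.sin(2 * np.pi * t * np.arange(1, 6)) @ rng.standard_normal((5, P))

# Temporal and spatial bases from the SVD of the data (PCA-style).
U, s, Vt = np.linalg.svd(S, full_matrices=False)
kt, ks = 8, 8                        # keep only a few modes in each dimension
T_basis = U[:, :kt]                  # temporal basis  (F x kt)
B_basis = Vt[:ks].T                  # spatial basis   (P x ks)

# Bilinear coefficients: a tiny (kt x ks) matrix describes the whole clip.
C = T_basis.T @ S @ B_basis
S_hat = T_basis @ C @ B_basis.T      # reconstruction from kt * ks numbers

compression = C.size / S.size
err = np.linalg.norm(S - S_hat) / np.linalg.norm(S)
print(f"coefficients kept: {compression:.2%}, relative error: {err:.2e}")
```

Because smooth motion is well captured by a few spatial modes and a few temporal modes, the coefficient grid is a small fraction of the original data yet reconstructs it closely, which is the intuition behind the compactness claim above.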

Their findings will be presented Aug. 6 at SIGGRAPH 2012, the International Conference on Computer Graphics and Interactive Techniques, at the Los Angeles Convention Center.

Yaser Sheikh, assistant research professor in Carnegie Mellon's Robotics Institute, explained that the natural constraints on spatial movements, such as the characteristic ways that the face changes shape as someone is talking or expressing an emotion, combine with the natural constraints on how much movement can occur over a given stretch of time. This enables the models to be very compact and efficient.

"Simply put, this lets us do things more sensibly with less work," Sheikh said.

Spatiotemporal data is inherent not only in computer simulations and animations, but also in object and camera tracking, so building more efficient models can have a number of practical implications. In motion editing, for instance, the models created with the bilinear spatiotemporal representation make it easy to change one point in space-time — such as bringing the head of a soccer player forward to make contact with a ball — while keeping it consistent with the other points in the sequence, said Tomas Simon, a Robotics Institute Ph.D. student and a Disney Research intern.

Likewise, action sequences based on motion capture data often require tedious post-processing to fix missing markers, incorrectly labeled markers and other glitches. A sequence that would take a computer graphic artist two or three hours to process using conventional models could be completed in just a few minutes using the new models, with similar quality, Simon said.
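Filling in missing marker samples is a natural fit for a low-rank spatiotemporal model. The sketch below is a hedged illustration of one standard approach (iterative low-rank imputation), not the paper's algorithm: missing entries of a frames-by-markers matrix are repeatedly replaced by a rank-limited SVD projection while the observed samples are kept fixed. The sizes, rank and dropout fraction are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
F, P, r = 150, 60, 4                 # frames, marker coords, assumed rank
true = rng.standard_normal((F, r)) @ rng.standard_normal((r, P))

mask = rng.random((F, P)) > 0.3      # ~30% of samples treated as "missing"
X = np.where(mask, true, 0.0)        # start with missing entries zeroed

for _ in range(200):                 # alternate projection and data re-imposition
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    low_rank = (U[:, :r] * s[:r]) @ Vt[:r]        # best rank-r approximation
    X = np.where(mask, true, low_rank)            # keep the observed samples

err = np.linalg.norm(X - true) / np.linalg.norm(true)
print(f"relative reconstruction error: {err:.2e}")
```

Because the observed samples heavily overdetermine a rank-4 model, the iteration converges to the missing values automatically — the same kind of data-consistent gap filling that would otherwise be done by hand.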

Iain Matthews, senior research scientist at Disney Research, Pittsburgh, said the bilinear spatiotemporal basis models are possible, in part, because today's computers have enough memory to process data sets that can include millions of variables. "The ability to interact with large dynamic sequences in data-consistent ways and in real-time has lots of interesting applications," he added.
