Six months of computing time generates detailed portrait of cloth behavior

July 23, 2013

It would be impossible to compute all of the ways a piece of cloth might shift, fold and drape over a moving human figure. But after six months of computation, researchers at Carnegie Mellon University and the University of California, Berkeley, are pretty sure they've simulated almost every important configuration of that cloth.

"I believe our approach generates the most beautiful and realistic cloth of any real-time technique," said Adrien Treuille, associate professor of computer science and robotics at Carnegie Mellon.

To create this cloth database, the team took advantage of the immense computing power available in the cloud, ultimately using 4,554 central processing unit (CPU) hours to generate 33 gigabytes of data.

Treuille said this presents a new paradigm for computer graphics, in which it will be possible to provide real-time simulation for virtually any complex phenomenon, whether it's a naturally flowing robe or a team of galloping horses.

Doyub Kim, a former post-doctoral researcher at Carnegie Mellon, will present the team's findings today at SIGGRAPH 2013, the International Conference on Computer Graphics and Interactive Techniques, in Anaheim, Calif.


Real-time animations of complex phenomena for video games or other interactive media are challenging. Simulating the behavior of some elements, such as cloth, requires a massive amount of computation, while for other phenomena good computer models simply don't exist. Nevertheless, data-driven techniques have made complex animations possible on ordinary computers by pre-computing many possible configurations and motions.
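Conceptually, a data-driven animation system of this kind answers cheap runtime queries against a table built offline. The Python sketch below is a hypothetical illustration, not the researchers' code: it assumes a pre-computed table of pose features and matching cloth vertex positions, and answers a runtime query with a nearest-neighbor lookup and simple blending. The class name and data layout are assumptions made for the example.

import numpy as np

class PrecomputedClothDB:
    def __init__(self, poses, cloth_states):
        # poses: (N, P) array of pose feature vectors sampled offline
        # cloth_states: (N, V, 3) array of cloth vertex positions for each pose
        self.poses = np.asarray(poses)
        self.cloth_states = np.asarray(cloth_states)

    def query(self, pose, k=2):
        # Find the k nearest pre-computed poses and blend their cloth states,
        # weighting each by inverse distance to the query pose.
        d = np.linalg.norm(self.poses - pose, axis=1)
        nearest = np.argsort(d)[:k]
        w = 1.0 / (d[nearest] + 1e-8)
        w /= w.sum()
        return np.tensordot(w, self.cloth_states[nearest], axes=1)

# Toy usage: 100 random "poses" paired with cloth states for a 500-vertex mesh.
db = PrecomputedClothDB(np.random.rand(100, 10), np.random.rand(100, 500, 3))
cloth = db.query(np.random.rand(10))   # (500, 3) blended cloth configuration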

"The criticism of data-driven techniques has always been that you can't pre-compute everything," Treuille said. "Well, that may have been true 10 years ago, but that's not the way the world is anymore."

Today, massive computing power can be accessed online at relatively low cost through services such as Amazon. Even if everything can't be pre-computed, the researchers set out to see just how much was possible by leveraging cloud computing resources.

In the simulations in this study, the researchers focused on secondary cloth effects: how clothing responds both to the motion of the human figure wearing it and to the dynamic state of the cloth itself.

To explore this highly complex system, Kim said the researchers developed an iterative technique that continuously samples the cloth motions, automatically detecting areas where data is lacking or where errors occur. For instance, in the study simulations, a human figure wore the cloth as a hooded robe; after some gyrations that caused the hood to fall down, the animation would show the hood popping back onto the figure's head for no apparent reason. The team's algorithm automatically identified the error and explored the dynamics of the system until it was eliminated.
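The article doesn't spell out the algorithm, but the loop it describes could be organized roughly as in the following sketch. The database container and the simulate, playback, and sample_motion callables are hypothetical stand-ins supplied by the caller; only the flag-and-resample structure reflects the behavior described above.

import numpy as np

def playback_error(db_frames, sim_frames):
    # Per-frame vertex error between cheap database playback and full simulation.
    return np.linalg.norm(np.asarray(db_frames) - np.asarray(sim_frames), axis=(1, 2))

def refine(database, motions, simulate, playback, sample_motion,
           error_threshold, iterations=10):
    # database: any container with an add() method; simulate: full cloth solver;
    # playback: database-driven approximation; sample_motion: motion generator.
    # All four are placeholders for whatever the real system provides.
    for _ in range(iterations):
        flagged = []
        for motion in motions:
            sim_frames = simulate(motion)              # expensive ground truth
            db_frames = playback(database, motion)     # cheap approximation
            errors = playback_error(db_frames, sim_frames)
            # Flag frames where the approximation visibly breaks down,
            # e.g. a sudden jump such as the hood popping back onto the head.
            flagged += [(motion, int(t)) for t in np.where(errors > error_threshold)[0]]
        if not flagged:
            break                                      # coverage is good enough
        for motion, t in flagged:
            database.add(simulate(motion)[t:])         # resample the troublesome span
        motions = [sample_motion() for _ in motions]   # keep exploring new motions
    return database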

Kim said with many video games now online, it would be possible to use such techniques to continually improve the animation of games. As play progresses and the animation encounters errors or unforeseen motions, it may be possible for a system to automatically explore those dynamics and make necessary additions or corrections.

Though the research yielded a massive database of cloth motions, Kim said it was possible to use conventional techniques to compress the tens of gigabytes of raw data into tens of megabytes, a more manageable file size that nevertheless preserved the richness of the animation.
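The article doesn't name the compression method, but one conventional technique suited to this kind of data is a truncated singular value decomposition (essentially principal component analysis), which stores a small shared basis plus per-frame coefficients instead of every vertex of every frame. The sketch below illustrates that idea only; the function names and mesh sizes are made up for the example.

import numpy as np

def compress_frames(frames, num_modes=50):
    # frames: (F, V, 3) cloth vertex positions, flattened to (F, 3V)
    F, V, _ = frames.shape
    X = frames.reshape(F, 3 * V)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:num_modes]                    # (num_modes, 3V) shared basis
    coeffs = (X - mean) @ basis.T             # (F, num_modes) per-frame weights
    return mean, basis, coeffs

def decompress_frames(mean, basis, coeffs, V):
    # Rebuild approximate vertex positions from the stored basis and weights.
    X = coeffs @ basis + mean
    return X.reshape(len(coeffs), V, 3)

# Toy usage: 200 frames of a 1,000-vertex cloth mesh.
frames = np.random.rand(200, 1000, 3)
mean, basis, coeffs = compress_frames(frames, num_modes=50)
recovered = decompress_frames(mean, basis, coeffs, V=1000)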

More information: ACM Transactions on Graphics, 32(4):xxx:1–7, July 2013. Proceedings of ACM SIGGRAPH 2013, Anaheim.
