Making computer animation more agile, acrobatic—and realistic

April 10, 2018, University of California - Berkeley
UC Berkeley computer scientists developed an algorithm that uses reinforcement learning to generate realistic simulations of human and animal motion, such as this real-time backflip. The same algorithm works for 25 acrobatic and dance tricks, with one month of learning required per skill. Credit: Jason Peng, UC Berkeley

It's still easy to tell computer-simulated motions from the real thing: on the big screen or in video games, simulated humans and animals often move clumsily, without the rhythm and fluidity of their real-world counterparts.

But that's changing. University of California, Berkeley researchers have now made a major advance in realistic computer animation, using deep reinforcement learning to recreate natural motions, even for acrobatic feats like break dancing and martial arts. The simulated characters can also respond naturally to changes in the environment, such as recovering from tripping or being pelted by projectiles.

"This is actually a pretty big leap from what has been done with deep learning and animation. In the past, a lot of work has gone into simulating natural motions, but these physics-based methods tend to be very specialized; they're not general methods that can handle a large variety of skills," said UC Berkeley graduate student Xue Bin "Jason" Peng. Each activity or task typically requires its own custom-designed controller.

"We developed more capable agents that behave in a natural manner," he said. "If you compare our results to motion-capture recorded from humans, we are getting to the point where it is pretty difficult to distinguish the two, to tell what is simulation and what is real. We're moving toward a virtual stuntman."

The work could also inspire the development of more dynamic motor skills for robots.

A paper describing the development has been conditionally accepted for presentation at the 2018 SIGGRAPH conference in August in Vancouver, Canada, and was posted online April 10. Peng's colleagues in the Department of Electrical Engineering and Computer Sciences are professor Pieter Abbeel and assistant professor Sergey Levine, along with Michiel van de Panne of the University of British Columbia.

Mocap for DeepMimic

Traditional techniques in animation typically require designing custom controllers by hand for every skill: one controller for walking, for example, and another for running, flips and other movements. These hand-designed controllers can look pretty good, Peng said.

Alternatively, deep reinforcement learning methods, such as GAIL, can simulate a variety of different skills using a single general algorithm, but their results often look very unnatural.

UC Berkeley researchers created a virtual stuntman that could make computer-animated characters more lifelike. Credit: UC Berkeley video by Roxanne Makasdjian and Stephen McNally, with simulation footage by Jason Peng

"The advantage of our work," Peng said, "is that we can get the best of both worlds. We have a single algorithm that can learn a variety of different skills, and produce motions that rival if not surpass the state of the art in animation with handcrafted controllers."

To achieve this, Peng obtained reference data from motion-capture (mocap) clips demonstrating more than 25 different skills, from acrobatic feats such as backflips, cartwheels, kip-ups and vaults to simple running, throwing and jumping. After providing the mocap data to the computer, the team allowed the system, dubbed DeepMimic, to "practice" each skill for about a month of simulated time, a bit longer than a human might take to learn the same skill.

The computer practiced 24/7, going through millions of trials to learn how to realistically simulate each skill. It learned through trial and error: comparing its performance after each trial to the mocap data, and tweaking its behavior to more closely match the human motion.
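That comparison can be sketched as a simple imitation reward: the closer the simulated character's pose is to the mocap reference at each step, the higher the score the learning algorithm receives. The function below is a hypothetical, heavily simplified illustration of that idea; DeepMimic's actual objective also compares joint velocities, end-effector positions and center of mass, with tuned weights.

```python
import numpy as np

def imitation_reward(sim_pose, ref_pose, scale=2.0):
    """Toy imitation reward: 1.0 when the simulated joint angles match
    the mocap reference frame exactly, decaying toward 0 as they diverge.
    (Simplified sketch; the real objective has several weighted terms.)
    """
    error = np.sum((np.asarray(sim_pose) - np.asarray(ref_pose)) ** 2)
    return float(np.exp(-scale * error))

# A pose matching the reference earns the maximum reward of 1.0;
# a mismatched pose earns less, nudging learning toward the mocap data.
print(imitation_reward([0.1, 0.5, -0.2], [0.1, 0.5, -0.2]))
print(imitation_reward([0.3, 0.1, 0.0], [0.1, 0.5, -0.2]))
```

Maximizing a reward of this shape over millions of trials is what gradually pulls the character's behavior toward the human recording.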

"The machine is learning these skills completely from scratch, before it even knows how to walk or run, so a month might not be too unreasonable," he said.

The key was allowing the machine to learn in ways that humans don't. For example, a backflip involves so many individual body movements that a machine might keep falling and never get past the first few steps. Instead, the algorithm starts learning at various stages of the backflip - including in mid-air - so as to learn each stage of the motion separately and then stitch them together.
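The trick of starting the learner at different points along the motion can be sketched in a few lines. This is a minimal, hypothetical illustration (the frame format is invented): each training episode begins at a randomly chosen frame of the mocap clip, so the later stages of a backflip get practiced even before the takeoff is mastered.

```python
import random

def sample_start_state(mocap_frames):
    """Pick a random frame of the reference clip as the episode's start
    state, so every stage of the motion (including mid-air) is practiced.
    Hypothetical sketch of the stage-sampling idea described above.
    """
    return random.choice(mocap_frames)

# With a 60-frame backflip clip, episodes can begin anywhere in the motion.
clip = [{"frame": i, "pose": [0.0] * 34} for i in range(60)]
start = sample_start_state(clip)
```

Sampling start states this way lets the algorithm learn each stage separately and then stitch the stages into one continuous skill.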

Surprisingly, once trained, the simulated characters are able to deal with and recover from never-before-seen conditions: running over irregular terrain and doing spin-kicks while being pelted by projectiles.

"The recoveries come for free from the learning process," Peng said.

And the same simple method worked for all of the more than 25 skills.

"When we first started, we thought we would try something simple, as a baseline for later methods, not expecting that it was going to work," Peng said. "But the very simple method actually works really well. This shows that a simple approach can actually learn a very rich repertoire of highly dynamic and acrobatic skills."
