Novel machine learning technique for simulating the everyday task of dressing

November 20, 2018, Association for Computing Machinery
Get dressed! Credit: SIGGRAPH Asia

Putting on clothes is a daily, mundane task that most of us perform with little or no thought. We may never take into consideration the multiple steps and physical motions involved when we're getting dressed in the mornings. But that is precisely what needs to be explored when attempting to capture the motion of dressing and simulating cloth for computer animation.

Computer scientists from the Georgia Institute of Technology and Google Brain, Google's artificial intelligence research arm, have devised a novel computational method, driven by machine learning techniques, to successfully and realistically simulate the multi-step process of putting on clothes. When dissected, the task of dressing is quite complex, involving several different physical interactions between the character and his or her clothing, guided primarily by the person's sense of touch.

Creating an animation of a character putting on clothing is challenging due to the complex interactions between the character and the simulated garment. Most work in highly constrained character animation deals with static environments that do not react much to the character's motion, the researchers note. In contrast, clothing responds immediately and drastically to small changes in body position; it tends to fold, stick and cling to the body, making haptic, or touch, sensation essential to the task.

Another unique challenge about dressing is that it requires the character to perform a prolonged sequence of motion involving a diverse set of subtasks, such as grasping the front layer of a shirt, tucking a hand into the shirt opening and pushing a hand through a sleeve.

"Dressing seems easy to many of us because we practice it every single day. In reality, the dynamics of cloth make it very challenging to learn how to dress from scratch," says Alexander Clegg, lead author of the research and a computer science Ph.D. student at the Georgia Institute of Technology. "We leverage simulation to teach a neural network to accomplish these tasks by breaking the task down into smaller pieces with well-defined goals, allowing the character to try the task thousands of times and providing reward or penalty signals when the character tries beneficial or detrimental changes to its policy."

The researchers' method then updates the neural network one step at a time to make the discovered positive changes more likely to occur in the future. "In this way, we teach the character how to succeed at the task," notes Clegg.
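The trial-and-reward loop Clegg describes can be illustrated with a minimal toy sketch (a hypothetical illustration, not the authors' code): a single policy parameter is perturbed on each trial, and changes that earn higher reward are made more likely to recur.

```python
import random

def reward(action, target=0.7):
    # Hypothetical reward: higher when the action lands near the
    # subtask's goal (e.g. the hand aligned with a sleeve opening).
    return -abs(action - target)

def train(episodes=2000, lr=0.05, noise=0.2, seed=0):
    rng = random.Random(seed)
    theta = 0.0  # the "policy": here just a single mean action
    for _ in range(episodes):
        action = theta + rng.gauss(0.0, noise)   # try a perturbed action
        # Did this trial beat the current policy? (reward/penalty signal)
        advantage = reward(action) - reward(theta)
        # Nudge the policy so beneficial changes become more likely.
        theta += lr * advantage * (action - theta)
    return theta

print(train())  # the policy parameter settles near the goal of 0.7
```

Real systems like the one in the paper replace the single parameter with a deep neural network and the toy reward with signals computed from the cloth simulation, but the update logic follows the same pattern.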

Clegg and his collaborators at Georgia Tech include computer scientists Wenhao Yu, Greg Turk and Karen Liu. Together with Google Brain researcher Jie Tan, the group will present their work at SIGGRAPH Asia 2018 in Tokyo, December 4 to 7. The annual conference features the most respected technical and creative members in the field of computer graphics and interactive techniques, and showcases leading-edge research in science, art, gaming and animation, among other sectors.

In this study, the researchers demonstrated their approach on several dressing tasks: putting on a t-shirt, throwing on a jacket and robot-assisted dressing of a sleeve. With the trained neural network, they were able to achieve complex reenactment of a variety of ways an animated character puts on clothes. Key to their approach is incorporating the sense of touch into the framework to overcome the challenges of cloth simulation. The researchers found that careful selection of the cloth observations and the reward functions in their trained network is crucial to the framework's success. As a result, this novel approach enables not only single dressing sequences but also a character controller that can successfully dress under various conditions.
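The role of touch-aware reward design can be sketched as follows (a hedged illustration, not the paper's actual formulation; all names and weights here are invented): a dressing reward might combine progress toward the garment goal with a haptic penalty when contact forces on the limb indicate the cloth is snagging.

```python
def dressing_reward(hand_to_sleeve_dist, contact_force,
                    w_progress=1.0, w_force=0.1, force_limit=5.0):
    # Reward getting the hand closer to the sleeve opening.
    progress_term = -w_progress * hand_to_sleeve_dist
    # Penalize only contact forces above a comfort threshold,
    # treating large sensed forces as a cloth-snag signal.
    excess = max(0.0, contact_force - force_limit)
    force_term = -w_force * excess
    return progress_term + force_term

print(dressing_reward(0.2, 2.0))    # light contact: only distance matters
print(dressing_reward(0.05, 40.0))  # near the goal but snagging: heavily penalized
```

Shaping terms like these are what let a learned controller trade off making progress against tearing through or fighting the cloth, which is why the authors found the choice of observations and rewards so critical.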

"We've opened the door to a new way of animating multi-step interaction tasks in complex environments using reinforcement learning," says Clegg. "There is still plenty of work to be done continuing down this path, allowing simulation to provide experience and practice for task training in a virtual world." In expanding this work, the team is currently collaborating with other researchers in Georgia Tech's Healthcare Robotics lab to investigate the application of robotics for dressing assistance.

More information: Paper: www.cc.gatech.edu/~aclegg3/pro … ess-synthesizing.pdf
