Robot Learns to Smile and Frown (w/ Video)

The Einstein robot head at UC San Diego performs asymmetric random facial movements as a part of the expression learning process.
A hyper-realistic Einstein robot at the University of California, San Diego has learned to smile and make facial expressions through a process of self-guided learning. The UC San Diego researchers used machine learning to “empower” their robot to learn to make realistic facial expressions.

“As far as we know, no other research group has used machine learning to teach a robot to make realistic facial expressions,” said Tingfan Wu, the computer science Ph.D. student from the UC San Diego Jacobs School of Engineering who presented this advance on June 6 at the IEEE International Conference on Development and Learning.

The faces of robots are increasingly realistic, and the number of artificial muscles that control them is rising. In light of this trend, UC San Diego researchers from the Machine Perception Laboratory are studying the face and head of their robotic Einstein in order to find ways to automate the process of teaching robots to make lifelike facial expressions.

This Einstein robot head has about 30 facial muscles, each moved by a tiny servo motor connected to the muscle by a string. Today, a highly trained person must manually set up these kinds of realistic robots so that the servos pull in the right combinations to make specific facial expressions. In order to begin to automate this process, the UCSD researchers looked to both developmental psychology and machine learning.

Developmental psychologists speculate that infants learn to control their bodies through systematic exploratory movements, including babbling to learn to speak. Initially, these movements appear to be executed in a random manner as infants learn to control their bodies and reach for objects.

“We applied this same idea to the problem of a robot learning to make realistic facial expressions,” said Javier Movellan, the senior author on the paper presented at ICDL 2009 and the director of UCSD’s Machine Perception Laboratory, housed in Calit2, the California Institute for Telecommunications and Information Technology.

Although their preliminary results are promising, the researchers note that some of the learned facial expressions are still awkward. One potential explanation is that their model may be too simple to describe the coupled interactions between facial muscles and skin.

To begin the learning process, the UC San Diego researchers directed the Einstein robot head (Hanson Robotics’ Head) to twist and turn its face in all directions, a process called “body babbling.” During this period the robot could see itself in a mirror and analyze its own expression using facial expression detection software created at UC San Diego called CERT (Computer Expression Recognition Toolbox). This provided the data necessary for machine learning algorithms to learn a mapping between facial expressions and the movements of the muscle motors.
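The article does not describe CERT's actual interface, so the observation step below is a stand-in; but the overall loop it sketches follows the text: drive the face with random servo activations, watch the result through an expression recognizer, and record (command, expression) pairs as training data. The counts and the `observe_expression` function are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SERVOS = 30      # the Einstein head has about 30 servo-driven "muscles"
N_FEATURES = 12    # hypothetical number of expression scores from the recognizer

def observe_expression(servo_commands):
    """Stand-in for the mirror + expression-recognition step.

    In the real system the robot watched itself in a mirror and CERT
    scored the resulting expression; here a fixed smooth nonlinearity
    fakes that response so the sketch runs end to end.
    """
    W = np.sin(np.outer(np.arange(N_FEATURES) + 1.0, np.arange(N_SERVOS) + 1.0))
    return np.tanh(W @ servo_commands)

def body_babble(n_samples=500):
    """Drive the face with random servo activations and record what is seen."""
    commands = rng.uniform(0.0, 1.0, size=(n_samples, N_SERVOS))
    expressions = np.array([observe_expression(c) for c in commands])
    return commands, expressions

commands, expressions = body_babble()
print(commands.shape, expressions.shape)   # (500, 30) (500, 12)
```

The random exploration mirrors the "babbling" idea from developmental psychology cited above: no expression is targeted; the point is simply to cover the command space and let the pairs accumulate.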

Once the robot had learned the relationship between facial expressions and the muscle movements required to make them, it could produce facial expressions it had never encountered.
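The paper's actual generation model is not specified in the article (and the researchers note a purely simple model may be too crude for coupled muscle–skin dynamics), but the generalization step can be sketched under a linear assumption: fit a forward model from babbling data, then invert it to find servo commands for a target expression never seen in training. Everything below is a hypothetical toy, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for the babbling data: X holds random servo commands, and a
# hidden linear "face" A turns each command into 12 expression features.
X = rng.uniform(0.0, 1.0, size=(500, 30))
A = rng.normal(size=(30, 12))
Y = X @ A

# Step 1: fit the forward model (commands -> expression) from babbling data.
A_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Step 2: invert it for a target expression the robot never produced during
# babbling, obtaining the least-norm servo commands that would generate it.
target = rng.normal(size=12)
commands, *_ = np.linalg.lstsq(A_hat.T, target, rcond=None)

print(np.allclose(commands @ A_hat, target, atol=1e-6))  # True
```

Because there are more servos (30) than expression features (12), many command combinations yield the same face; least-squares picks the smallest-effort one, which is one plausible way a system could settle on natural-looking activations.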

Close-up images of some of the facial expressions the UC San Diego researchers exposed their Einstein robot to.

For example, the robot learned eyebrow narrowing, which requires the inner eyebrows to move together and the upper eyelids to close a bit to narrow the eye aperture.

“During the experiment, one of the servos burned out due to misconfiguration. We therefore ran the experiment without that servo. We discovered that the model learned to automatically compensate for the missing servo by activating a combination of nearby servos,” the authors wrote in the paper presented at the 2009 IEEE International Conference on Development and Learning.
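The compensation the authors observed falls out naturally of redundancy: with more servos than expression dimensions, the remaining motors can usually reproduce a lost servo's contribution. A minimal sketch, again assuming a purely linear face response (a toy assumption, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(2)

A = rng.normal(size=(30, 12))   # linear stand-in: row i is servo i's effect
target = rng.normal(size=12)    # expression the robot is asked to produce

def solve_commands(response, target):
    """Least-norm servo activations whose combined effect hits the target."""
    c, *_ = np.linalg.lstsq(response.T, target, rcond=None)
    return c

full = solve_commands(A, target)

# Servo 7 "burns out": drop its row and solve again. The remaining 29
# servos redistribute the load and still reach the same expression.
A_reduced = np.delete(A, 7, axis=0)
reduced = solve_commands(A_reduced, target)

print(np.allclose(full @ A, target, atol=1e-6),
      np.allclose(reduced @ A_reduced, target, atol=1e-6))  # True True
```

With 29 servos spanning only 12 expression dimensions, the reduced system is still solvable, which is why relearning "a combination of nearby servos" can mask the failure.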

“Currently, we are working on a more accurate facial expression generation model as well as a systematic way to explore the model space efficiently,” said Wu, the Ph.D. student. Wu also noted that the “body babbling” approach he and his colleagues described in their paper may not be the most efficient way to explore the model of the face.

While the primary goal of this work was to solve the engineering problem of how to approximate the appearance of human facial muscle movements with motors, the researchers say this kind of work could also lead to insights into how humans learn and develop facial expressions.

More information: “Learning to Make Facial Expressions,” by Tingfan Wu, Nicholas J. Butko, Paul Ruvolo, Marian S. Bartlett, and Javier R. Movellan of the Machine Perception Laboratory, University of California San Diego. Presented on June 6 at the 2009 IEEE 8th International Conference on Development and Learning.

Provided by University of California - San Diego


Citation: Robot Learns to Smile and Frown (w/ Video) (2009, July 8), retrieved 23 May 2019.
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.


User comments

Jul 09, 2009
Now that's just creepy. And pretty pointless. Before teaching them how to make silly faces, shouldn't they learn how to walk, manipulate objects or otherwise perform useful tasks?
Also, from an aesthetic point of view, I think artificial/mechanical shapes are far more pleasant to the eye than lifeless dolls.

Jul 10, 2009
Well, kasen, science is not always strictly one-directional. Some scientists are best at giving robots expressions; if they know how to do it now, why should they wait for tomorrow? This is how progress is made: people do their best regardless of the immediate application, and one day somebody figures out a way to apply what's been done, and to do it for profit.
But I also find the expressions to be a little creepy.

Jul 22, 2009
The making of facial expressions may "seem" useless at this point, but I think it is not... Think of a robot that has machine learning capability and makes these facial expressions. It would get better feedback from the humans close by, thus increasing its learning efficiency. This could all be combined with projects such as the ones mimicking infant learning. See here: http://www.roboti...-ai.html

Jul 22, 2009
I'm not objecting to improving an AI's capacity to learn. That's all fine and dandy, and quite important; it's just that I don't believe that developing that capacity is tied to developing secondary human characteristics, like bipedal walking, expression, or skiing, as I've seen recently. Just put some wheels on it and have it learn to negotiate an obstacle course, or why not have it observe a bee and learn how to fly? As for the proposed bonus of understanding the development of said human characteristics, I'm pretty sure there are other, perhaps more efficient, methods.
It's just that people are self-obsessed and have this god-complex idea that our creations have to be in our image. I find that fantastically idiotic, wasteful and, ultimately, potentially dangerous.
"This is how progress is made: people do their best regardless of the immediate application, and one day somebody figures out a way to apply what's been done, and to do it for profit."
Ah, profit! Such a noble aspiration... Granted, a sex doll with complex facial expressions, and hence a better ability to fake it, would probably sell like crazy.
