News Release
Robot Learns to Smile and Frown
San Diego, CA, July 8, 2009 -- A hyper-realistic Einstein robot at the University of California, San Diego has learned to smile and make facial expressions through a process of self-guided learning. The UC San Diego researchers used machine learning to “empower” their robot to learn to make realistic facial expressions.
“As far as we know, no other research group has used machine learning to teach a robot to make realistic facial expressions,” said Tingfan Wu, the computer science Ph.D. student from the UC San Diego Jacobs School of Engineering who presented this advance on June 6 at the IEEE International Conference on Development and Learning.
Download the paper at:
http://mplab.ucsd.edu/wp-content/uploads/wu_icdl20091.pdf
Watch the robot make faces in this short YouTube video
The Einstein robot head at UC San Diego performs asymmetric random facial movements as part of the expression learning process.
This Einstein robot head has about 30 facial muscles, each moved by a tiny servo motor connected to the muscle by a string. Today, a highly trained person must manually set up these kinds of realistic robots so that the servos pull in the right combinations to make specific facial expressions. To begin automating this process, the UCSD researchers looked to both developmental psychology and machine learning.
Developmental psychologists speculate that infants learn to control their bodies through systematic exploratory movements, including the babbling with which they learn to speak. Initially these movements appear random, as infants work out how to control their bodies and reach for objects.
“We applied this same idea to the problem of a robot learning to make realistic facial expressions,” said Javier Movellan, the senior author on the paper presented at ICDL 2009 and the director of UCSD’s Machine Perception Laboratory, housed in Calit2, the California Institute for Telecommunications and Information Technology.
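The release stops short of the paper's technical details, but the babbling idea is easy to sketch. In the toy Python below, random servo commands are issued, the resulting face is "observed" (here simulated by a hypothetical fixed linear map plus noise, standing in for the robot's camera feedback), and a simple linear forward model is fit from servo activations to facial features. The dimensions, names, and the choice of a linear model are illustrative assumptions, not the authors' actual method.

    import numpy as np

    rng = np.random.default_rng(0)
    N_SERVOS = 30      # the Einstein head has about 30 string-driven servos
    N_FEATURES = 12    # hypothetical: per-frame measurements of the face
    N_TRIALS = 2000    # number of random "babbling" movements

    # Hypothetical stand-in for driving the servos and watching the face
    # with a camera: a fixed linear map plus a little sensor noise.
    TRUE_MAP = np.sin(np.arange(N_SERVOS * N_FEATURES, dtype=float)).reshape(
        N_SERVOS, N_FEATURES)

    def babble_once(activations):
        return activations @ TRUE_MAP + 0.01 * rng.standard_normal(N_FEATURES)

    # Body babbling: issue random servo commands, record what the face does.
    A = rng.uniform(0.0, 1.0, size=(N_TRIALS, N_SERVOS))  # servo commands
    F = np.vstack([babble_once(a) for a in A])            # observed features

    # Fit a linear forward model F ~ A @ W: features as a function of servos.
    W, *_ = np.linalg.lstsq(A, F, rcond=None)
    rel_err = np.linalg.norm(A @ W - F) / np.linalg.norm(F)
    print(f"forward-model relative fit error: {rel_err:.4f}")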
Although their preliminary results are promising, the researchers note that some of the learned facial expressions are still awkward. One potential explanation is that their model may be too simple to describe the coupled interactions between facial muscles and skin.
Close-up images of some of the facial expressions the Einstein robot learned to make through machine learning.
Once the robot learned the relationship between facial expressions and the muscle movements required to make them, it was able to produce facial expressions it had never encountered.
For example, the robot learned eyebrow narrowing, which requires the inner eyebrows to move together and the upper eyelids to close a bit to narrow the eye aperture.
“During the experiment, one of the servos burned out due to misconfiguration. We therefore ran the experiment without that servo. We discovered that the model learned to automatically compensate for the missing servo by activating a combination of nearby servos,” the authors wrote in the paper presented at the 2009 IEEE International Conference on Development and Learning.
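Running such a learned forward model "in reverse" is what lets the robot both reach expressions it never babbled and work around a dead motor. The toy continuation below inverts a linear model by least squares, once with all servos and once with one servo masked out; the synthetic data and the solver choice are illustrative assumptions, not the procedure reported in the paper.

    import numpy as np

    # Toy stand-in for a learned forward model (servos x features).
    rng = np.random.default_rng(1)
    N_SERVOS, N_FEATURES = 30, 12
    W = rng.standard_normal((N_SERVOS, N_FEATURES))

    def activations_for(target, W, dead_servos=()):
        """Least-squares solve of a @ W ~ target, optionally forcing
        some servos to zero (e.g., a burned-out motor)."""
        M = W.copy()
        idx = list(dead_servos)
        M[idx, :] = 0.0          # masked servos contribute nothing
        a, *_ = np.linalg.lstsq(M.T, target, rcond=None)
        a[idx] = 0.0
        return a  # a real controller would also respect servo range limits

    target = rng.uniform(-1.0, 1.0, N_FEATURES)  # desired facial features
    a_full = activations_for(target, W)
    a_comp = activations_for(target, W, dead_servos=[7])

    print("feature error, all servos :", np.linalg.norm(a_full @ W - target))
    print("feature error, servo 7 out:", np.linalg.norm(a_comp @ W - target))
    print("activation shift to compensate:", np.linalg.norm(a_full - a_comp))

Because there are more servos than feature dimensions, many activation patterns can produce the same face, which is why the load of a missing servo can be absorbed by its neighbors.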
While the primary goal of this work was to solve the engineering problem of how to approximate the appearance of human facial muscle movements with motors, the researchers say this kind of work could also lead to insights into how humans learn and develop facial expressions.
###
“Learning to Make Facial Expressions,” by Tingfan Wu, Nicholas J. Butko, Paul Ruvolo, Marian S. Bartlett, and Javier R. Movellan of the Machine Perception Laboratory, University of California San Diego. Presented on June 6 at the 2009 IEEE 8th International Conference on Development and Learning.
Media Contacts
Daniel Kane
Jacobs School of Engineering
858-534-3262
dbkane@ucsd.edu
Tiffany Fox
Qualcomm Institute
858-246-0353
tfox@ucsd.edu