Virtual humans, programmed to feel

May 08, 2014 by Angela Herring
Professor Stacy Marsella, who develops computer programs that simulate human emotion across a variety of applications, has joint appointments in the College of Science and the College of Computer and Information Science. Credit: Mariah Tauger.

A clenched fist thumps the air to emphasize a point; a sweeping hand signals the array of possibilities; furrowed eyebrows question the veracity of the politician's remarks. These are all examples of the ways we express our emotions while we converse. They're strategies we may spend a lifetime learning, based on our particular cultures and backgrounds. But that doesn't mean they can't be programmed.

Newly appointed Northeastern professor Stacy Marsella is doing just that. His program, called Cerebella, gives virtual humans the same ability to convey emotion through gestures and facial expressions as they communicate with other virtual, or even real, humans.

"Normally these virtual human architectures have some sort of perception, seeing the world, forming some understanding of it, and then deciding how to behave," said Marsella, who holds joint appointments in the College of Computer and Information Science and the College of Science. "The trouble is some of these things are very hard to model, so sometimes you cheat."

One way to cheat, Marsella explained, is to infer connections between given utterances and appropriate responses. Once the program knows what words a virtual human will use to respond, it can form a library of associated facial expressions, gaze patterns, and gestures that make sense in conjunction with those words.

In one version of the program, Cerebella goes further, inferring the deeper meaning behind the spoken words so that it can interpret what a speaker intends and respond appropriately.

In addition to Cerebella, Marsella's work touches on a broad spectrum of applications at the intersection of emotion and technology. For instance, UrbanSim uses similar techniques to generate large-scale models of human populations. Here, the virtual models of people aren't engaged in the same kind of detailed, face-to-face conversation, but they're still interacting with one another and determining follow-up behaviors based on a theory of mind, a model that allows them to reason about how others in the virtual world will act.

"They're abstract social interactions, where agents are either assisting or blocking each other," Marsella explained. The result gives his program the capacity to simulate whole cities for purposes ranging from city planning to military training.
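One way to picture these abstract assist-or-block interactions is with agents that each hold a simple one-step theory of mind: before acting, an agent predicts how far another agent is from its goal and chooses an action accordingly. Everything below (the dispositions, thresholds, and action names) is a hypothetical illustration of that general idea, not the UrbanSim model.

```python
# Toy "assist or block" agents with a one-step theory of mind.
# An agent reasons about another agent's progress toward a goal
# (a value in [0, 1]) and picks an action based on its disposition.
# All names and thresholds here are illustrative assumptions.

def choose_action(disposition, predicted_progress):
    """Pick an action given a belief about the other agent's progress."""
    if disposition == "friendly":
        # Help only when the other agent appears to be struggling.
        return "assist" if predicted_progress < 0.5 else "wait"
    if disposition == "hostile":
        # Block unless the other agent has effectively already succeeded.
        return "block" if predicted_progress < 0.9 else "wait"
    return "wait"


for disp, progress in [("friendly", 0.2), ("friendly", 0.8),
                       ("hostile", 0.4), ("hostile", 0.95)]:
    print(disp, progress, "->", choose_action(disp, progress))
```

Scaled up to thousands of agents, interactions this coarse are what make it feasible to simulate whole city populations rather than individual conversations.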

At Northeastern, Marsella is eager to apply his methods to a range of multidisciplinary collaborative projects. In particular, he's interested in working with the personal health informatics team. "The interactive health interventions are the applications that really interest me," he said.

For another project, he designed a training tool for medical students to develop their patient interaction skills, in which they must navigate difficult conversations with a virtual human embedded with the emotional personality of a real human. One task requires the students to inform the virtual human of his cancer diagnosis.

"We want these interactions to be natural," Marsella said, summing up the underlying goal of almost all his programs.
