Virtual humans, programmed to feel

May 08, 2014 by Angela Herring
Professor Stacy Marsella, who develops computer programs that simulate human emotion across a variety of applications, has joint appointments in the College of Science and the College of Computer and Information Science. Credit: Mariah Tauger.

A clenched fist thumps the air to emphasize a point; a sweeping hand signals the array of possibilities; furrowed eyebrows question the veracity of the politician's remarks. These are all examples of the ways we express our emotions while we converse. They're strategies we may spend a lifetime learning, based on our particular cultures and backgrounds. But that doesn't mean they can't be programmed.

Newly appointed Northeastern professor Stacy Marsella is doing just that. His program, called Cerebella, gives virtual humans that same ability to convey emotion through expression and gesture as they communicate with other virtual—or even real—humans.

"Normally these virtual human architectures have some sort of perception, seeing the world, forming some understanding of it, and then deciding how to behave," said Marsella, who holds joint appointments in the College of Computer and Information Science and the College of Science. "The trouble is some of these things are very hard to model, so sometimes you cheat."

One way to cheat, Marsella explained, is to infer connections between given utterances and appropriate responses. Once the program knows what words a virtual human will use to respond, it can form a library of associated facial expressions, gaze patterns, and gestures that make sense in conjunction with those words.
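The flavor of that library approach can be sketched in a few lines of Python: once the virtual human's next utterance is known, the program scans it for trigger words and looks up behaviors to schedule alongside the speech. The behavior table and function names below are invented for illustration and are not Cerebella's actual interface.

    from dataclasses import dataclass

    @dataclass
    class Behavior:
        channel: str   # "face", "gaze", or "gesture"
        action: str    # a label a character animation system could play

    # Hand-built associations between words and plausible co-verbal behaviors.
    BEHAVIOR_LIBRARY = {
        "no":     Behavior("face", "head_shake"),
        "yes":    Behavior("face", "head_nod"),
        "maybe":  Behavior("gesture", "open_palm_sweep"),
        "you":    Behavior("gaze", "look_at_listener"),
        "really": Behavior("face", "raise_eyebrows"),
    }

    def annotate_utterance(utterance):
        """Return (word, behavior) pairs to sync with the speech timeline."""
        plan = []
        for raw in utterance.lower().split():
            word = raw.strip("?.!,;:")
            behavior = BEHAVIOR_LIBRARY.get(word)
            if behavior:
                plan.append((word, behavior))
        return plan

    print(annotate_utterance("No, you really think so?"))
    # -> head shake on "no", gaze at listener on "you", raised brows on "really"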

In one version of the program, Cerebella infers the deeper meaning behind the spoken words, so the virtual human can respond appropriately to what was meant rather than merely to what was said.
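A toy illustration of that deeper step, under the assumption that the system first labels an utterance with a communicative function (question, negation, intensification) and only then picks a behavior: the labels and rules here are made up for the example and are far simpler than Cerebella's actual inference.

    def infer_function(utterance):
        """Guess a coarse communicative function from surface cues."""
        text = utterance.lower()
        words = text.rstrip("?.!").split()
        if text.endswith("?"):
            return "question"
        if any(neg in words for neg in ("no", "not", "never")):
            return "negation"
        if any(word in words for word in ("must", "definitely", "absolutely")):
            return "intensification"
        return "statement"

    FUNCTION_TO_BEHAVIOR = {
        "question":        "raise_eyebrows + head_tilt",
        "negation":        "head_shake",
        "intensification": "beat_gesture",   # e.g., the clenched-fist thump
        "statement":       "small_head_nod",
    }

    for line in ("I never said that", "Are you sure?", "We must act now"):
        print(line, "->", FUNCTION_TO_BEHAVIOR[infer_function(line)])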

In addition to Cerebella, Marsella's work touches on a broad spectrum of applications at the intersection of emotion and technology. For instance, UrbanSim uses similar techniques to generate large-scale models of human populations. Here, the virtual people aren't carrying on the same kind of detailed, face-to-face exchanges, but they're still interacting with one another and determining follow-up behaviors based on a theory of mind, a model that allows them to reason about how others in the virtual world will act.

"They're abstract social interactions, where agents are either assisting or blocking each other," Marsella explained. The result gives his program the capacity to simulate whole cities for purposes ranging from city planning to military training.

At Northeastern, Marsella is eager to apply his methods to a range of multidisciplinary collaborative projects. In particular, he's interested in working with the personal health informatics team. "The interactive health interventions are the applications that really interest me," he said.

For another project, he designed a training tool for medical students to develop their patient interaction skills, in which they must navigate difficult conversations with a virtual human embedded with the emotional personality of a real human. One task requires the students to inform the virtual human of his cancer diagnosis.

"We want these interactions to be natural," Marsella said, summing up the underlying goal of almost all his programs.
