Can computers one day understand emotions? New patent paves the way

March 28, 2017
James Wang and Reginald Adams discussing a new patent that takes the next step in computer learning techniques in the hopes that computers can one day understand the complex realm of human feelings. Credit: Jessica Sallurday

A new patent awarded to a Penn State team led by James Wang, professor in the College of Information Sciences and Technology; Reginald Adams, associate professor of psychology in the College of the Liberal Arts; Jia Li, professor of statistics in the Eberly College of Science; and Michelle Newman, professor of psychology, takes the next step in computer learning techniques in the hopes that computers can one day understand the complex realm of human feelings.

"People commonly believe computers can't understand emotions," Wang said. "With this project, we are showing that computers can harness human-generated data and then acquire the ability to understand emotions."

The patent, "Automatically computing emotions aroused from images through shape modeling (9,558,425)," is an expansion of previous research in the domain, which has largely focused on color, texture and other low-level features found in images. For instance, it has been well-documented that red is associated with anger. But a person can interpret the color red differently depending on the context, for example, the difference between a photo of blood compared to a sunset.

Wang said, "What's different about our work is we focused on shape, as an additional source of information. More specifically, [our research] is looking at features like roundness, angularity and complexity."

In their research, the team exploited the fact that images without sharp angles tend to inspire more positive feelings; conversely, sharp angles tend to produce negative feelings.

"We used a computer to code these shape-related features and map them with the data we've collected regarding people's emotional feelings," Wang said. The result is a now-patented computational approach that can use insights into shape, including roundness and angularity, to more accurately predict a person's response to an image.

"We were able to build a machine that could look at a scene and predict what kind of emotion a human would have," Wang explained. "You can think of it like computers can understand emotions. Once they've mined large amounts of data, they can predict what people feel. To me, that is a revolution."

When starting this project, Wang knew computational coding of emotions would be difficult. He said, "Humans can look at a scene and immediately experience the emotion. A computer may not."

"One of the challenges [in this project] is that emotion itself is not a well-understood topic," Wang acknowledged. That's where Adams, a psychology researcher who studies emotional experience and perception, lent his expertise.

"My initial role was simply to share basic insights from the world of emotion and social theory," Adams said.

The research partnership was encouraged by the National Science Foundation's (NSF) growing eagerness to bring different fields together for collaboration. It began when Susan McHale, director of the Social Science Research Institute, connected the team for an NSF grant focused on social-computational systems. Adams said, "The fact that they brought us all together to have this collaboration informs all of our work in a way that otherwise would not have been possible."

Their collaboration deepened as they applied a dimensional approach to emotion: rather than coding emotions as a dichotomy, such as simply happy or sad, they treated emotion as multi-faceted. When rating their response to an image, participants were asked about both its valence (the type of emotion invoked) and its arousal level (how strongly the emotion was felt). Adams said, "The dimensional approach to emotion served as a natural theoretical fit to the data-driven and statistical approaches used by computer vision."
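
A small illustration of that dimensional representation (the coordinates, scale and labels below are illustrative, not the study's actual values): each rating is a point in valence-arousal space, and coarse categories such as "happy" or "sad" fall out as regions of that space.

```python
# Toy illustration of the dimensional (valence/arousal) view of emotion.
# Scales and category names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class EmotionRating:
    valence: float  # 1 (very negative) .. 9 (very positive)
    arousal: float  # 1 (calm) .. 9 (highly aroused)

def quadrant(r: EmotionRating, midpoint: float = 5.0) -> str:
    """Map a continuous rating back to a coarse categorical label."""
    if r.valence >= midpoint:
        return "excited/joyful" if r.arousal >= midpoint else "content/relaxed"
    return "angry/fearful" if r.arousal >= midpoint else "sad/bored"

print(quadrant(EmotionRating(valence=7.2, arousal=6.5)))  # excited/joyful
print(quadrant(EmotionRating(valence=2.1, arousal=2.8)))  # sad/bored
```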

Wang noted that the question they are trying to tackle "represents a holy grail of visual computing: trying to get well beyond pixel-level or even object-level understanding." Adams added that "the contribution of having emotion theory help guide vision has great benefit for both disciplines."

With the patent awarded, Wang and Adams believe there could be significant technological advancements from their research. Adams said, "I could imagine this type of technology being used by a company like Google. It would predict what emotion an image would evoke." Wang suggested that personal assistant robots could be taught to respond more like a human once they can perceive the emotions their owners experience through what they see.

Wang added, "There is much more to be done. But in ten years, I think we will see computers understanding emotions in day-to-day products."
