Emotion-reading tech fails the racial bias test

Basketball players Darren Collison (left) and Gordon Hayward (right). Credit: basketball-reference.com

Facial recognition technology has progressed to the point where it now interprets emotions in facial expressions. This type of analysis is increasingly used in daily life. For example, companies can use facial recognition software to help with hiring decisions. Other programs scan the faces in crowds to identify threats to public safety.

Unfortunately, this technology struggles to interpret the emotions of black faces. My new study, published last month, shows that emotional analysis technology assigns more negative emotions to black men's faces than white men's faces.

This isn't the first time that facial recognition programs have been shown to be biased. Google's software labeled black faces as gorillas. Cameras identified Asian faces as blinking. Facial recognition programs struggled to correctly identify gender for people with darker skin.

My work contributes to a growing call to better understand the hidden bias in artificial intelligence software.

Measuring bias

To examine the bias in the facial recognition systems that analyze people's emotions, I used a data set of 400 NBA player photos from the 2016-2017 season, because players are similar in their clothing, athleticism, age and gender. Also, since these are professional portraits, the players are looking at the camera in each picture.

I ran the images through two well-known types of emotional recognition software. Both assigned black players more negative emotional scores on average, no matter how much they smiled.

Chart: The Conversation, CC-BY-ND. Source: SSRN (2018)

For example, consider the official NBA pictures of Darren Collison and Gordon Hayward. Both players are smiling, and, according to the facial recognition and analysis program Face++, Darren Collison and Gordon Hayward have similar smile scores – 48.7 and 48.1 out of 100, respectively.

However, Face++ rates Hayward's expression as 59.7 percent happy and 0.13 percent angry and Collison's expression as 39.2 percent happy and 27 percent angry. Collison is viewed as nearly as angry as he is happy and far angrier than Hayward – despite the facial recognition program itself recognizing that both players are smiling.
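To make the comparison concrete, here is a minimal sketch of how such smile and emotion scores can be requested programmatically. It assumes the publicly documented Face++ v3 Detect endpoint and the Python requests library; the API credentials and image URL are placeholders, and the exact response fields may differ from what the service currently returns.

```python
# Minimal sketch: request smile and emotion scores from the Face++ Detect API.
# API_KEY, API_SECRET and the image URL below are placeholders.
import requests

DETECT_URL = "https://api-us.faceplusplus.com/facepp/v3/detect"
API_KEY = "YOUR_FACEPP_KEY"        # placeholder
API_SECRET = "YOUR_FACEPP_SECRET"  # placeholder

def score_portrait(image_url: str) -> dict:
    """Return the smile value and emotion percentages for the first detected face."""
    resp = requests.post(
        DETECT_URL,
        data={
            "api_key": API_KEY,
            "api_secret": API_SECRET,
            "image_url": image_url,
            "return_attributes": "smiling,emotion",
        },
        timeout=30,
    )
    resp.raise_for_status()
    face = resp.json()["faces"][0]["attributes"]
    return {"smile": face["smile"]["value"], **face["emotion"]}

# Example usage with a placeholder portrait URL:
# print(score_portrait("https://example.com/player_headshot.jpg"))
```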

In contrast, Microsoft's Face API viewed both men as happy. Still, Collison is viewed as less happy than Hayward, with 98 and 93 percent happiness scores, respectively. Despite his smile, Collison is even scored with a small amount of contempt, whereas Hayward has none.

Across all the NBA pictures, the same pattern emerges. On average, Face++ rates black faces as twice as angry as white faces. Face API scores black faces as three times more contemptuous than white faces. After matching players based on their smiles, both facial analysis programs are still more likely to assign the negative emotions of anger or contempt to black faces.
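The aggregate, smile-matched comparison can be illustrated with a short pandas sketch. The input file, column names and smile bins here are hypothetical stand-ins for whatever per-player scores the two services return; the point is only to show the matching-then-averaging step.

```python
# Sketch of the smile-matched comparison described above, using pandas.
# "scores.csv" is a hypothetical file with one row per player:
# columns: player, race ("black"/"white"), smile, anger, contempt, happiness.
import pandas as pd

df = pd.read_csv("scores.csv")  # hypothetical input file

# Group players into smile-score bins so faces with similar smiles are compared.
df["smile_bin"] = pd.cut(df["smile"], bins=[0, 25, 50, 75, 100])

# Average negative-emotion scores by race within each smile bin.
matched = (
    df.groupby(["smile_bin", "race"], observed=True)[["anger", "contempt"]]
    .mean()
    .unstack("race")
)
print(matched)

# Overall ratio of mean anger scores (black vs. white faces).
means = df.groupby("race")["anger"].mean()
print("anger ratio (black / white):", means["black"] / means["white"])
```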

Stereotyped by AI

My study shows that facial recognition programs exhibit two distinct types of bias.

Chart: The Conversation, CC-BY-ND. Source: SSRN (2018)

First, black faces were consistently scored as angrier than white faces for every smile. Face++ showed this type of bias. Second, black faces were always scored as angrier if there was any ambiguity about their facial expression. Face API displayed this type of disparity. Even if black faces are partially smiling, my analysis showed that the systems assumed more negative emotions as compared to their white counterparts with similar expressions. The average emotional scores were much closer across races, but there were still noticeable differences for black and white faces.
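The first pattern can be checked with a simple fit of anger scores against smile scores for each group. This sketch reuses the hypothetical DataFrame and column names from the snippet above; it is an illustration of the comparison, not the study's actual analysis.

```python
# Sketch: is the anger score higher for black faces at every level of smiling?
# Fits a simple line of anger on smile per race, using the hypothetical df above.
import numpy as np

for race, grp in df.groupby("race"):
    slope, intercept = np.polyfit(grp["smile"], grp["anger"], deg=1)
    print(f"{race}: anger = {intercept:.1f} + {slope:.3f} * smile (fitted)")

# A roughly constant vertical gap between the two fitted lines would indicate
# that black faces are scored angrier than white faces at every smile level.
```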

This observation aligns with other research, which suggests that black professionals must amplify positive emotions to receive parity in their workplace performance evaluations. Studies show that people perceive black men as more physically threatening than white men, even when they are the same size.

Some researchers argue that facial recognition technology is more objective than humans. But my study suggests that facial recognition reflects the same biases that people have. Black men's faces are scored with emotions associated with threatening behaviors more often than white men's, even when they are smiling. There is good reason to believe that the use of facial recognition could formalize preexisting stereotypes into algorithms, automatically embedding them into everyday life.

Until facial recognition assesses black and white faces similarly, black people may need to exaggerate their positive facial expressions – essentially smile more – to reduce ambiguity and potentially negative interpretations by the technology.

Although innovative, artificial intelligence can perpetuate and exacerbate existing power dynamics, leading to disparate impact across racial and ethnic groups. Some societal accountability is necessary to ensure fairness to all groups, because facial recognition, like most artificial intelligence, is often invisible to the people most affected by its decisions.



Provided by The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.


User comments

Jan 03, 2019
Did you ever consider that social factors play a role in how facial expressions are interpreted/perceived? Is it possible that on average people of color score higher for "negative" emotions because they do, in fact, have underlying feelings of fear, anger, anxiety, etc. due to the social circumstances they were brought up in? Perhaps this software is picking up on these subtle clues. Have studies been done that control for socio-economic background to see if similar results occur across racial lines? It may just be a fact that a smiling white person is in reality 5% happier than a smiling person of color, because, on average, a white person doesn't have to deal with systemic oppression on a daily basis like people of color do.

Jan 04, 2019
S, wanna bet how many of these researchers, data collectors or analysts are non-white?

This is similar to all the males dictating female behavior & lifestyle choices. "How dare they think they are as good as white males?" (males not men)

Jan 04, 2019
@rrwillsj: I wasn't going down the racial bias path in data collection/researchers, more going towards the other influences in a person's facial expressions that might need to be controlled for to truly understand whether there is bias in facial recognition software. The way the data was presented highlights several assumptions that could impact results that need to be addressed before we can conclude, as the paper does, that bias does exist in this software. Maybe the smiling face of the average non-white person really does show more tension (for non-obvious personal, social or economic reasons) than that of the average white person.

Jan 04, 2019
S. I can agree with your last comment.

Now, how do we bell the cat of social conditioning? That precludes the innate prejudice of "Not-My-People, therefore I expect them to behave so & so..."
