A recent study from the University of Georgia used three AI bots—one Asian, one Black and one white—to look at perceptions of competence and humanness. Credit: University of Georgia

Racial stereotypes were upended during a recent study that involved artificial intelligence. New research from the University of Georgia found that Black bots were considered more competent and more human than white or Asian bots used in the same study. This contrasts with past research on human-to-human interactions.

"We found that in the digital space, because Black AI is so unusual, stereotypes amplified in the opposite direction," said researcher Nicole Davis, a third-year doctoral student in UGA's Terry College of Business. "The Black bots were not just seen as competent, but really competent—more competent than the white or Asian bots."

Understanding how consumers interact with AI is an important step toward the ethical use of AI, Davis said. As AI and bots grow in popularity, businesses must understand these nuances not only to support their bottom line, but also to navigate declining consumer trust.

The study looked at perceptions of competence and humanness. These are important factors in the digital space, Davis said, as they can influence how a customer feels about and experiences a business.

To gauge preexisting opinions, participants were asked about common stereotypes for white, Black and Asian racial groups. Many responses aligned with past research: Black people were perceived as less competent than white people, and Asian people were seen as most competent, Davis said.

Afterward, participants were randomly assigned a bot that appeared white, Black or Asian and then negotiated with the bot to reduce the cost of a vacation rental. They could end negotiations at any time, and afterward, they were asked about the bot's competence, warmth and humanness.

"Regardless of how the participants identified or which bot they interacted with, we still found that Black people were generally perceived as less competent than white or Asian people," Davis said. "But then when we asked about the bot, we saw perceptions change. Even if they said, 'Yes, I feel like Black people are less competent,' they also said, 'Yes, I feel like the Black AI was more competent.'"

The reason for this difference, Davis said, is known as expectancy violation theory. This theory proposes that when expectations are low—due to something like stereotypes—but the experience is positive, people evaluate that experience as overwhelmingly positive.

"Participants think, 'Oh, wow, not only is there a Black bot in the digital space, but they actually did really well,'" Davis said. "Their expectations are exceeded, and their positive responses are amplified. And that's why we find that the Black bots are perceived to be higher in competence and more human than the white or Asian bots."

These findings are one indicator that stereotypes applied in human interactions may function differently in digital spaces, Davis said.

"This is very important as marketers design highly dynamic, anthropomorphic bots to serve as a front door to their firm or business," Davis said. "We need more research on AI to understand how it is impacting consumer perception, as well as when AI is going to help versus when it could hurt."

More information: Nicole Davis et al, "I'm Only Human? The Role of Racial Stereotypes, Humanness, and Satisfaction in Transactions with Anthropomorphic Sales Bots," Journal of the Association for Consumer Research (2022). DOI: 10.1086/722703