Why humans find faulty robots more likeable

August 4, 2017
One of the study participants interacting with the robot during the experiment. Credit: Center for Human-Computer Interaction

It has been argued that the ability of humans to recognize social signals is crucial to mastering social intelligence. But can robots learn to read human social cues and adapt or correct their own behavior accordingly?

In a recent study, researchers examined how people react to robots that exhibit faulty behavior compared to perfectly performing robots. The results, published in Frontiers in Robotics and AI, show that the participants took a significantly stronger liking to the faulty robot than to the robot that interacted flawlessly.

"Our results show that decoding a human's can help the robot understand that there is an error and subsequently react accordingly," says corresponding author Nicole Mirnig, PhD candidate at the Center for Human-Computer Interaction, University of Salzburg, Austria.

Although social robotics is a rapidly advancing field, social robots are not yet at a technical level where they operate without making errors. Nevertheless, most studies in the field are based on the assumption of faultlessly performing robots. "Alternatives resulting from unforeseeable conditions that develop during an experiment are often not further regarded or simply excluded," says Nicole Mirnig. "It lies within the nature of thorough scientific research to pursue a strict code of conduct. However, we suppose that faulty instances of human-robot interaction are full of knowledge that can help us further improve the interactional quality in new dimensions. We think that because most research focuses on perfect interaction, many potentially crucial aspects are overlooked."

To examine the human interaction partners' social signals following a robot error, the research team purposefully programmed faulty behavior into a human-like NAO robot's routine and let participants interact with it. They measured the robot's likability, anthropomorphism, and perceived intelligence, and analyzed the users' reactions when the robot made a mistake. By means of video coding, the researchers replicated findings from earlier studies, showing that humans respond to faulty robot behavior with social signals. Through interviews and user ratings, the team found that, somewhat surprisingly, erroneous robots were not perceived as significantly less intelligent or anthropomorphic than perfectly performing robots. Instead, although the humans recognized the faulty robot's mistakes, they actually rated it as more likeable than its perfectly performing counterpart.
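The paper's task code is not reproduced here, but the fault-injection setup can be pictured with a short sketch. Everything below is an assumption for illustration: the action primitives, the list of fault types, and the fault rate are invented, not taken from the study.

```python
import random

# Hypothetical stand-ins for a robot's real action primitives
# (the study used a human-like NAO robot; these names are illustrative only).
def say(text):
    print("[robot says] " + text)

def wave():
    print("[robot gestures] wave")

# Hypothetical pre-programmed faults: interrupted speech,
# a stray gesture, or simply freezing for a step.
FAULTS = [
    lambda: say("I... I... let me start over."),
    lambda: wave(),
    lambda: None,  # freeze: do nothing this step
]

def run_step(action, fault_rate=0.2):
    """Execute one scripted interaction step, occasionally
    replacing it with a deliberate fault."""
    if random.random() < fault_rate:
        random.choice(FAULTS)()  # inject a scripted error
    else:
        action()

# A short scripted routine with faults injected at random.
routine = [
    lambda: say("Hello, nice to meet you!"),
    lambda: wave(),
    lambda: say("Let's do a task together."),
]
for step in routine:
    run_step(step)
```

In the actual experiment the faults were deliberately scripted into the routine rather than sampled at random; the random trigger here simply keeps the sketch short.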

"Our results showed that the participants liked the faulty robot significantly more than the flawless one. This finding confirms the Pratfall Effect, which states that people's attractiveness increases when they make a mistake," says Nicole Mirnig. "Specifically exploring erroneous instances of interaction could be useful to further refine the quality of human-robotic interaction. For example, a robot that understands that there is a problem in the interaction by correctly interpreting the user's social signals, could let the user know that it understands the problem and actively apply error recovery strategies."

These findings have exciting implications for the field of social robotics, since they emphasize how important it is for robot creators to keep potential imperfections in mind when designing robots. Rather than assuming that a robot will behave perfectly, embracing the flaws of social robot technology might pave the way for robots that make mistakes and learn from them. This would also make the robots more likeable to humans. "Studying the sources of imperfect robot behavior will lead to more believable robot characters and more natural interaction," concludes Nicole Mirnig.


More information: Nicole Mirnig et al., "To Err Is Robot: How Humans Assess and Act toward an Erroneous Social Robot," Frontiers in Robotics and AI (2017). DOI: 10.3389/frobt.2017.00021
