People favour expressive, communicative robots over efficient and effective ones

August 19, 2016
BERT2, a humanoid robot assistant, hands something to a human. Credit: University of Bristol

Making an assistive robot partner expressive and communicative is likely to make it more satisfying to work with and lead users to trust it more, even if it makes mistakes, a new study suggests.

But the research also shows that giving robots human-like traits could have a flip side: users may even lie to the robot to avoid hurting its feelings.

Researchers from University College London and the University of Bristol experimented with a humanoid assistive robot helping users to make an omelette. The robot was tasked with passing the eggs, salt and oil but dropped one of the polystyrene eggs in two of the conditions and then attempted to make amends.

The aim of the study was to investigate how a robot can recover a user's trust when it makes a mistake, and how it can communicate its erroneous behaviour to somebody who is working with it, either at home or at work.

The study suggests that, for the majority of users, a communicative, expressive robot is preferable to a more efficient, less error-prone one, despite taking 50 per cent longer to complete the task.

Users reacted well to an apology from the robot that was able to communicate, and were particularly receptive to its sad facial expression. The researchers say this is likely to have reassured them that it 'knew' it had made a mistake.

At the end of the interaction, the communicative robot was programmed to ask participants whether they would give it the job of kitchen assistant, but they could only answer yes or no and were unable to qualify their answers.

Some were reluctant to answer and most looked uncomfortable. One person was under the impression that the robot looked sad when he said 'no', when it had not been programmed to appear so. Another complained of emotional blackmail, and a third went as far as to lie to the robot.

Adriana Hamacher, who conceived the study as part of her MSc in Human Computer Interaction at UCL, said: "We would suggest that, having seen it display human-like emotion when the egg dropped, many participants were now pre-conditioned to expect a similar reaction and therefore hesitated to say no; they were mindful of the possibility of a display of further human-like distress.

"Human-like attributes, such as regret, can be powerful tools in negating dissatisfaction but we must identify with care which specific traits we want to focus on and replicate. If there are no ground rules then we may end up with robots with different personalities, just like the people designing them."

Professor Kerstin Eder, who leads the Verification and Validation for Safety in Robots research theme at the Bristol Robotics Laboratory and co-supervised Adriana's project, said: "Trust in our counterparts is fundamental for successful interaction. Adriana's study gives key insights into how communication and emotional expressions from robots can mitigate the impact of unexpected behaviour in collaborative robotics. Complementing thorough verification and validation with sound understanding of these human factors will help engineers design robotic assistants that people can trust."

Adriana's project was aligned with the EPSRC-funded project Trustworthy Robotic Assistants, in which new verification and validation techniques are being developed to ensure the safety and trustworthiness of the machines that will enhance our quality of life in the future.

The research will be presented at the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), taking place from 26 to 31 August in New York City.

More information: A pre-publication copy is available at arxiv.org/pdf/1605.08817.pdf
