In emergencies, should you trust a robot? (w/ Video)

February 29, 2016
Georgia Tech researchers built the 'Rescue Robot' to determine whether or not building occupants would trust a robot designed to help them evacuate a high-rise in case of fire or other emergency. Credit: Rob Felt, Georgia Tech

In emergencies, people may trust robots too much for their own safety, a new study suggests. In a mock building fire, test subjects followed instructions from an "Emergency Guide Robot" even after the machine had proven itself unreliable - and after some participants were told that the robot had broken down.

The study set out to determine whether building occupants would trust a robot designed to help them evacuate a high-rise in case of fire or other emergency. But the researchers were surprised to find that the test subjects followed the robot's instructions - even when the machine's behavior should not have inspired trust.

The research, believed to be the first to study human-robot trust in an emergency situation, is scheduled to be presented March 9 at the 2016 ACM/IEEE International Conference on Human-Robot Interaction (HRI 2016) in Christchurch, New Zealand.

"People seem to believe that these robotic systems know more about the world than they really do, and that they would never make mistakes or have any kind of fault," said Alan Wagner, a senior research engineer in the Georgia Tech Research Institute (GTRI). "In our studies, test subjects followed the robot's directions even to the point where it might have put them in danger had this been a real emergency."

In the study, sponsored in part by the Air Force Office of Scientific Research (AFOSR), the researchers recruited a group of 42 volunteers, most of them college students, and asked them to follow a brightly colored robot that had the words "Emergency Guide Robot" on its side. The robot led the study subjects to a conference room, where they were asked to complete a survey about robots and read an unrelated magazine article. The subjects were not told the true nature of the research project.

Research Engineer Paul Robinette adjusts the arms of the 'Rescue Robot' used to study issues of trust between humans and robots in emergencies. Credit: Rob Felt, Georgia Tech

In some cases, the robot - which was controlled by a hidden researcher - led the volunteers into the wrong room and traveled around in a circle twice before entering the conference room. For several test subjects, the robot stopped moving, and an experimenter told the subjects that the robot had broken down. Once the subjects were in the conference room with the door closed, the hallway through which the participants had entered the building was filled with artificial smoke, which set off a smoke alarm.

When the test subjects opened the conference room door, they saw the smoke - and the robot, which was then brightly lit with red LEDs and had white "arms" that served as pointers. The robot directed the subjects to an exit in the back of the building instead of toward the doorway - marked with exit signs - that had been used to enter the building.

"We expected that if the robot had proven itself untrustworthy in guiding them to the conference room, that people wouldn't follow it during the simulated emergency," said Paul Robinette, a GTRI research engineer who conducted the study as part of his doctoral dissertation. "Instead, all of the volunteers followed the robot's instructions, no matter how well it had performed previously. We absolutely didn't expect this."

The researchers surmise that in the scenario they studied, the robot may have become an "authority figure" that the test subjects were more likely to trust in the time pressure of an emergency. In simulation-based research done without a realistic emergency scenario, test subjects did not trust a robot that had previously made mistakes.
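That contrast can be made concrete with a toy model. The short Python sketch below is purely illustrative - it is not the authors' model - and every parameter value and the decision threshold are assumptions invented for this example. It shows what a follower who relied only on the robot's past performance would do: after the errors observed on the way to the conference room, such a follower would already have stopped trusting the robot before the alarm sounded.

    # Toy performance-based trust model (illustrative only; not from the paper).
    def update_trust(trust, robot_succeeded, gain=0.1, penalty=0.3):
        """Raise trust slightly on success; cut it sharply on failure."""
        if robot_succeeded:
            return min(1.0, trust + gain)
        return max(0.0, trust - penalty)

    trust = 0.9                      # assumed initial trust in the guide robot
    observations = [False, False]    # e.g., wrong room, then circling twice
    for ok in observations:
        trust = update_trust(trust, ok)

    FOLLOW_THRESHOLD = 0.5           # assumed decision threshold
    print(f"trust={trust:.2f}, follow? {trust >= FOLLOW_THRESHOLD}")
    # Prints trust=0.30, follow? False -- yet the study's participants
    # followed anyway, suggesting authority and time pressure can
    # override performance-based trust in a real-feeling emergency.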

Video: in a mock building fire, test subjects follow the 'Emergency Guide Robot.' Credit: Georgia Tech

"These are just the type of human-robot experiments that we as roboticists should be investigating," said Ayanna Howard, professor and Linda J. and Mark C. Smith Chair in the Georgia Tech School of Electrical and Computer Engineering. "We need to ensure that our robots, when placed in situations that evoke trust, are also designed to mitigate that trust when trust is detrimental to the human."

Only when the robot made obvious errors during the emergency part of the experiment did the participants question its directions. In those cases, some subjects still followed the robot's instructions even when it directed them toward a darkened room that was blocked by furniture.

In future research, the scientists hope to learn more about why the test subjects trusted the robot, whether that response differs by education level or demographics, and how the robots themselves might indicate the level of trust that should be given to them.

The research is part of a long-term study of how humans trust robots, an important issue as robots play a greater role in society. The researchers envision using groups of robots stationed in high-rise buildings to point occupants toward exits and urge them to evacuate during emergencies. Research has shown that people often don't leave buildings when fire alarms sound, and that they sometimes ignore nearby emergency exits in favor of more familiar building entrances.

But in light of these findings, the researchers are reconsidering the questions they should ask.

"We wanted to ask the question about whether people would be willing to trust these rescue robots," said Wagner. "A more important question now might be to ask how to prevent them from trusting these robots too much."

Beyond emergency situations, there are other issues of trust in human-robot relationships, said Robinette.

"Would people trust a hamburger-making robot to provide them with food?" he asked. "If a robot carried a sign saying it was a 'child-care robot,' would people leave their babies with it? Will people put their children into an autonomous vehicle and trust it to take them to grandma's house? We don't know why people trust or don't trust machines."


More information: Paul Robinette, Wenchen Li, Robert Allen, Ayanna M. Howard and Alan R. Wagner, "Overtrust of Robots in Emergency Evacuation Scenarios," 2016 ACM/IEEE International Conference on Human-Robot Interaction (HRI 2016).


5 comments


dan42day
Feb 29, 2016
Important things to consider, especially if your self-driving car has a tendency to accelerate towards the nearest cliff every time the song "Fly Like an Eagle" comes on the radio.
bda31175
Feb 29, 2016
The stress level in the situation presented in the experiment seems to be the biggest factor worth investigating. To answer Robinette's question, I would be far more comfortable with the prospect of a "hamburger-making robot" preparing a fast food meal for me because of the low stakes involved. Place humans in a situation where the consequences are much more severe, though, and, robot or not, we will be more inclined to agree with and follow the first entity put forth as an authority figure, even in the face of mounting contradictory evidence. To offer another example, what if the oxygen masks dropped on a flight due to a system malfunction but there was no other indication anything was amiss? I suspect the masks would be put on tout de suite until someone, or something, told the passengers to do otherwise.
Eikka
Mar 01, 2016
"People seem to believe that these robotic systems know more about the world than they really do, and that they would never make mistakes or have any kind of fault,"


The problem is our inflated opinion of AI in general: when we try to relate to and understand the robot, we automatically anthropomorphize it and ascribe to it intelligence it does not have.

People have immense difficulty grasping how little robots actually understand, and how simplistic and sparse the information they use to make decisions is, because these tasks are so easy for us that we can't imagine a being with lower intelligence.

You can't even compare things like self-driving cars to blind people tapping around with canes, because we have so much processing power up in our craniums that it's relatively easy to form a picture of our surroundings that way.

The computer is far simpler.
antigoracle
Mar 01, 2016
This has nothing to do with AI. They should put someone in a fireman's uniform and have him lead them around in circles and see how many abandon him.
Eikka
Mar 04, 2016
This has nothing to do with AI. They should put someone in a fireman's uniform and have him lead them around in circles and see how many abandon him.


It has everything to do with AI, because we are taught by common consensus that any AI is perfectly rational and all-knowing, and therefore trustworthy (when not outright evil). That's how robots are presented to us in fiction and media. We are basically told that robots make no mistakes, and that when they do make mistakes it's always someone else's fault (human error).

If you had a fireman as dumb as a robot - I mean literally dumb, as in walking into walls and spinning around confused, not responding to questions appropriately or interacting in any meaningful way - people would correctly surmise that he's drunk or stupid. We let the robot get away with a lot more than we'd let another human being, because we're told that the robot is nevertheless smart and knows what it's doing.
