Robots fighting wars could be blamed for mistakes on the battlefield

Apr 23, 2012
This image shows a participant arguing with Robovie over the robot's mistake while playing a game. Credit: Human Interaction With Nature and Technological Systems at the University of Washington

As militaries develop autonomous robotic warriors to replace humans on the battlefield, new ethical questions emerge. If a robot in combat has a hardware malfunction or programming glitch that causes it to kill civilians, do we blame the robot, or the humans who created and deployed it?

Some argue that robots do not have free will and therefore cannot be held morally accountable for their actions. But University of Washington researchers are finding that people don't have such a clear-cut view of robots.

The researchers' latest results show that humans apply a moderate amount of morality and other human characteristics to robots that are equipped with social capabilities and are capable of harming humans. In this case, the harm was financial, not life-threatening. But it still demonstrated how humans react to robot errors.

The findings imply that as robots become more sophisticated and humanlike, the public may hold them morally accountable for causing harm.

"We're moving toward a world where robots will be capable of harming humans," said lead author Peter Kahn, a UW associate professor of psychology. "With this study we're asking whether a robotic entity is conceptualized as just a tool, or as some form of a technological being that can be held responsible for its actions."

The paper was recently published in the Proceedings of the International Conference on Human-Robot Interaction.

In the study, Kahn and his research team had 40 undergraduate students play a scavenger hunt with a humanlike robot, Robovie. The robot appeared autonomous, but it was remotely controlled by a researcher concealed in another room.

After a bit of small talk with the robot, each participant had two minutes to locate objects from a list of items in the room. All of them found at least the minimum of seven objects needed to claim the $20 prize. But when their time was up, Robovie claimed they had found only five.

Then came the crux of the experiment: participants' reactions to the robot's miscount.

"Most argued with Robovie," said co-author Heather Gary, a UW doctoral student in developmental psychology. "Some accused Robovie of lying or cheating."

(Watch a video of one of the participants disagreeing with Robovie: http://depts.washington.edu/hints/video1b.shtml.)

When interviewed, 65 percent of participants said Robovie was to blame, at least to a certain degree, for wrongly scoring the scavenger hunt and unfairly denying them the $20 prize.

This suggests that as robots gain capabilities in language and social interactions, "it is likely that many people will hold a humanoid as partially accountable for a harm that it causes," the researchers wrote.

They argue that as militaries transform from human to robotic warfare, the chain of command that controls robots and the moral accountability of robotic warriors should be factored into jurisprudence and the Laws of Armed Conflict for cases when the robots hurt humans.

Kahn is also concerned about the ethics of robotic warfare, period. "Using robotic warfare, such as drones, distances us from war, can numb us to human suffering, and make warfare more likely," he said.

More information: PDF of the paper: depts.washington.edu/hints/pub… I_2012_corrected.pdf

User comments (6)

BrettC
Apr 23, 2012
Robots are just a tool. Arguably, a soldier is also just a tool to his or her government. As for the "morality of warfare," that sounds like an oxymoron. The person or people who decided to put the robot, or soldier, in the position of making that error are responsible. When a soldier makes an error, the consequences are usually broader than just blaming the soldier. It's no different than if a missile misses its target. When robots become more sentient, the situation becomes more complex. The robot will have to be dealt with like a soldier rather than a piece of failed equipment. That process is already in place.
BrettC
Apr 23, 2012
For that matter, creating a sentient being for the purpose of sending it to war is not a moral act. It would be like having children for the sole purpose of creating soldiers.
OldBlackCrow
Apr 25, 2012
Come on guys... you're missing the point! If robots are making mistakes on the battlefield, we can blame THEM. We now have a new scapegoat for major disasters. 40,000 innocent civilians killed? It was the robot's fault.

:-/
Tewk
May 15, 2012
Creating a sentient robot race would probably be humanity's end... unless the robots develop "compassion" or a desire to build zoos to observe primitive lifeforms.
Anyhow, I worked out the robot-ethics problem years ago: if robots are used against my enemy, it's ethical. If they are used against me (or my friends), it's unethical.
Terriva
May 15, 2012
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
James Kingwood
not rated yet Jun 05, 2012
I find it interesting that a robot would be viewed any differently on the field than a regular soldier. They are just tools and have to obey orders just like any other employee of the military.