Robots fighting wars could be blamed for mistakes on the battlefield

April 23, 2012, University of Washington
This image shows someone arguing with Robovie over the robot's mistake while playing a game. Credit: Human Interaction With Nature and Technological Systems at the University of Washington

As militaries develop autonomous robotic warriors to replace humans on the battlefield, new ethical questions emerge. If a robot in combat has a hardware malfunction or programming glitch that causes it to kill civilians, do we blame the robot, or the humans who created and deployed it?

Some argue that robots do not have free will and therefore cannot be held morally accountable for their actions. But University of Washington researchers are finding that people don't have such a clear-cut view of robots.

The researchers' latest results show that humans apply a moderate amount of morality and other human characteristics to robots that are equipped with social capabilities and are capable of harming humans. In this case, the harm was financial, not life-threatening. But it still demonstrated how humans react to robot errors.

The findings imply that as robots become more sophisticated and humanlike, the public may hold them morally accountable for causing harm.

"We're moving toward a world where robots will be capable of harming humans," said lead author Peter Kahn, a UW associate professor of psychology. "With this study we're asking whether a robotic entity is conceptualized as just a tool, or as some form of a technological being that can be held responsible for its actions."

The paper was recently published in the Proceedings of the International Conference on Human-Robot Interaction.

In the study, Kahn and his research team had 40 undergraduate students play a scavenger hunt with a humanlike robot, Robovie. The robot appeared autonomous, but it was remotely controlled by a researcher concealed in another room.

After a bit of small talk with the robot, each participant had two minutes to locate objects from a list of items in the room. They all found the minimum, seven, to claim the $20 prize. But when their time was up, Robovie claimed they had found only five objects.

Then came the crux of the experiment: participants' reactions to the robot's miscount.

"Most argued with Robovie," said co-author Heather Gary, a UW doctoral student in developmental psychology. "Some accused Robovie of lying or cheating."


When interviewed, 65 percent of participants said Robovie was to blame – at least to a certain degree – for wrongly scoring the scavenger hunt and unfairly denying the participants the $20 prize.

This suggests that as robots gain capabilities in language and social interactions, "it is likely that many people will hold a humanoid as partially accountable for a harm that it causes," the researchers wrote.

They argue that as militaries transform from human to robotic warfare, the chain of command that controls robots and the moral accountability of robotic warriors should be factored into jurisprudence and the Laws of Armed Conflict for cases when the robots hurt humans.

Kahn is also concerned about the ethics of robotic warfare, period. "Using robotic warfare, such as drones, distances us from war, can numb us to human suffering, and make warfare more likely," he said.


More information: PDF of the paper: … I_2012_corrected.pdf





not rated yet Apr 23, 2012
Robots are just a tool. Arguably, a soldier is also just a tool to his/her government. As for the morality of warfare, that sounds like an oxymoron. The people who decided to put the robot, or soldier, in a position to make that error are responsible. When a soldier makes an error, the consequences are usually broader than just blaming the soldier. It's no different than if a missile misses its target. When robots become more sentient, the situation becomes more complex. The robot will have to be dealt with like a soldier rather than a piece of failed equipment. That process is already in place.
not rated yet Apr 23, 2012
For that matter, creating a sentient being for the purpose of sending it to war is not a moral act. It would be like having children for the sole purpose of creating soldiers.
not rated yet Apr 25, 2012
Come on guys... you're missing the point! If robots are making mistakes on the battlefield, we can blame THEM. We now have a new scapegoat for major disasters. 40,000 innocent civilians killed? It was the robot's fault.

1 / 5 (1) May 15, 2012
Creating a sentient robot race would probably be humanity's ending... unless the robots develop "compassion" or a desire to build zoos to observe primitive lifeforms.
Anyhow, I worked out the robot ethics problem years ago: if robots are used against my enemy, it's ethical. If they are used against me (or friends), it's unethical.
1 / 5 (1) May 15, 2012
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
James Kingwood
not rated yet Jun 05, 2012
I find it interesting that a robot would be viewed any differently on the field than a regular soldier. They are just tools and have to obey orders just like any other employee of the military.
