Robots fighting wars could be blamed for mistakes on the battlefield

Apr 23, 2012
This image shows someone arguing with Robovie over the robot's mistake while playing a game. Credit: Human Interaction With Nature and Technological Systems at the University of Washington

As militaries develop autonomous robotic warriors to replace humans on the battlefield, new ethical questions emerge. If a robot in combat has a hardware malfunction or programming glitch that causes it to kill civilians, do we blame the robot, or the humans who created and deployed it?

Some argue that robots do not have free will and therefore cannot be held morally accountable for their actions. But University of Washington researchers are finding that people don't have such a clear-cut view of robots.

The researchers' latest results show that people attribute a moderate amount of moral accountability and other human characteristics to robots that are equipped with social capabilities and are capable of harming humans. In this case the harm was financial, not life-threatening, but it still demonstrated how people react to robot errors.

The findings imply that as robots become more sophisticated and humanlike, the public may hold them morally accountable for causing harm.

"We're moving toward a world where robots will be capable of harming humans," said lead author Peter Kahn, a UW associate professor of psychology. "With this study we're asking whether a robotic entity is conceptualized as just a tool, or as some form of a technological being that can be held responsible for its actions."

The paper was recently published in the Proceedings of the International Conference on Human-Robot Interaction.

In the study, Kahn and his research team had 40 undergraduate students play a scavenger hunt with a humanlike robot, Robovie. The robot appeared autonomous, but it was remotely controlled by a researcher concealed in another room.
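In human-robot interaction research, this kind of setup, where a hidden operator drives a robot that participants believe is autonomous, is commonly called a "Wizard of Oz" protocol. As a rough sketch of the idea (purely illustrative; the script lines and function names below are invented, not code from the study), the controller can be little more than a loop that relays the concealed operator's menu choices to the robot as scripted speech:

```python
# Minimal Wizard-of-Oz sketch (hypothetical; not from the UW study).
# A hidden operator picks scripted lines for the robot to deliver, so
# participants experience the robot as if it were autonomous.

SCRIPT = {
    "greet": "Hi, I'm Robovie. Ready for the scavenger hunt?",
    "miscount": "You found five objects. That is not enough for the prize.",
    "defend": "I counted carefully. My count is correct.",
}

def robot_say(utterance: str) -> None:
    """Stand-in for the robot's speech output."""
    print(f"[Robovie] {utterance}")

def operator_console() -> None:
    """The concealed operator chooses which scripted line to trigger."""
    while True:
        choice = input(f"Operator, choose one of {sorted(SCRIPT)} or 'quit': ")
        if choice == "quit":
            break
        if choice in SCRIPT:
            robot_say(SCRIPT[choice])

if __name__ == "__main__":
    operator_console()
```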

After a bit of small talk with the robot, each participant had two minutes to locate objects from a list of items in the room. All of them found at least seven objects, the minimum needed to claim the $20 prize. But when their time was up, Robovie claimed they had found only five.

Then came the crux of the experiment: participants' reactions to the robot's miscount.

"Most argued with Robovie," said co-author Heather Gary, a UW doctoral student in developmental psychology. "Some accused Robovie of lying or cheating."

(Watch a video of one of the participants disagreeing with Robovie: http://depts.washington.edu/hints/video1b.shtml.)

When interviewed, 65 percent of participants said Robovie was to blame – at least to a certain degree – for wrongly scoring the scavenger hunt and unfairly denying the participants the $20 prize.

This suggests that as robots gain capabilities in language and social interactions, "it is likely that many people will hold a humanoid as partially accountable for a harm that it causes," the researchers wrote.

They argue that as militaries shift from human to robotic warfare, the chain of command that controls robots and the moral accountability of robotic warriors should be factored into jurisprudence and the Laws of Armed Conflict for cases in which robots harm humans.

Kahn is also concerned about the morality of robotic warfare, period. "Using robotic warfare, such as drones, distances us from war, can numb us to human suffering, and can make warfare more likely," he said.

More information: PDF of the paper: depts.washington.edu/hints/pub… I_2012_corrected.pdf

User comments (6)


BrettC
not rated yet Apr 23, 2012
Robots are just a tool. Arguably, a soldier is also just a tool to his/her government. As for the morality of warfare, that sounds like an oxymoron. The person or people who decided to put the robot, or soldier, in the position of making that error are responsible. When a soldier makes an error, the consequences are usually broader than just blaming the soldier. It's no different than if a missile misses its target. When robots become more sentient, the situation becomes more complex: the robot will have to be dealt with like a soldier rather than a piece of failed equipment. That process is already in place.
BrettC
not rated yet Apr 23, 2012
For that matter, creating a sentient being for the purpose of sending it to war is not a moral act. It would be like having children for the sole purpose of creating soldiers.
OldBlackCrow
not rated yet Apr 25, 2012
come on guys... you're missing the point! If robots are making mistakes on the battlefield, we can blame THEM. We now have a new scapegoat for major disasters. 40,000 innocent civilians killed? It was the robot's fault.

:-/
Tewk
1 / 5 (1) May 15, 2012
Creating a sentient robot race would probably be humanity's ending... unless the robots develop "compassion" or a desire to build zoos to observe primitive lifeforms.
Anyhow, I worked out the robot ethics problem years ago: if robots are used against my enemy, it's ethical. If they are used against me (or my friends), it's unethical.
Terriva
1 / 5 (1) May 15, 2012
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
James Kingwood
not rated yet Jun 05, 2012
I find it interesting that a robot would be viewed any differently in the field than a regular soldier. They are just tools and have to obey orders, just like any other employee of the military.
