Stop killer robots before it is too late, scientists tell Davos forum

January 22, 2016 by Michel Sailhan
The "Campaign to Stop Killer Robots" was launched in London in 2013
The "Campaign to Stop Killer Robots" was launched in London in 2013

The world must act quickly to avert a future in which autonomous robots with artificial intelligence roam the battlefields killing humans, scientists and arms experts warned at an elite gathering in the Swiss Alps.

Rules must be agreed to prevent the development of such weapons, they said at a January 19-23 meeting of billionaires, scientists and political leaders in the snow-covered ski resort of Davos.

Angela Kane, the German UN High Representative for Disarmament Affairs from 2012-2015, said the world had been slow to take pre-emptive measures to protect humanity from the lethal technology.

"It may be too late," she told a debate in Davos.

"There are many countries and many representatives in the international community that really do not understand what is involved. This development is something that is limited to a certain number of advanced countries," Kane said.

The deployment of autonomous weapons would represent a dangerous new era in warfare, scientists said.

"We are not talking about drones, where a human pilot is controlling the drone," said Stuart Russell, professor of computer science at University of California, Berkeley.

"We are talking about autonomous weapons, which means that there is no one behind it. AI: weapons," he told a forum in Davos. "Very precisely, weapons that can locate and attack targets without human intervention."

Arnold Schwarzenegger's "Terminator" movies popularised the idea that AI and killer robots could lead to the end of humans

Robot chaos on battlefield

Russell said he did not foresee a day in which robots fight the wars for humans and at the end of the day one side says: "OK you won, so you can have all our women."

But some 1,000 science and technology chiefs, including British physicist Stephen Hawking, said in an open letter last July that the development of weapons with a degree of autonomous decision-making capacity could be feasible within years, not decades.

They called for a ban on offensive autonomous weapons that are beyond meaningful human control, warning that the world risked sliding into an artificial intelligence arms race and raising alarm over the risks of such weapons falling into the hands of violent extremists.

British scientist Stephen Hawking signed an open letter in July 2015 warning against the development of weapons with a degree of autonomous decision-making capacity

"The question is can these machines follow the rules of war?" Russell said.

'Beyond comprehension'

How, for example, could an autonomous weapon differentiate between civilians, soldiers, resistance fighters and rebels? How could it know that it should not kill a pilot who has ejected from a plane and is parachuting to the ground?
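
One way to make the difficulty concrete is the base-rate problem. The short Python sketch below is purely illustrative (the numbers are hypothetical, not from the article): even a sensor that is right 99 percent of the time will, in an environment where genuine combatants are rare, flag mostly civilians.

```python
# Hypothetical numbers for illustration only: even a highly accurate
# classifier performs poorly when real targets are rare (Bayes' rule).
p_combatant = 0.01           # most people near a battlefield are civilians
sensitivity = 0.99           # P(flagged | combatant)
false_positive_rate = 0.01   # P(flagged | civilian)

p_flagged = (sensitivity * p_combatant
             + false_positive_rate * (1 - p_combatant))
p_combatant_given_flag = sensitivity * p_combatant / p_flagged

print(f"P(combatant | flagged) = {p_combatant_given_flag:.2f}")
# -> 0.50: under these assumptions, half of all autonomous
#    engagements would strike civilians.
```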

"I am against robots for ethical reasons but I do not believe ethical arguments will win the day. I believe strategic arguments will win the day," Russell said.

A sentry robot freezes a hypothetical intruder by pointing its machine gun during a 2006 test in Cheonan, South Korea

The United States had renounced biological weapons because of the risk that one day they could be deployed by "almost anybody", he said. "I hope this will happen with robots."

Alan Winfield, professor of electronic engineering at the University of the West of England, warned that removing humans from battlefield decision-making would have grave consequences.

"It means that humans are deprived from moral responsibility," Winfield said.

Moreover, the reaction of the robots may be hard to predict, he said: "When you put a robot in a chaotic environment, it behaves chaotically."
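
Winfield's point can be illustrated with a toy simulation. The sketch below (entirely hypothetical, not from the article) drives a trivially simple deterministic policy from two nearly identical states of a chaotic environment, here the standard logistic map, and finds the step at which the robot's actions diverge.

```python
# Toy illustration: a deterministic robot policy in a chaotic
# environment produces divergent behaviour from near-identical starts.

def env_step(x, r=3.9):
    """Logistic map with r=3.9: a textbook chaotic system."""
    return r * x * (1.0 - x)

def robot_action(x):
    """A fully deterministic, rule-based policy."""
    return "advance" if x > 0.5 else "hold"

def run(x, steps=60):
    actions = []
    for _ in range(steps):
        actions.append(robot_action(x))
        x = env_step(x)
    return actions

a = run(0.400000)  # two starting conditions differing by
b = run(0.400001)  # one part in a million
diverge = next((i for i, (p, q) in enumerate(zip(a, b)) if p != q), None)
print(f"Action sequences first differ at step {diverge}")
```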

Roger Carr, chairman of the British aerospace and defence group BAE, agreed.

"If you remove ethics and judgement and morality from human endeavour whether it is in peace or war, you will take humanity to another level which is beyond our comprehension," Carr warned.

"You equally cannot put something into the field that, if it malfunctions, can be very destructive with no control mechanism from a human. That is why the umbilical link, man to machine, is not only to decide when to deploy the weapon but it is also the ability to stop the process. Both are equally important."

Comments

megaburp45 (Jan 22, 2016)
What's to prevent umbilical-designed AI weapons from using said intelligence to figure out how to cut the cord? Safeguards to prevent that could potentially be overridden as well.
megaburp45 (Jan 22, 2016)
Of course, I'm presuming the umbilical connection is a remote, wireless program or software for human commands.
megaburp45 (Jan 22, 2016)
Designing any software program for AI human commands leaves it susceptible to both internal corruption and/or outside interference from hacks and/or the enemy rather than the creators, who themselves may also become suspect.
Protoplasmix (Jan 22, 2016)
So somehow the AI would have to conclude that killing a human(s) is the "intelligent" thing to do under any circumstances. I have to wonder what kind of crazy-ass machine would kill a human being and also have the audacity to regard itself as intelligent.
kochevnik (Jan 22, 2016)
> Designing any software program for AI human commands leaves it susceptible to both internal corruption and/or outside interference from hacks and/or the enemy rather than the creators, who themselves may also become suspect.
Hacks could be beneficial. Cracks are probably from the opponent.
rgw (Jan 23, 2016)
> What's to prevent umbilical-designed AI weapons from using said intelligence to figure out how to cut the cord? Safeguards to prevent that could potentially be overridden as well.
Rational, motivated, killer robots, cool! If only Earth could have stopped the manufacture of oxygen, there would now be no rational, motivated, killer mammals.
baudrunner (Jan 24, 2016)
> How, for example, could an autonomous weapon differentiate between civilians, soldiers, resistance fighters and rebels? How could it know that it should not kill a pilot who has ejected from a plane and is parachuting to the ground?
Simple. Everybody in the battle - AIs and humans - has an ID chip. This guy comes from "one of those countries". In fact, AI would be a vastly superior alternative, because they won't be shooting any of our boys in the back accidentally. Too many of our personnel are being killed needlessly by friendly fire.
> When you put a robot in a chaotic environment, it behaves chaotically.
Well, you know, better leave the robot building to us, then.
