Why we should welcome 'killer robots', not ban them

The open letter signed by more than 12,000 prominent people calling for a ban on artificially intelligent killer robots, connected to arguments for a UN ban on the same, is misguided and perhaps even reckless.

Wait, misguided? Reckless? Let me offer some context. I am a robotics researcher and have spent much of my career reading and writing about military robots, fuelling the very scare campaign that I now vehemently oppose.

I was even one of the hundreds of people who, in the early days of the debate, gave their support to the International Committee for Robot Arms Control (ICRAC) and the Campaign to Stop Killer Robots.

But I've changed my mind.

Why the radical change in opinion? In short, I came to realise the following.

The human connection

The signatories are just scaremongers who are trying to ban autonomous weapons that "select and engage targets without human intervention", which they say will be coming to a battlefield near you within "years, not decades".

But, when you think about it critically, no robot can really kill without human intervention. Yes, robots are probably already capable of killing people using sophisticated mechanisms that resemble those used by humans, meaning that humans don't necessarily need to oversee a lethal system while it is in use. But that doesn't mean that there is no human in the loop.

We can model the brain, human learning and decision making to the point that these systems seem capable of generating creative solutions to killing people, but humans are very much involved in this process.

Indeed, it would be preposterous to overlook the role of programmers, cognitive scientists, engineers and others involved in building these autonomous systems. And even if we did, what of the commander, military force and government that made the decision to use the system? Should we overlook them, too?

We already have automatic killing machines

We already have weapons of the kind for which a ban is sought.

The Australian Navy, for instance, has successfully deployed highly automated weapons in the form of close-in weapons systems (CIWS) for many years. These systems are essentially guns that can fire thousands of rounds of ammunition per minute, either autonomously via a computer-controlled system or under manual control, and are designed to provide surface vessels with a last defence against anti-ship missiles.

When engaged autonomously, CIWSs perform functions normally performed by other systems and people, including search, detection, threat assessment, acquisition, targeting and target destruction.

This system would fall under the definition provided in the open letter if we were to follow the signatories' logic. But you don't hear of anyone objecting to these systems. Why? Because they're employed far out at sea and only in cases where an object is approaching in a hostile fashion, usually descending in the direction of the ship at rapid speed.

That is, they're employed only in environments and contexts where the risk of killing an innocent civilian is virtually nil, far lower than in regular combat.

So why can't we focus on existing laws, which already stipulate that such systems be used only in narrow and particular circumstances?

The real fear is of non-existent thinking robots

It seems that the real worry that has motivated many of the 12,000-plus individuals to sign the anti-killer-robot petition is not about machines that select and engage targets without human intervention, but rather the development of sentient robots.

Given the advances in technology over the past century, it is tempting to fear thinking robots. We did leap from the first powered flight to space flight in less than 70 years, so why can't we create a truly intelligent robot (or just one that's too autonomous to hold a human responsible but not autonomous enough to hold the robot itself responsible) if we have a bit more time?

There are a number of good reasons why this will never happen. One explanation might be that we have a soul that simply can't be replicated by a machine. While this tends to be the favourite of spiritual types, there are other natural explanations. For instance, there is a logical argument to suggest that certain brain processes are not computational or algorithmic in nature and thus impossible to truly replicate.

Once people understand that any system we can conceive of today – whether or not it is capable of learning or highly complex operation – is the product of programming and artificial intelligence work that traces back to its programmers and system designers, and that we'll never have genuine thinking robots, it should become clear that the argument for a total ban on autonomous weapons rests on shaky ground.

Who plays by the rules?

UN bans are also virtually useless. Just ask anyone who's lost a leg to a recently laid anti-personnel mine. The sad fact of the matter is that "bad guys" don't play by the rules.

Now that you understand why I changed my mind, I invite the signatories to the killer robot petition to note these points, reconsider their position and join me on the "dark side" in arguing for more effective and practical regulation of what are really just highly automated systems.


This story is published courtesy of The Conversation (under Creative Commons-Attribution/No derivatives).

Citation: Why we should welcome 'killer robots', not ban them (2015, July 30) retrieved 24 August 2019 from https://phys.org/news/2015-07-killer-robots.html


User comments

RMQ
Jul 30, 2015
"We should welcome killer robots"

And we should also welcome nuclear weapons, missiles, submarines, etc. Just add some cute word play and there you go.

It reminds me that humans are the only living creatures that try to convince themselves that sharks, bears and tigers are not to be feared... lol

Jul 30, 2015
This guy needs to get a view of reality. Every development in killing was supposed to make war unthinkable. Did it? Did the machine gun make war unthinkable? Flame-throwers, to burn folk alive? Civilian bombing? Nuclear Weapons?

Any excuse will do for those who NEED to prove something to themselves . . . usually with others taking the chances.

Jul 30, 2015
And we should also welcome nuclear weapons, missiles, submarines, etc. Just add some cute word play and there you go
You are naive. Wars are unavoidable given the existence of religion-dominated cultures which force population growth to the point of instability.

The only way to prevail against superior numbers on the battlefield is with superior technologies capable of killing far more of them than they can kill of us.

AI offers ways of imbuing machines with the most selective, and thus the most moral, ways of killing the enemy. Machines are never terrified, furious, confused, in pain, starving, etc. and are thus superior to any human soldier for making decisions in the heat of battle.

Jul 30, 2015
I look forward to the tomb of the unknown killer robot.

Jul 30, 2015
"AI offers ways of imbuing machines with the most selective, and thus the most moral, ways of killing the enemy."
------------------------------------------

Killing: Celebrated by those who are personally insecure, or ignorant of the outcomes. Those who had been in wars before usually learn more mature and intelligent ways of solving problems.

Jul 30, 2015
Lying and posturing: celebrated by psychopaths who seek to better their lot by taking advantage of others.

Those who have had to deal with their foul, self-centered, and destructive ways in the past are obligated to expose them wherever they turn up, and to warn potential victims.

This is the only mature and intelligent and responsible way of dealing with psychopaths.

Jul 30, 2015
check in every day, or some such stupid arbitrary time period, and validate and calibrate the killing that that machine has done that day, and compare it to computerized models of ideal collateral / innocent / bystander / extrajudicial murder. At least do that
-And you do realize that this sort of tracking, evaluation, and refinement is only possible with machine soldiers?

There is no way to do this with humans.

In fact the desire to improve their performance will make this sort of feedback inevitable.

Jul 30, 2015
Some folk have never grown out of quantification of killing. It really saved us in Vietnam, didn't it? Body counts?

Those who have never seen those bodies have no idea of what they speak.

Jul 30, 2015
So the reason we should embrace killing machines is because people who oppose them are just "scaremongers", Australia already has them, and the bad guys don't play by rules so we shouldn't either? Very convincing argument...

Jul 30, 2015
chain of accountability gets longer. Let the distance between Predator pilot
Not true. AI will be able to record exactly what it does and why it does it, in real time, available for analysis, and so would be directly accountable.

Watch this trailer for American Sniper and imagine if it were an AI involved.
https://www.youtu...3u9ay1gs

AI could read the man's lips, AI could call upon resources such as facial recognition. AI could instantly recognize the grenade. AI could shoot it out of the woman's hands before she had the chance to hand it off to the kid. AI could wound rather than kill due to improved accuracy and uninterrupted concentration.

Or it could choose to do nothing based on lack of info.

And AI would record all data input; multispectrum visuals, audio, etc for later analysis.

Having this control without the distraction and indecision that humans inevitably suffer from makes this potential MORE ethical, not less.

Jul 30, 2015
Some folk have never grown out of quantification of killing
Some people have never grown out of the juvenile tendency of using big words they don't understand to try to sound more intellectual.

The mark of a phony.

Your 'quantification of killing' makes no sense, much like most of what you post.

Jul 30, 2015

AI could read the man's lips, AI could call upon resources such as facial recognition. AI could instantly recognize the grenade. AI could shoot it out of the woman's hands before she had the chance to hand it off to the kid. AI could wound rather than kill due to improved accuracy and uninterrupted concentration.

AI would be cheap (it will be a computer chip and software) and all of those wonderful things it could do to "bad guys" it could do to "good guys". Thus amplifying the effect of the few mad terrorists and mass murderers. As long as you can guarantee it will remain so expensive that only wealthy nation states can implement it, go bonkers. Otherwise, the idea is just bonkers.

Jul 30, 2015
Apprehensible - reprehensible? Prehensile? Apprehendable?

I always look words up first.

RMQ
Jul 31, 2015
The whole excuse behind the creation of nuclear weapons was that they would end all wars.

Did they? Indeed, they caused psychological depression, starting with the physicists that created them, like Richard Feynman himself.

Technology does not create peace or happiness; weapons are even worse.

Aug 02, 2015
There are a number of good reasons why this will never happen. One explanation might be that we have a soul that simply can't be replicated by a machine. While this tends to be the favourite of spiritual types, there are other natural explanations. For instance, there is a logical argument to suggest that certain brain processes are not computational or algorithmic in nature and thus impossible to truly replicate.


And this is where you lose all your credibility. "not computational or algorithmic in nature" - Oh yes, and what are they then? Magic? Any physical system has rules, and once those rules are understood the system can be replicated in another medium. Furthermore, you don't have to understand how the system works on a macro level - as long as you can accurately model the neurons and their connections, eventually someone will build a computer model that exactly models a human brain.

How people like this survive in academia is beyond me, it truly is.

Aug 02, 2015
"Furthermore, you don't have to understand how the system works on a macro level - as long as you can accurately model the neurons and their connections, eventually someone will build a computer model that exactly models a human brain."
--------------------------------

Well, yes and no. Duplicating the connections and wiring will not give you an operating human brain. We are really controlled by hormones, which excite us, put us to sleep, and otherwise do the things we ascribe to "free will".

Aug 03, 2015
Humanity has already had a close encounter with the future of killer robots, in the form of a seemingly innocent automaton sent to the United States from Canada. Meet HitchBOT, a robot that simply wants a ride from you...or does it? HitchBOT contains sophisticated chat-bot software that can engage you and potentially identify you as a target...especially if you're from Philadelphia. Luckily, this time some alert citizens of that city recognized the threat and neutralized the robot before it could kill them.

This may be the first robot sent to kill citizens of Philadelphia, but it certainly won't be the last. If you see a HitchBOT, don't talk to it. Whatever you do, don't tell it you're from Philadelphia.
