Killer robot drones are like drugs: Regulate, but resist the urge to ban them

Taranis in flight. Credit: BAE Systems

BAE Systems has revealed that it has successfully test-flown Taranis, its prototype Unmanned Aerial Vehicle.

The test has some people understandably hot under the collar. But while there is much to debate in the detail, the answer to the biggest question of all, whether we should ban such drones, is unequivocal: we should not. As with effective but dangerous drugs, the answer is not prohibition but rigorous testing and regulation of their development.

BAE's video footage shows a sleek boomerang-shaped blade cruising sedately over the Australian outback. Taranis is a stealth aircraft, designed to evade radar. It is pilotless, meaning it can manoeuvre in ways that would cause a human to black out if they were on board. And crucially, it's a step on the way to drones that can make autonomous targeting decisions. More bluntly, it's a step towards killer robots taking to the sky.

It's not difficult to see why the idea of killer robots causes alarm. Some worry that these machines won't be able to distinguish reliably between soldiers and civilians and will end up killing innocents. Others imagine Terminator-style wars between robots and people.

Philosophers get in on the act too, arguing that enabling machines to decide who to kill is a fundamental breach of the conditions of just war. For it is unclear who should be held responsible when things go wrong and a drone kills the wrong targets. It can't be the dumb robot. Nor can it be the soldier who sends it to battle, because he or she only decides whether to use it, not what it's going to do. It can't be the designers, because the whole point is that they have created a system able to make autonomous choices about what to target.

This responsibility argument is smoke and mirrors. The anti-killer-robot campaigners are right when they say now is the time to debate whether this technology is forbidden fruit, better for all if left untouched. They are also right to worry whether killer robots will observe the laws of war. There is no question that killer robots should not be deployed unless they observe those laws with at least the same (sadly inconsistent) reliability as soldiers. But there is no mystery about how to achieve that reliability, nor, once we do, about how to ascribe moral responsibility.

There is an analogy here with medicines. Their effects are generally predictable, but a risk of unpleasant side-effects remains. So we cautiously test new drugs during development and only then license them for prescription. When a drug is prescribed in accordance with the guidelines, we don't hold doctors, drug companies, or the drug itself to account for any bad side-effects that occur. Rather, the body that approves the medicine is responsible for ensuring that its overall outcomes are beneficial.

So too with killer robots. What we need is a thorough regulatory process that tests their capabilities and permits deployment only when they reliably observe the laws of war.

This story is published courtesy of The Conversation (under Creative Commons-Attribution/No derivatives).

