Drones will soon decide who to kill

April 11, 2018 by Peter Lee, The Conversation
Algorithms will soon be able to decide who to target. Credit: US Air Force

The US Army recently announced that it is developing the first drones that can spot and target vehicles and people using artificial intelligence (AI). This is a big step forward. Whereas current military drones are still controlled by people, this new technology will decide who to kill with almost no human involvement.

Once complete, these drones will represent the ultimate militarisation of AI and will have vast legal and ethical implications for wider society. There is a chance that warfare will move from fighting to extermination, losing any semblance of humanity in the process. At the same time, it could widen the sphere of warfare so that the companies, engineers and scientists building AI become valid military targets.

Existing lethal military drones like the MQ-9 Reaper are carefully controlled and piloted via satellite. If a pilot drops a bomb or fires a missile, a human sensor operator actively guides it onto the chosen target using a laser.

Ultimately, the crew has the final ethical, legal and operational responsibility for killing designated human targets. As one Reaper operator states: "I am very much of the mindset that I would allow an insurgent, however important a target, to get away rather than take a risky shot that might kill civilians."

Even with these drone killings, human emotions, judgements and ethics have always remained at the centre of war. The existence of mental trauma and post-traumatic stress disorder (PTSD) among drone operators shows the psychological impact of remote killing.

An MQ-9 Reaper pilot. Credit: US Air Force

And this points to one possible military and ethical argument, made by roboticist Ronald Arkin, in support of autonomous killing drones: if the drones themselves drop the bombs, the psychological problems among crew members can be avoided. The weakness in this argument is that you don't have to be responsible for killing to be traumatised by it. Intelligence specialists and other military personnel regularly analyse graphic footage from drone strikes. Research shows that it is possible to suffer psychological harm by frequently viewing images of extreme violence.

When I interviewed over 100 Reaper crew members for an upcoming book, every person I spoke to who conducted lethal drone strikes believed that, ultimately, it should be a human who pulls the final trigger. Take out the human and you also take out the humanity of the decision to kill.

Grave consequences

The prospect of totally autonomous drones would radically alter the complex processes and decisions behind military killings. But legal and ethical responsibility does not somehow just disappear if you remove human oversight. Instead, responsibility will increasingly fall on other people, including artificial intelligence scientists.

The legal implications of these developments are already becoming evident. Under current international humanitarian law, "dual-use" facilities – those which develop products for both civilian and military application – can be attacked in the right circumstances. For example, in the 1999 Kosovo War, the Pancevo oil refinery was attacked because it could fuel Yugoslav tanks as well as fuel civilian cars.

An MQ-9 Reaper. Credit: US Air Force

With an autonomous drone weapon system, certain lines of computer code would almost certainly be classed as dual-use. Companies like Google, its employees or its systems, could become liable to attack from an enemy state. For example, if Google's Project Maven image recognition AI software is incorporated into an American military autonomous drone, Google could find itself implicated in the drone "killing" business, as might every other civilian contributor to such lethal autonomous systems.
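To make the dual-use point concrete, here is a minimal sketch of off-the-shelf image recognition, assuming a standard pretrained detector from the torchvision model zoo (the function name and threshold are invented for illustration; this is not Project Maven code). The same handful of lines could count vehicles for a traffic survey or flag them in surveillance footage – nothing in the code itself marks it as civilian or military.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Off-the-shelf object detector, pretrained on the public COCO dataset
# (the weights= argument requires torchvision >= 0.13).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

VEHICLE_LABELS = {3, 6, 8}  # COCO category ids for car, bus and truck


def detect_vehicles(image_path: str, min_score: float = 0.7) -> list:
    """Return bounding boxes for detections the model labels as vehicles."""
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        prediction = model([image])[0]
    return [
        box.tolist()
        for box, label, score in zip(
            prediction["boxes"], prediction["labels"], prediction["scores"]
        )
        if label.item() in VEHICLE_LABELS and score.item() >= min_score
    ]
```

Whether such code ends up in a mapping app or in a weapon system is decided by whoever deploys it, which is precisely why international humanitarian law could come to treat it as dual-use.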

Ethically, there are darker issues still. The whole point of the self-learning algorithms such technology uses – programs that independently learn from whatever data they can collect – is that they become better at whatever task they are given. If a lethal autonomous drone is to get better at its job through self-learning, someone will need to decide on an acceptable stage of development – how much it still has to learn – at which it can be deployed. In militarised machine learning, that means political, military and industry leaders will have to specify how many civilian deaths will count as acceptable as the technology is refined.
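As a rough illustration of what "deciding on an acceptable stage of development" means in practice, here is a hypothetical sketch of a deployment gate written in ordinary machine-learning terms (every name and number below is invented for illustration). The algorithm can report its error rates, but the threshold it is measured against has to be written down by people.

```python
from dataclasses import dataclass


@dataclass
class EvaluationResult:
    """Aggregate results from testing a detection model on a validation set."""
    true_positives: int    # genuine targets correctly identified
    false_positives: int   # objects wrongly identified as targets
    false_negatives: int   # genuine targets missed


def misidentification_rate(result: EvaluationResult) -> float:
    """Share of flagged objects that were wrongly identified."""
    flagged = result.true_positives + result.false_positives
    return result.false_positives / flagged if flagged else 0.0


def cleared_for_deployment(result: EvaluationResult, max_error_rate: float) -> bool:
    # max_error_rate is not a technical constant: it is a policy choice that
    # political, military and industry leaders would have to make explicitly.
    return misidentification_rate(result) <= max_error_rate


# Hypothetical figures: 20 misidentifications out of 1,000 flagged objects (2%),
# measured against a review board's chosen limit of 1%.
result = EvaluationResult(true_positives=980, false_positives=20, false_negatives=45)
print(cleared_for_deployment(result, max_error_rate=0.01))  # prints False
```

The uncomfortable part is not the arithmetic but choosing the number on the right-hand side of that comparison.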

Recent experiences of autonomous AI in society should serve as a warning. The pedestrian killed by one of Uber's self-driving test cars, and the fatal crashes involving Tesla's Autopilot, pretty much guarantee that there will be unintended autonomous drone deaths as computer bugs are ironed out.

If machines are left to decide who dies, especially on a grand scale, then what we are witnessing is extermination. Any government or military that unleashed such forces would violate whatever values it claimed to be defending. In comparison, a pilot wrestling with a "kill or no kill" decision becomes the last vestige of humanity in the often inhuman business of war.


