AI has already been weaponised – and it shows why we should ban 'killer robots'

September 7, 2018 by Ingvild Bode, The Conversation

A dividing line is emerging in the debate over so-called killer robots. Many countries want to see new international law on autonomous weapon systems that can target and kill people without human intervention. But those countries already developing such weapons are instead trying to highlight their supposed benefits.

I witnessed this growing gulf at a recent UN meeting of more than 70 countries in Geneva, where those in favour of autonomous weapons, including the US, Australia and South Korea, were more vocal than ever. At the meeting, the US claimed that such weapons could actually make it easier to follow international humanitarian law by making military action more precise.

Yet it's highly speculative to say that "killer robots" will ever be able to follow humanitarian law at all. And while politicians continue to argue about this, the spread of autonomy and artificial intelligence in existing military technology is already effectively setting undesirable standards for its role in the use of force.

A series of open letters by prominent researchers speaking out against weaponising artificial intelligence has helped bring the debate about autonomous military systems to public attention. The problem is that the debate is framed as if this technology is something from the future. In fact, the questions it raises are effectively already being addressed by existing systems.

Most air defence systems already have significant autonomy in the targeting process, and military aircraft have highly automated features. This means "robots" are already involved in identifying and engaging targets.

Meanwhile, another important question raised by current technology is missing from the ongoing discussion. Remotely operated drones are currently used by several countries' militaries to drop bombs on targets. But we know from incidents in Afghanistan and elsewhere that images aren't enough to clearly distinguish between civilians and combatants. We also know that current AI technology can contain significant bias that affects its decision making, often with harmful effects.

Humans still press the trigger, but for how long?

As future fully autonomous aircraft are likely to be used in similar ways to drones, they will probably follow the practices established by current drone use. Yet states using existing autonomous technologies are excluding them from the wider debate by referring to them as "semi-autonomous" or so-called "legacy systems". Again, this makes the issue of "killer robots" seem more futuristic than it really is. It also prevents the international community from taking a closer look at whether these systems are fundamentally appropriate under humanitarian law.

Several key principles of international humanitarian law require deliberate human judgements that machines are incapable of. For example, the legal definition of who is a civilian and who is a combatant isn't written in a way that could be programmed into AI, and machines lack the situational awareness and inferential capacity necessary to make this decision.

Invisible decision making

More profoundly, the more that targets are chosen and potentially attacked by machines, the less we know about how those decisions are made. Drones already choose their proposed targets by relying heavily on intelligence data processed by "black box" algorithms that are very difficult to understand. This makes it harder for the human operators who actually press the trigger to question target proposals.

As the UN continues to debate this issue, it's worth noting that most countries in favour of banning autonomous weapons are developing countries, which are typically less likely to attend international disarmament talks. That they are nevertheless willing to speak out strongly against autonomous weapons makes their stance all the more significant. Their history of experiencing interventions and invasions from richer, more powerful countries (such as some of those in favour of autonomous weapons) also reminds us that they are most at risk from this technology.

Given what we know about existing autonomous systems, we should be very concerned that "killer robots" will make breaches of humanitarian law more, not less, likely. This threat can only be prevented by negotiating new international law curbing their use.


Mark Thomas
1 / 5 (1) Sep 07, 2018
Ban them.
5 / 5 (2) Sep 08, 2018
AI weapons are far more moral than their human alternatives. We can program them with all the decision-making skills of the best human soldier, and can constantly improve them from lessons learned.

And they are far more dependable than the human soldier who may be wounded, furious, terrified, starving, confused, and/or exhausted.

People who want to ban AI think this is another way to prevent war. But wars are inevitable as long as religion-fueled overpopulation is there to drive them.

Banning weapons never prevents them from being used in war. If we don't use AI first and best, then there will always be an enemy who will. And there is no effective defense against an AI but another AI.
Mark Thomas
1 / 5 (1) Sep 09, 2018
We can program them with all the decision-making skills of the best human soldier, and can constantly improve them from lessons learned.

LOL! Your naïveté is mind-blowing.
5 / 5 (2) Sep 09, 2018
We can program them with all the decision-making skills of the best human soldier, and can constantly improve them from lessons learned.

LOL! Your naïveté is mind-blowing.
Yours comes from 90s apocalypse scifi movies. Grow up hippie.
Mark Thomas
1 / 5 (1) Sep 09, 2018
People who view the Terminator movies as cautionary tales are hippies? You don't know what you are talking about, as usual.
5 / 5 (2) Sep 09, 2018
Because they're uh, movies. Reality is something a little different.

You do know that AI is the hero in later movies? Your reality is just fashion.
Mark Thomas
1 / 5 (1) Sep 10, 2018
Reality is something a little different.

Even children understand many fictional movies are wildly dramatized to make them entertaining, but the basic idea of killer AI is far from mere science fiction. You need to read the article above, especially the part where it says, "AI has already been weaponised." Try Googling those Boston Dynamics videos and then try to convince me even land-based terminators are impossible.

Another huge thing you are missing is the Pandora's box aspect. You forget that the more we develop this technology, the more likely terrorists and other nutjobs will eventually get it too. We should focus on improving humanity's chances of survival, not ever more deadly ways of killing each other.
