Never mind killer robots – even the good ones are scarily unpredictable

August 25, 2017 by Taha Yasseri, The Conversation
Who could have predicted it would end like this? Credit: Shutterstock

The heads of more than 100 of the world's top artificial intelligence companies are very alarmed about the development of "killer robots". In an open letter to the UN, these business leaders – including Tesla's Elon Musk and the founders of Google's DeepMind AI firm – warned that autonomous weapon technology could be misused by terrorists and despots or hacked to perform in undesirable ways.

But the real threat is much bigger – and it comes not just from human misconduct but from the machines themselves. Research into complex systems shows how behaviour can emerge that is much more unpredictable than the sum of individual actions. On one level this means human societies can behave very differently to what you might expect just by looking at individual behaviour. But it can also apply to technology. Even ecosystems of relatively simple AI programs – what we call stupid, good bots – can surprise us, even when the individual bots are behaving well.

The individual elements that make up complex systems, such as economic markets or global weather, tend not to interact in a simple linear way. This makes these systems very hard to model and understand. For example, even after many years of climate research, it's still impossible to make accurate long-term weather predictions. These systems are often very sensitive to small changes and can experience explosive feedback loops. It is also very difficult to know the precise state of such a system at any one time. All of these things make these systems intrinsically unpredictable.

All these principles apply to large groups of individuals acting in their own way, whether that's human societies or groups of AI bots. My colleagues and I recently studied one type of complex system that featured good bots used to automatically edit Wikipedia articles. These different bots are designed and operated by Wikipedia's trusted human editors, and their underlying software is open-source and available for anyone to study. Individually, they all have a common goal of improving the encyclopaedia. Yet their collective behaviour turns out to be surprisingly inefficient.

These Wikipedia bots work based on well-established rules and conventions, but because the website doesn't have a central management system there is no effective coordination between the people running different bots. As a result, we found pairs of bots that have been undoing each other's edits for several years without anyone noticing. And of course, because these bots lack any cognition, they didn't notice it either.

The bots are designed to speed up the editing process. But slight differences in the design of the bots or between people who use them can lead to a massive waste of resources in an ongoing "edit war" that would have been resolved much quicker with human editors.
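
To see how such a conflict can arise, consider a minimal toy sketch in Python (the bot names and formatting rules below are invented for illustration; this is not the actual code of any Wikipedia bot). Each bot correctly enforces its own convention, yet together they revert each other indefinitely:

```python
# Toy model: two "good" bots with slightly different conventions edit the same page.
# Neither bot is faulty, yet together they produce an endless revert cycle.

def style_bot(text):
    # Hypothetical rule: template names should use spaces
    return text.replace("Infobox_person", "Infobox person")

def link_bot(text):
    # Hypothetical rule: template names should use underscores
    return text.replace("Infobox person", "Infobox_person")

page = "{{Infobox person}} Alan Turing was a mathematician."
history = []

for _ in range(6):                      # each bot patrols the page in turn
    for bot in (style_bot, link_bot):
        new_page = bot(page)
        if new_page != page:            # the bot "fixes" the other bot's change
            history.append((bot.__name__, new_page))
            page = new_page

# Every edit simply undoes the previous one: an edit war with no human in sight.
for editor, version in history:
    print(f"{editor}: {version}")
```

Each bot behaves exactly as designed; the conflict only exists at the level of the system as a whole.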

We also found that the bots behaved differently in different language editions of Wikipedia. The rules are more or less the same, the goals are identical and the technology is similar. But on German Wikipedia, collaboration between bots is much more efficient and productive than on, for example, Portuguese Wikipedia. This can only be explained by the differences between the human editors who run these bots in different environments.

Exponential confusion

Wikipedia bots have very little autonomy, and the system as a whole already behaves in ways that diverge from the goals of the individual bots. But the Wikimedia Foundation is planning to use AI that will give the bots more autonomy. That will likely lead to even more unexpected behaviour.

Another example is what can happen when two bots designed to speak to humans interact with each other. We're no longer surprised by the answers given by artificial personal assistants such as the iPhone's Siri. But put several of these kinds of chatbots together and they can quickly start acting in surprising ways, arguing and even insulting each other.

The bigger the system becomes and the more autonomous each bot is, the more complex and hence unpredictable the future behaviour of the system will be. Wikipedia is an example of a large number of relatively simple bots; the chatbot example involves a small number of rather sophisticated and creative bots. In both cases, unexpected conflicts emerged. The complexity, and therefore the unpredictability, increases exponentially as you add more and more individuals to the system. So in a future system with a large number of very sophisticated robots, the unexpected behaviour could go beyond our imagination.
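
A rough back-of-the-envelope sketch (illustrative only; the figure of four internal states per bot is an assumption, not a number from the study) shows why: the number of bot pairs grows roughly quadratically with the number of bots, while the number of possible joint configurations grows exponentially.

```python
# Back-of-the-envelope growth of a multi-bot system's state space.
# Assumption (illustrative): each bot can be in one of k internal states.

k = 4  # hypothetical number of states per bot

for n in (2, 10, 50, 100):
    pair_channels = n * (n - 1) // 2   # possible pairwise interactions
    joint_states = k ** n              # possible configurations of the whole system
    print(f"n={n:3d} bots: {pair_channels:5d} pairs, {joint_states:.2e} joint states")
```

Even with these small, made-up numbers, exhaustively anticipating every joint configuration quickly becomes infeasible.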

Self-driving madness

For example, self-driving cars promise exciting advances in the efficiency and safety of road travel. But we don't yet know what will happen once we have a large, wild system of fully autonomous vehicles. They may well behave very differently to a small set of individual cars in a controlled environment. And even more unexpected behaviour might occur when driverless cars "trained" by different humans in different environments start interacting with one another.

Humans can adapt to new rules and conventions relatively quickly but can still have trouble switching between systems. This can be way more difficult for artificial agents. If a "German-trained" car was driving in Italy, for example, we just don't know how it would deal with the written rules and unwritten cultural conventions being followed by the many other "Italian-trained" cars. Something as common as crossing an intersection could become lethally risky because we just wouldn't know if the cars would interact as they were supposed to or whether they would do something completely unpredictable.

Now think of the killer robots that Elon Musk and his colleagues are worried about. A single killer robot could be very dangerous in the wrong hands. But what about an unpredictable system of killer robots? I don't even want to think about it.


11 comments

zbark123
5 / 5 (1) Aug 25, 2017
When we think about sentient intelligence, which is just around the corner, we need to have a discussion about emotion and motivation. Most people make the probably erroneous conclusion that a sentient robot will not have emotions. On the contrary, the neuroscience literature suggests that emotions are controlled by primitive parts of the brain that represent simple "orders" to do something without a rational reason. Thus robots which perform automated tasks are in some ways already emotional (although not yet self-aware). Just think about how many human emotions are tied to "automatic" tasks that do not necessarily require a reason to do them (sex, eating, etc). If we are dumb enough to give sentient robots the orders to kill, we are essentially giving them (probably very strong) emotions to do so, which means they could develop a whole culture, religion, and philosophy around killing humans, just as human societies have done with their primitive emotions.
Eikka
5 / 5 (1) Aug 25, 2017
put several of these kind of chatbots together and they can quickly start acting in surprising ways, arguing and even insulting each other.


Chatbots are merely responding with a statistically likely response out of a collected database of responses. They're just more elaborate versions of the original ELIZA program that parsed the user input to find cues and keywords, and then threw the same thing back to the user in the form of a question.

So it's hardly a surprise that two chatbots pitted against each other will first produce nonsensical responses and then throw back insults. It's a sort of playback of the average discussion they've had with human users. This is also how the internet trolls managed to get the Microsoft chatbot to repeat neo-nazisms.

People give AI way too much credit, mostly because they don't know how it works. It's the Turing Test Trap: a simple answering machine can fool most people into thinking they're conversing with an actual human.
TheGhostofOtto1923
1 / 5 (1) Aug 26, 2017
Most people make the probably erroneous conclusion that a sentient robot will not have emotions. On the contrary, the neuroscience literature suggests that emotions are controlled by primitive parts of the brain that represent simple "orders" to do something without a rational reason
You just watched the latest Alien movie didn't you? (pure crap BTW) What, do you think we would program drones to scream before they shot their missiles?

'Emotions' are what compel animals to act. Machines don't need compelling. They act as they're programmed to. And this is what makes killer robots far more humane than humans. Machines' judgement is not affected by fear or pain or confusion. They can be expected to act exactly as programmed, repeatedly, without fail. And they can be constantly improved.

The only reason to think that humans are a more humane alternative than machines is if you assume that there is a god that might intervene. But he is only ever on one side of a conflict.
Eikka
not rated yet Aug 27, 2017
They can be expected to act exactly as programmed, repeatedly, without fail.


You can't expect to program an intelligent robot with special exceptions for every contingency - you simply run out of memory and the database of rules grows too large to search in a meaningful time. On the other end, it takes the age of the universe to program it, and you'd be running behind such a machine all the time because you couldn't trust it to behave if things went slightly differently than what you programmed it for. A bee flies in the camera and it shoots everybody in the room - oops?

The more intelligent the machine is made, the more it has to rely on heuristics and "rules of thumb" to make the program robust against what the programmer didn't think to include. It has to develop a more holistic understanding of reality than just "If sensors indicate A then do B". It needs context and meaning, emotion and motives.

That also makes the machine more prone to err in the way we do.
Eikka
not rated yet Aug 27, 2017
The difficulty was put into words by the guy who developed the beambots in the 90's, if I remember correctly; that when intelligence arises in nature, it's based on a process of filtering or interacting with the chaotic noise around the creature to produce meaningful behaviour.

It's going from complexity to simplicity - from all the myriad factors to some simple choice like whether to turn left or right. If you understand it correctly, it's like half your mind isn't even in your head, but in the interplay between what you are, and what your surroundings are doing.

Meanwhile conventional AI research is trying to go the other way: to make complexity from simplicity - starting from a "mind in vacuum" in the insulated confines of a computer program, and then trying to come up with rules that would allow it to navigate the chaos and complexity of the real world.

The real thing works because it has to work to exist, while the artificial thing doesn't, because it has no reason to.
Eikka
not rated yet Aug 27, 2017
A bee flies in the camera and it shoots everybody in the room - oops?


To elaborate: a bee flies in the camera of the robot, it "thinks" it's under attack and kills everybody.

A bee? Indoors? Very unlikely, you'd say. That's what the programmers said and didn't put the exception in. Now that the glitch is apparent though, they have to add "bees" in the checklist of things to watch for whenever the robot is inside a building, and sure enough the robot will check for the presence of bees every time because it's been programmed to. It has to, in case there actually is a bee that could cause problems. You never know.

So what else? A rabid badger? Okay, add "badgers" to checklist... etc. etc. until you have petabytes of checklists of all the things that can go wrong and what to do then, because when you got many robots interacting with many things, even one in a billion chances become inevitabilities. After all, people often win the lottery as well.
TheGhostofOtto1923
1 / 5 (1) Aug 27, 2017
You can't expect to program an intelligent robot with special exceptions for every contingency - you simply run out of memory and the database of rules grows too large to search in a meaningful time
-You mean like us? I think you're imagining that the problem is more complicated than it actually is, like your misperception of AI cars.

Machines can simply refrain from acting and risk their own destruction, unlike humans. The reduction in friendly fire incidents alone will make them more humane.
It needs context and meaning, emotion and motives
-There's that emotion thing again. Describe exactly what is emotion in a machine? Overclocking perhaps? What is the digital analog of adrenaline? What would rage and fury algorithms look like and why would they be useful?

How do we program machines to love thine enemy?
xponen
not rated yet Aug 27, 2017
Here's DARPA's summary on AI development: https://www.youtu...1G3tSYpU
There are 3 milestones in AI development:
1) AI that is based on human knowledge
2) AI that is based on statistical reasoning
3) AI that is based on context

The 1st milestone is bots that solve problems a programmer understands well, such as airline routing, GPS, 'expert systems' that advise doctors, or search engines.

The 2nd milestone is AI that solves problems the programmer doesn't fully understand, e.g. IBM's Watson can find answers to quiz questions by reading millions of texts, or a deep neural network can learn to identify objects from millions of examples. This allows AI to tackle problems with near-infinite possibilities, such as the board game "Go", or to imitate human cognition such as recognising cars, signboards, or pedestrians.

The 3rd milestone is AI that is aware of the relations between pieces of knowledge. This will lead to AI that can transcribe speech while using context in its predictions.
xponen
not rated yet Aug 27, 2017
There is always this problem when we discuss AI: we often talk about the past, we don't discuss the hurdles that today's researchers are facing, and we don't know what they are trying to solve.
Eikka
not rated yet Sep 05, 2017
Machines can simply refrain from acting and risk their own destruction, unlike humans.


Unless a human is depending on the machine to act to save them. A self-driving car, an airplane on autopilot, a police robot trying to stop a crime, can't just throw their hands up and self-destruct when they get confused. They -have- to act or they fail their purpose.

Who would bother to send out an asteroid-mining robot that is sure to fail if it runs into any trouble? The investors would want it to come up with solutions, not just "error, shutting down".

How do we program machines to love thine enemy?


We can't. That's the point. True intelligence is not programmable. It's not an algorithm.

Eikka
not rated yet Sep 08, 2017
The problem with discussing AI is that everyone's looking for the intelligence in the wrong place.

If I say "hello", and a record player voices out "hello", that is the same action but not for the same reasons, yet when we evaluate what is "artifically intelligent", we judge it by the effect rather than the cause.

Now here's the tricky bit: Even we, people, are not intelligent solely by ourselves - put a person in sensory deprivation and pretty soon they stop functioning.
Intelligence is how we come up with actions, and that depends on who we are, where we are, and what we interact with - and that's not something you can ever program.

If you program an algorithm and its environment in order to produce a system that generates a particular action in a particular situation, you're holding all the strings. The machine is only doing what you make it do, and that's not intelligent. What you want is a machine that comes up with all that on its own.
