Artificial intelligence cyber attacks are coming – but what does that mean?

August 28, 2017 by Jeremy Straub, The Conversation
Hackers will start to get help from robots and artificial intelligence soon. Credit: Jinning Li/Shutterstock.com

The next major cyberattack could involve artificial intelligence systems. It could even happen soon: At a recent cybersecurity conference, 62 industry professionals, out of the 100 questioned, said they thought the first AI-enhanced cyberattack could come in the next 12 months.

This doesn't mean robots will be marching down Main Street. Rather, artificial intelligence will make existing cyberattack efforts – things like identity theft, denial-of-service and password cracking – more powerful and more efficient. This is dangerous enough – this type of hacking can steal money, cause emotional harm and even injure or kill people. Larger attacks can cut power to hundreds of thousands of people, shut down hospitals and even affect national security.

As a scholar who has studied AI decision-making, I can tell you that interpreting human actions is still difficult for AIs and that humans don't really trust AI systems to make major decisions. So, unlike in the movies, the capabilities AI could bring to cyberattacks – and cyberdefense – are not likely to immediately involve computers choosing targets and attacking them on their own. People will still have to create attack AI systems, and launch them at particular targets. But nevertheless, adding AI to today's cybercrime and cybersecurity world will escalate what is already a rapidly changing contest between attackers and defenders.

Faster attacks

Beyond computers' lack of need for food and sleep – needs that limit human hackers' efforts, even when they work in teams – automation can make complex attacks much faster and more effective.

To date, the effects of automation have been limited. Very rudimentary AI-like capabilities have for decades given virus programs the ability to self-replicate, spreading from computer to computer without specific human instructions. In addition, programmers have used their skills to automate different elements of hacking efforts. Distributed attacks, for example, involve triggering a remote program on several computers or devices to overwhelm servers. The attack that shut down large sections of the internet in October 2016 used this type of approach. In some cases, common attacks are made available as a script that allows an unsophisticated user to choose a target and launch an attack against it.

AI, however, could help human cybercriminals customize attacks. Spearphishing attacks, for instance, require attackers to have personal information about prospective targets, details like where they bank or what medical insurance company they use. AI systems can help gather, organize and process large databases to connect identifying information, making this type of attack easier and faster to carry out. That reduced workload may drive thieves to launch lots of smaller attacks that go unnoticed for a long period of time – if detected at all – due to their more limited impact.

AI systems could even be used to pull information together from multiple sources to identify people who would be particularly vulnerable to attack. Someone who is hospitalized or in a nursing home, for example, might not notice money missing from their account until long after the thief has gotten away.

Improved adaptation

AI-enabled attackers will also be much faster to react when they encounter resistance, or when cybersecurity experts fix weaknesses that had previously allowed entry by unauthorized users. The AI may be able to exploit another vulnerability, or start scanning for new ways into the system – without waiting for human instructions.

This could mean that human responders and defenders find themselves unable to keep up with the speed of incoming attacks. It may result in a programming and technological arms race, with defenders developing AI assistants to identify and protect against attacks – or perhaps even AIs with retaliatory attack capabilities.

Avoiding the dangers

Operating autonomously could lead AI systems to attack targets they shouldn't, or cause unexpected damage. For example, software started by an attacker intending only to steal money might decide to target a hospital computer in a way that causes human injury or death. The potential for unmanned aerial vehicles to operate autonomously has raised similar questions about the need for humans to make the decisions about targets.

The consequences and implications are significant, but most people won't notice a big change when the first AI attack is unleashed. For most of those affected, the outcome will be the same as human-triggered attacks. But as we continue to fill our homes, factories, offices and roads with internet-connected robotic systems, the potential effects of an attack by artificial intelligence only grow.

