Is Stephen Hawking right? Could AI lead to the end of humankind?

British astrophysicist Professor Stephen Hawking with his new Intel-created communications platform. Credit: EPA/Andy Rain

The famous theoretical physicist, Stephen Hawking, has revived the debate on whether our search for improved artificial intelligence will one day lead to thinking machines that will take over from us.

The British scientist made the claim during a wide-ranging interview with the BBC. Hawking has the motor neurone disease amyotrophic lateral sclerosis (ALS), and the interview touched on the new technology he is using to help him communicate.

It works by modelling his previous word usage to predict what words he will use next, similar to the predictive texting available on many smartphones.
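The report gives no technical detail, but the core idea (predicting the next word from a person's past usage) can be sketched as a simple bigram model in Python. The training text and function names below are illustrative only, not the actual Intel-built system:

from collections import Counter, defaultdict

# A minimal sketch of next-word prediction from prior usage (a bigram
# model). The corpus and names are made up for illustration; they are
# not the system described in the article.

def train_bigrams(text):
    """Count, for each word, the words that follow it."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word, k=3):
    """Suggest the k words most often seen after `word`."""
    return [w for w, _ in model[word.lower()].most_common(k)]

model = train_bigrams("the universe began with the big bang and the universe expands")
print(predict_next(model, "the"))  # ['universe', 'big']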

But Professor Hawking also mentioned his concern over the development of artificial intelligence that might surpass us.

"Once humans develop artificial intelligence, it would take off on its own and re-design itself at an ever increasing rate," he reportedly told the BBC.

"The development of full artificial intelligence could spell the end of the human race."

Could thinking machines take over?

I appreciate the issue of computers taking over (and one day ending humankind) being raised by someone as high profile, able and credible as Prof Hawking – and it deserves a quick response.

The issue of thinking machines goes back at least as far as the British code-breaker and father of computer science, Alan Turing, who in 1950 considered the question: "Can machines think?"

The issue of these machines taking over has been discussed in one way or another in a variety of popular media and culture. Think of the movies Colossus: The Forbin Project (1970) and Westworld (1973), and – more recently – Skynet in the 1984 movie The Terminator and its sequels, to name just a few.

Common to all of these is the issue of delegating responsibility to machines. The notion of the technological singularity (or machine super-intelligence) is something which goes back at least as far as artificial intelligence pioneer, Ray Solomonoff – who, in 1967, warned:

Although there is no prospect of very intelligent machines in the near future, the dangers posed are very serious and the problems very difficult. It would be well if a large number of intelligent humans devote a lot of thought to these problems before they arise.

It is my feeling that the realization of artificial intelligence will be a sudden occurrence. At a certain point in the development of the research we will have had no practical experience with machine intelligence of any serious level: a month or so later, we will have a very intelligent machine and all the problems and dangers associated with our inexperience.

When Skynet took over in the Terminator movies it sent forth killing machines to wipe out humans. Credit: EPA PHOTO/EFE/Columbia TriStar/Robert Zucker

As well as giving this variant of Hawking's warning back in 1967, in 1985 Solomonoff endeavoured to give a time scale for the technological singularity and reflect on social effects.

I share the concerns of Solomonoff, Hawking and others regarding the consequences of faster and more intelligent machines – but American author, computer scientist and inventor, Ray Kurzweil, is one of many seeing the benefits.

Whoever might turn out to be right (provided our planet isn't destroyed by some other danger in the meantime), I think Solomonoff was prescient in 1967 in advocating we devote a lot of thought to this.

Machines already taking over

In the meantime, we see increasing amounts of responsibility being delegated to machines. On the one hand, this might be hand-held calculators, routine mathematical calculations or global positioning systems (GPSs).

On the other hand, this might be systems such as guided missiles, driverless trucks on mine sites or the recent trial appearances of driverless cars on our roads.

Humans delegate responsibility to machines for reasons including improving time, cost and accuracy. But nightmare scenarios regarding damage caused by, say, a driverless vehicle would include legal and insurance issues and the attribution of responsibility.

It is argued that computers might take over when their intelligence surpasses that of humans. But there are also other risks with this delegation of responsibility.

Mistakes in the machines

Some would contend that the stock market crash of 1987 was largely due to computer trading.

There have also been power grid closures due to computer error. And, at a lower level, my intrusive spell checker sometimes "corrects" what I've written into something potentially offensive. Computer error?

Hardware or software glitches can be hard to detect but they can still wreak havoc in large-scale systems – even without hackers or malevolent intent, and probably more so with them. So, just how much can we really trust machines with large responsibilities to do a better job than us?

Even without computers consciously taking control, I can envisage a variety of paths whereby computer systems go out of control. These systems might be so fast with such small componentry that it might be hard to remedy and even hard to turn off.

Partly in the spirit of Solomonoff's 1967 paper, I'd like to see scriptwriters and researchers collaborating to set out such scenarios – further stimulating public discussion.

As but one possible scenario, maybe some speech gets converted badly to text, worsened in a bad automatic translation, leading to a subtle corruption of machine instructions, leading to whatever morass.
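To make that chain of failures concrete, here is a toy sketch in Python of how a small speech-to-text error could slip through a naive command interface. The command names are made up for illustration:

# Toy illustration of the scenario above: a naive exact-match command
# parser has no confidence checking, so a transcription error that
# happens to land on another valid command goes through unnoticed.

COMMANDS = {"halt pumps": "HALT", "start pumps": "START"}

def parse(transcript):
    """Map a transcript to a command, with no sanity checking."""
    return COMMANDS.get(transcript.lower().strip(), "UNKNOWN")

print(parse("halt pumps"))   # HALT (what the operator said)
print(parse("salt pumps"))   # UNKNOWN (garbled, but at least caught)
print(parse("start pumps"))  # START (mis-heard "halt"; silently wrong)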

A perhaps related can of worms might come from faster statistical and machine learning analysis of big data on human brains. (And, as some would dare to add, are we humans the bastions of all that is good, moral and right?)

As Solomonoff said in 1967, we need this public discussion – and, given the stakes, I think we now need it soon.



This story is published courtesy of The Conversation (under Creative Commons-Attribution/No derivatives).



User comments

Dec 04, 2014
"Once humans develop artificial intelligence, it would take off on its own and re-design itself at an ever increasing rate,"

Even redesign has its limits. While it may well augment itself, I'm sure that this isn't an endless/runaway process.

"I can envisage a variety of paths whereby computer systems go out of control."

No doubt. But we're talking here about intentionally going out of control for malign purposes - and that is something I don't see (most of all for a lack of motivation that could lead to such behavior on an AI's part).

So I'd argue that AI is likely going to be benign (as it costs an AI nothing to fulfill tasks - as opposed to human workers, where there is a cost in percentage of lifespan, health, and forfeiture of time spent on stuff that is more fun)

...or even more likely AI will be indifferent to the point where it will remove itself from humanity.

Dec 04, 2014
The complete breakdown of common sense occurs EVEN IN SCIENTISTS!

Isaac Newton himself died broke and in debt because he got sucked into predicting the future.

Predictionism holds no water when scientists are not actively building the technology they are predicting will 'change everything'.

And even then, technology is not predictable. Vint Cerf 'invented the internet', and even he admits openly to being shocked at how many cat videos the internet has been used to deliver from person to person! Why? Because just because he 'invented' it doesn't mean he knows everything about how the millions of other developers and inventors will change and add to it.

Technology aggregates. Watson and Crick didn't discover 'everything' about DNA. Scientific knowledge comes at best in 'threads' of our reality. Never will anything be understood in one fell swoop, nor will an invention be 'unleashed' in its complete form, for just as life evolves, technology is evolving as we build it.

Dec 04, 2014
Well in Battlestar Galactica the main defense grid security analyst is seduced by the hot model and willingly surrenders humanity. A seamless handover of power doesn't need to be nightmarish, at least for the sellout. There are many humans spited by their own kind and betrayed since childhood who would willingly surrender humankind for some temporary attention or seduction. With the introduction of "pleasure model" robots onto the market, this factor should not be neglected

Dec 04, 2014
"Well in Battlestar Galactica the main defense grid security analyst is seduced by the hot model and willingly surrenders humanity. A seamless handover of power doesn't need to be nightmarish, at least for the sellout. There are many humans spited by their own kind and betrayed since childhood who would willingly surrender humankind for some temporary attention or seduction. With the introduction of 'pleasure model' robots onto the market, this factor should not be neglected"


The solution then would obviously be to engineer benign robot concubines for these socially-aggrieved miscreants; adoring, faithful love pets that exist only to serve and can pacify their darkest urges for revenge against a society that's spurned them. Phwoar, i'd get five, i can tell you..

Dec 04, 2014
@teslaberry - good point; black swans. The creators of the transistor never intended their invention to expedite the delivery of high def pR0n, even though that's what ~40% of them are now doing, giggity.

Dec 04, 2014
Anyway, why would a super AI want to eliminate us, rather than further exploit our abilities? I predict a mutually beneficial symbiosis...

(Giggity.)

Dec 05, 2014
Linus Pauling: vitamin C

William Shockley: race, I.Q. and eugenics

Stephen Hawking: A.I.

Herein, there ought to be a lesson.

Dec 07, 2014
This is the second time that Hawking has envisioned our doom. Last time it was contact by extraterrestrial life. I have a feeling what is going on here is that Dr Hawking is subconsciously becoming aware of his imminent death and is expressing that in these predictions of, not his, but mankind's end.

Dec 07, 2014
"Is Stephen Hawking right?" Could AI lead to the end of humankind?

No. He is wrong. AI does not lead to the end of mankind. People are not able to make machines smarter than themselves. This is pure speculation by the kilogram, without any justification.

Dec 07, 2014
"IMO they can do it due to collective effort."

This does not help. What is needed here is a qualitative advantage, not a quantitative one.

Dec 09, 2014
CPUs and autonomous machines are built by man. Consciousness or self-awareness for a computing device without human chemistry will probably be emotionless. The terror then will be within the flaws man places upon the creation. I would prefer a fuzzy control set where all things are defined by a given test of truth, so that actions may reference a set of morals and certain responses would be absolutely prevented. However, the evil mind is devious. It's not the creation we should worry over, it's the creator!
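A rough Python sketch of that idea: candidate actions are scored against fuzzy (0.0-1.0) moral rules, while some responses are vetoed outright. The rule names, tags and scores below are hypothetical illustrations, not any real safety framework:

# Toy sketch of the comment above: fuzzy scoring with hard vetoes.
# Rule names, tags and scores are made up for illustration.

HARD_VETOES = {"harm_human", "disable_oversight"}  # absolutely prevented

def evaluate(action_tags, truth_scores):
    """Return an averaged 'test of truth' score, or None if vetoed."""
    if HARD_VETOES & set(action_tags):
        return None  # blocked regardless of any score
    scores = [truth_scores[t] for t in action_tags if t in truth_scores]
    return sum(scores) / len(scores) if scores else 0.0

rules = {"tells_truth": 0.9, "respects_consent": 0.8}
print(evaluate(["tells_truth"], rules))        # 0.9
print(evaluate(["disable_oversight"], rules))  # None (absolutely prevented)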
