Cambridge to study technology's risk to humans

Could computers become cleverer than humans and take over the world? Or is that just the stuff of science fiction? Philosophers and scientists at Britain's Cambridge University think the question deserves serious study. A proposed Centre for the Study of Existential Risk will bring together experts to consider the ways in which superintelligent technology, including artificial intelligence, could "threaten our own existence," the institution said Sunday.

"In the case of , it seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology," Cambridge philosophy professor Huw Price said.

When that happens, "we're no longer the smartest things around," he said, and will risk being at the mercy of "machines that are not malicious, but machines whose interests don't include us."

Fears that machines could overtake humans have long been the subject of science fiction—the computer HAL in the movie "2001: A Space Odyssey," for example, is one of film's best-known computer threats.

Price acknowledged that many people believe his concerns are far-fetched, but insisted the potential risks are too serious to brush away.

"It tends to be regarded as a flakey concern, but given that we don't know how serious the risks are, that we don't know the , dismissing the concerns is dangerous. What we're trying to do is to push it forward in the respectable scientific community," he said.

While Price said the exact nature of the risks is difficult to predict, he said that advanced technology could be a threat when computers start to direct resources towards their own goals, at the expense of human concerns.

He compared the risk to the way humans have threatened the survival of other animals by spreading across the planet and using up natural resources that other animals depend upon.

Price is co-founding the project together with Cambridge professor of cosmology and astrophysics Martin Rees and Jaan Tallinn, one of the founders of the internet phone service Skype.

The university said Sunday the center's launch is planned for next year.


Copyright 2012 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.


User comments

Nov 25, 2012
At the very least, such studies are premature. We don't know enough about intelligence to create an intelligent machine. Until/unless we understand our own self awareness, it is highly unlikely that we will produce machines with self awareness.

Nov 25, 2012
Nature isn't intelligent enough to create intelligent machines either.

We are on the path to replacing ourselves.

Machines will inherit the earth.

They may keep a few of us as pets, or relics of the past.

People may become collector's items, to be collected and traded among machines. Get one of every race, creed and color for your zoo.

Freeze Dried Republicans only, of course. They are disease carriers.

Nov 25, 2012
Smart technology for dumb people

Nov 25, 2012
Given that the body of human knowledge increases apace and breakthroughs by their very nature tend to be sudden, this is not something to be flippant or complacent about. Having said that, I'm looking forward to it. Our track record cries out for new players on the field.

Nov 26, 2012
Why would it want to take over the world? Or to put it another way, how likely is it that we will be competing for the same resources, and how likely is it that it (the A.I.) will be competitive in any case?

What point is there to worrying about it?

Nov 26, 2012
It seems that Cambridge U. is trying to compete with Singularity University. If so, I think Singularity U. has a head start, more energy, and more focus. So this program will probably be a runt compared to the Singularity movement.

On the other hand, the Singularity crowd tends to be super-optimistic about 21st Century technology, while these Cambridge professors have a darker view. Since we are looking at incredibly powerful technology it is good to remember the dark side of it.

Nov 26, 2012
The complexity of such developments will make "academic" any thought experiments about it before it becomes possible. The current trend is that our technology will be integrated into ourselves. The line between "us" and "them" will be a blurry one.

As for semmsterr: "I for one welcome our new robot overlords!"

Nov 28, 2012
I think we should be pushing AI as far as it can go, but I agree that consideration should be given to the risk of such machines surpassing human intelligence. It would be fair to assume super intelligent machines will have learning systems capable of creating and modifying their own learning algorithms based on the information and knowledge they acquire. This could mean a machine's goals may change over time in ways that negatively affect humans. The threat would probably increase if super intelligent machines were building the next versions of themselves as well. This also opens economic and ethical questions about whether corporations would need to hire human beings at all if they can build more intelligent machines.

Nov 28, 2012
machines that are not malicious, but machines whose interests don't include us

Which begs the question: what interests do machines have at all?

Procreation or survival doesn't seem to be an innate interest of AI (why should it care if you switch it off?).

And if survival is somehow programmed into it then that merely extends to having enough power/maintenance to continue existing (much like we have a craving for enough food). But without the procreation drive there is no chance that machines would accidentally 'outbreed' humans or significantly compete for resources on any level.

Yes: Conscious machines may not be actively benign - but I really don't see the threat level, either. Unless we're talking about active oversight of potentially harmful systems (weapons, nuclear power plants, etc.) by AI. Then purely technically optimal decisions can have biologically harmful consequences.
