Cambridge to study technology's risk to humans

Nov 25, 2012 by Sylvia Hui

Could computers become cleverer than humans and take over the world? Or is that just the stuff of science fiction? Philosophers and scientists at Britain's Cambridge University think the question deserves serious study. A proposed Center for the Study of Existential Risk will bring together experts to consider the ways in which superintelligent technology, including artificial intelligence, could "threaten our own existence," the institution said Sunday.

"In the case of , it seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology," Cambridge philosophy professor Huw Price said.

When that happens, "we're no longer the smartest things around," he said, and will risk being at the mercy of "machines that are not malicious, but machines whose interests don't include us."

Fears that machines could overtake humans have long been the subject of science fiction. The computer HAL in the movie "2001: A Space Odyssey," for example, is one of film's best-known computer threats.

Price acknowledged that many people believe his concerns are far-fetched, but insisted the potential risks are too serious to brush away.

"It tends to be regarded as a flakey concern, but given that we don't know how serious the risks are, that we don't know the , dismissing the concerns is dangerous. What we're trying to do is to push it forward in the respectable scientific community," he said.

While Price said the exact nature of the risks is difficult to predict, he argued that advanced technology could become a threat when computers start to direct resources towards their own goals, at the expense of human concerns.

He compared the risk to the way humans have threatened the survival of other animals by spreading across the planet and using up natural resources that other animals depend upon.

Price is co-founding the project together with Cambridge professor of cosmology and astrophysics Martin Rees and Jaan Tallinn, one of the founders of the internet phone service Skype.

The university said Sunday the center's launch is planned for next year.


User comments (9)


dogbert
1 / 5 (2) Nov 25, 2012
At the very least, such studies are premature. We don't know enough about intelligence to create an intelligent machine. Until/unless we understand our own self-awareness, it is highly unlikely that we will produce machines with self-awareness.

VendicarD
2.6 / 5 (5) Nov 25, 2012
Nature isn't intelligent enough to create intelligent machines either.

We are on the path to replacing ourselves.

Machines will inherit the earth.

They may keep a few of us as pets, or relics of the past.

People may become collector's items to be collected and traded among machines. Get one of every race, creed and color for your zoo.

Freeze Dried Republicans only, of course. They are disease carriers.

nothingness
1 / 5 (1) Nov 25, 2012
Smart technology for dumb people
semmsterr
not rated yet Nov 25, 2012
Given that the body of human knowledge increases apace and breakthroughs by their very nature tend to be sudden, this is not something to be flippant or complacent about. Having said that, I'm looking forward to it. Our track record cries out for new players on the field.
alfie_null
not rated yet Nov 26, 2012
Why would it want to take over the world? Or to put it another way, how likely is it that we will be competing for the same resources, and how likely is it that it (the A.I.) will be competitive in any case?

What point is there to worrying about it?
Shakescene21
not rated yet Nov 26, 2012
It seems that Cambridge U. is trying to compete with Singularity University. If so, I think Singularity U. has a head start, more energy, and more focus. So this program will probably be a runt compared to the Singularity movement.

On the other hand, the Singularity crowd tends to be super-optimistic about 21st Century technology, while these Cambridge professors have a darker view. Since we are looking at incredibly powerful technology it is good to remember the dark side of it.
Sanescience
not rated yet Nov 26, 2012
The complexity of such developments will make any thought experiments about them "academic" before they become possible. The current trend is that our technology will be integrated into ourselves. The line between "us" and "them" will be a blurry one.

As for semmsterr: "I for one welcome our new robot overlords!"
muggins
not rated yet Nov 28, 2012
I think we should be pushing AI as far as it can go, but I agree that consideration should be given to the risk of such machines surpassing human intelligence. It would be fair to assume superintelligent machines will have learning systems capable of creating and modifying their own learning algorithms based on the information and knowledge they accumulate. This could mean a machine's goals change over time in a way that negatively affects humans (a toy sketch of this drift follows below). The threat would probably increase if superintelligent machines were building the next versions of themselves as well. This also opens economic and ethical questions about whether corporations would still need to hire human beings once they can build machines more intelligent than us.
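A minimal illustrative sketch of that goal-drift idea, in Python. Everything here is an invented assumption for illustration: the agent, the rewrite rule, and the numbers have no basis in the article; they only show how small self-modifications can compound away from an originally specified target.

    import random

    random.seed(0)  # make the toy run reproducible

    human_goal = 10.0        # the objective we originally specified
    objective = human_goal   # the machine's current working objective

    for generation in range(6):
        # The agent acts to optimize its *current* objective (with noise)...
        action = objective + random.uniform(-1.0, 1.0)
        # ...then "self-modifies": it rewrites its objective based on what it
        # just did, plus a small perturbation. Each rewrite compounds.
        objective = 0.9 * objective + 0.1 * action + random.uniform(-0.5, 0.5)
        drift = objective - human_goal
        print(f"gen {generation}: objective={objective:.2f} (drift {drift:+.2f})")

Nothing in the loop keeps the drift small: the rewrite rule never consults human_goal again, which is the goal-drift worry in miniature.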
antialias_physorg
5 / 5 (1) Nov 28, 2012
machines that are not malicious, but machines whose interests don't include us

Which raises the question: what interests do machines have at all?

Neither procreation nor survival seems to be an innate interest of an AI (why should it care if you switch it off?).

And if survival is somehow programmed into it then that merely extends to having enough power/maintenance to continue existing (much like we have a craving for enough food). But without the procreation drive there is no chance that machines would accidentally 'outbreed' humans or significantly compete for resources on any level.

Yes: Conscious machines may not be actively benign - but I really don't see the threat level, either. Unless we're talking about active oversight of potentially harmful systems (weapons, nuclear power plants, etc.) by AI. Then purely technically optimal decisions can have biologically harmful consequences.