Could computers become cleverer than humans and take over the world? Or is that just the stuff of science fiction? Philosophers and scientists at Britain's Cambridge University think the question deserves serious study. A proposed Center for the Study of Existential Risk will bring together experts to consider the ways in which superintelligent technology, including artificial intelligence, could "threaten our own existence," the institution said Sunday.
"In the case of artificial intelligence, it seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology," Cambridge philosophy professor Huw Price said.
When that happens, "we're no longer the smartest things around," he said, and will risk being at the mercy of "machines that are not malicious, but machines whose interests don't include us."
Fears that machines could overtake humans have long been the subject of science fiction—the computer HAL in the movie "2001: A Space Odyssey," for example, is one of film's best-known computer threats.
Price acknowledged that many people believe his concerns are far-fetched, but insisted the potential risks are too serious to brush away.
"It tends to be regarded as a flakey concern, but given that we don't know how serious the risks are, that we don't know the time scale, dismissing the concerns is dangerous. What we're trying to do is to push it forward in the respectable scientific community," he said.
While Price said the exact nature of the risks is difficult to predict, he said that advanced technology could be a threat when computers start to direct resources towards their own goals, at the expense of human concerns like environmental sustainability.
He compared the risk to the way humans have threatened the survival of other animals by spreading across the planet and using up natural resources that other animals depend upon.
Price is co-founding the project together with Cambridge professor of cosmology and astrophysics Martin Rees and Jaan Tallinn, one of the founders of the internet phone service Skype.
The university said Sunday that the center's launch is planned for next year.

VendicarD
2.6 / 5 (5), Nov 25, 2012
We are on the path to replacing ourselves.
Machines will inherit the earth.
They may keep a few of us as pets, or relics of the past.
People may become collector's items, to be collected and traded among machines. Get one of every race, creed and color for your zoo.
Freeze Dried Republicans only, of course. They are disease carriers.
alfie_null
not rated yet, Nov 26, 2012
What point is there to worrying about it?
Shakescene21
not rated yet, Nov 26, 2012
On the other hand, the Singularity crowd tends to be super-optimistic about 21st-century technology, while these Cambridge professors take a darker view. Since we are looking at incredibly powerful technology, it is good to remember its dark side.
Sanescience
not rated yet, Nov 26, 2012
As for semmsterr: "I for one welcome our new robot overlords!"
antialias_physorg
5 / 5 (1), Nov 28, 2012
Which raises the question: what interests do machines have at all?
Neither procreation nor survival seems to be an innate interest of an AI (why should it care if you switch it off?).
And if survival is somehow programmed into it, then that merely extends to having enough power and maintenance to continue existing (much as we have a craving for enough food). But without a procreation drive there is no chance that machines would accidentally 'outbreed' humans or significantly compete with them for resources on any level.
Yes, conscious machines may not be actively benign, but I really don't see the threat level, either. Unless we're talking about active oversight of potentially harmful systems (weapons, nuclear power plants, etc.) by AI; then purely technically optimal decisions can have biologically harmful consequences.