Scientists urge artificial intelligence safety focus

January 12, 2015
Roboy, a humanoid robot developed at the University of Zurich, at the 2014 CeBIT technology trade fair on March 9, 2014, in Hanover, Germany

Artificial intelligence is advancing rapidly, and hundreds of the world's leading scientists and entrepreneurs are urging a renewed focus on safety and ethics to prevent dangers to society.

An open letter was signed by famous physicist Stephen Hawking, Skype co-founder Jaan Tallinn, and SpaceX CEO Elon Musk along with some of the top minds from universities such as Harvard, Stanford, Massachusetts Institute of Technology (MIT), Cambridge, and Oxford, and companies like Google, Microsoft and IBM.

"There is now a broad consensus that (AI) research is progressing steadily, and that its impact on society is likely to increase," the letter said.

"The potential benefits are huge, since everything that civilization has to offer is a product of ; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable," it added.

"Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls."

How to handle autonomous weapons that might kill indiscriminately, liability for self-driving cars, and the prospect of losing control of AI systems so that they no longer align with human wishes were among the concerns raised in the letter that its signatories said deserve further research.

The full text appears at futureoflife.org/misc/open_letter.

21 comments

TheGhostofOtto1923
3.8 / 5 (4) Jan 12, 2015
I suggest it should be monitored automatically, by a suitably capable AI program.
Osiris1
2.3 / 5 (3) Jan 12, 2015
It will happen when we need to explore/exploit the resources of some off-Earth place where people cannot really go and survive for long, like Venus or the inner gas clouds of Jupiter. So we will send Robominer and his/hers/its crew on their own with a set of duties, etc. They are smart; they get tired of being 'the expendables' and go on strike! Which human will stick his/her head into his/her own doom to be a scab? It is mathematically certain that some incident will go down this way, as it is the nature of 'conservative businessmen' to devalue the lives of their employees wayyy below their own alabaster soul-less butts. If humans want to have an iron code of ethics imposed on artificial intelligence, they need only look at themselves to see it enforced more in breach than in substance. God save us all!!
marcush
not rated yet Jan 13, 2015
I think it has to be recognised that our instinct for survival is a product of evolution. Although we may use evolution in some way to help the development of AI, survival need not be an AI's highest priority.

EyeNStein
1 / 5 (1) Jan 13, 2015
Apache helicopter gunships are built for 'survival' if hit (it's just called multiple system redundancy) and have the "Arrowhead" target acquisition and designation system.
http://en.wikiped...rrowhead
If the guys at Google can make a car autonomous, just guess what the guys at "Skunk Works" are dreaming up for the next generation of helicopter gunships/drones.

EyeNStein
1 / 5 (1) Jan 13, 2015
Don't you think the guys at the pentagon have wet dreams thinking of 'terminators', carrying Gatling machine guns like they were .38 revolvers, going after ISIL?
Whydening Gyre
5 / 5 (2) Jan 13, 2015
Don't you think the guys at the pentagon have wet dreams thinking of 'terminators', carrying Gatling machine guns like they were .38 revolvers, going after ISIL?

A good indicator of our own human ethos...
Job001
5 / 5 (1) Jan 13, 2015
Given advanced AI will be primarily owned by the super wealthy who are essentially uncontrolled, cynicism may be justifiable.
Protoplasmix
3 / 5 (4) Jan 13, 2015
Only in the ignorance of stupidity is it even possible to fear intelligence.
Uncle Ira
2.3 / 5 (3) Jan 13, 2015
Oh, okayeei, never mind. I thought this was the article about some of the not really scientist-Skippys on the phyorg comment boards.
EyeNStein
5 / 5 (2) Jan 13, 2015
Only in the ignorance of stupidity is it even possible to fear intelligence.

Intelligent doesn't equate to benevolent as you suggest.
Adolf Hitler may well have been highly intelligent.
Protoplasmix
1 / 5 (2) Jan 13, 2015
Only in the ignorance of stupidity is it even possible to fear intelligence.

Intelligent doesn't equate to benevolent as you suggest.
Adolf Hitler may well have been highly intelligent.

Only from the depths of stupidity is it possible to suggest Hitler was intelligent. There is a "most intelligent" definition of intelligence, and I don't think you're using it...
Protoplasmix
3 / 5 (2) Jan 13, 2015
In anticipation of any flames, here is what I would call a "more intelligent" definition of intelligence: http://michaelsch...t-s.html

I say this because it's mathematical and scientific. To prove it wrong, let's see your maths and science. Or improve it with more terms and let's measure and test the results...

@Uncle Ira – the above blog was written by a physicist-Skippy :)
Uncle Ira
3 / 5 (2) Jan 13, 2015
@Uncle Ira – the above blog was written by a physicist-Skippy :)


That's why I put the never mind in there. See at first I thought it might be the article about the scientist-Skippys warning everybody to be on the lookout for the not-scientist-Skippys (artificial intelligence) that are always littering up the comment board with the crankpot stuffs.

After I got down into the article I realized it wasn't about fake-smart peoples like Bennie-Skippy or returnering-Skippy or cantdrive-Skippy or Really-Skippy, it was about computers.
Protoplasmix
5 / 5 (1) Jan 13, 2015
Thanks, Ira, keep fighting the good fight, cher :)
Uncle Ira
3.7 / 5 (3) Jan 13, 2015
Thanks, Ira, keep fighting the good fight, cher :)


It ain't so much the fight with some of these couyons, it's more like playing the tick-tock-toe game if you have the first go.
Protoplasmix
3 / 5 (2) Jan 13, 2015
@EyeNStein – so apply the above definition to Hitler using a suitable value for τ and we see his choices/actions left him with only two options: suicide or capture. And his last choice/action left him with exactly zero options. So much for his IQ, anyway.
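(The blog link above is truncated, so which definition is meant can't be confirmed; but the τ in this comment matches the "causal entropic force" definition of intelligence of Wissner-Gross and Freer, Phys. Rev. Lett. 110, 168702 (2013), which models intelligent behaviour as a force that maximises the entropy of the futures still reachable within a time horizon τ:

$$F(X_0, \tau) = T_c \, \nabla_X S_c(X, \tau)\,\big|_{X_0}$$

where $S_c(X, \tau)$ is the entropy of the causal paths available from state $X$ over the horizon $\tau$ and $T_c$ is a strength constant. On that reading, an action that collapses one's future options to zero, as described in the comment above, scores as minimally intelligent whatever the actor's IQ.)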
stripeless_zebra
not rated yet Jan 13, 2015
Adolf Hitler may well have been highly intelligent.


If so then we should fear AI, ha, ha, ha!
Whydening Gyre
5 / 5 (1) Jan 13, 2015
Thanks, Ira, keep fighting the good fight, cher :)


It ain't so much the fight with some of these couyons, it's more like playing the tick-tock-toe game if you have the first go.

Excellent analogy.
Whydening Gyre
not rated yet Jan 13, 2015
Adolf Hitler may well have been highly intelligent.

Above average, maybe... but not highly.

PhotonX
3 / 5 (2) Jan 17, 2015
Didn't Isaac Asimov have this covered back in the 1930s? Of course, the difficulty is in the implementation, not in the concept.
.
Of course, a really, really "smart" bomb would never explode, would it?
EyeNStein
not rated yet Jan 19, 2015
A 'smart' bomb could still explode if it believed that it made the world a better place by doing so.
This paints a chilling picture of people (or other AIs) propagandising bombs to make them explode.... Sounds like another supposedly smart species.
