Scientists urge artificial intelligence safety focus

Roboy, a humanoid robot developed at the University of Zurich, at the 2014 CeBIT technology trade fair on March 9, 2014 in Hanover, Germany

Artificial intelligence is developing fast, and hundreds of the world's leading scientists and entrepreneurs are urging a renewed focus on safety and ethics to prevent dangers to society.

An open letter was signed by famous physicist Stephen Hawking, Skype co-founder Jaan Tallinn, and SpaceX CEO Elon Musk along with some of the top minds from universities such as Harvard, Stanford, Massachusetts Institute of Technology (MIT), Cambridge, and Oxford, and companies like Google, Microsoft and IBM.

"There is now a broad consensus that (AI) research is progressing steadily, and that its impact on society is likely to increase," the letter said.

"The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable," it added.

"Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls."

Among the concerns raised in the letter that signatories said deserve further research: how to handle autonomous weapons that might kill indiscriminately, the liabilities of self-driving cars, and the prospect of losing control of AI systems so that they no longer align with human wishes.

The full text appears at futureoflife.org/misc/open_letter.



© 2015 AFP

Citation: Scientists urge artificial intelligence safety focus (2015, January 12) retrieved 23 August 2019 from https://phys.org/news/2015-01-scientists-urge-artificial-intelligence-safety.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.

User comments

Jan 12, 2015
I suggest it should be monitored automatically, by a suitably capable AI program.

Jan 12, 2015
It will happen when we need to explore/exploit the resources of some off-Earth place where people cannot really go and survive for long, like Venus or the inner gas clouds of Jupiter. So we will send Robominer and its crew on their own with a set of duties, etc. They are smart; they get tired of being 'the expendables' and go on strike! Which human will stick his/her head into his/her own doom to be a scab? It is mathematically certain that some incident will go down this way, as it is the nature of 'conservative businessmen' to devalue the lives of their employees wayyy below their own alabaster soul-less butts. If humans want an iron code of ethics imposed on artificial intelligence, they need only look at themselves to see it enforced more in breach than in substance. God save us all!!

Jan 13, 2015
I think it has to be recognised that our instinct for survival is a product of evolution. Although we may use evolution in some way to help the development of AI, survival need not be an AI's highest priority.


Jan 13, 2015
Apache helicopter gunships are built for 'survival' if hit (it's just called multiple-system redundancy) and have the "arrowhead" target acquisition and designation system.
http://en.wikiped...rrowhead
If the guys at Google can make a car autonomous, just guess what the guys at "skunk works" are dreaming up for the next generation of helicopter gunships/drones.


Jan 13, 2015
Don't you think the guys at the pentagon have wet dreams thinking of 'terminators', carrying Gatling machine guns like they were .38 revolvers, going after ISIL?

Jan 13, 2015
Don't you think the guys at the pentagon have wet dreams thinking of 'terminators', carrying Gatling machine guns like they were .38 revolvers, going after ISIL?

A good indicator of our own human ethos...

Jan 13, 2015
Given advanced AI will be primarily owned by the super wealthy who are essentially uncontrolled, cynicism may be justifiable.

Jan 13, 2015
Only in the ignorance of stupidity is it even possible to fear intelligence.

Jan 13, 2015
Oh, okayeei, never mind. I thought this was the article about some of the not really scientist-Skippys on the phyorg comment boards.

Jan 13, 2015
Only in the ignorance of stupidity is it even possible to fear intelligence.

Intelligent doesn't equate to benevolent as you suggest.
Adolf Hitler may well have been highly intelligent.

Jan 13, 2015
Only in the ignorance of stupidity is it even possible to fear intelligence.

Intelligent doesn't equate to benevolent as you suggest.
Adolf Hitler may well have been highly intelligent.

Only from the depths of stupidity is it possible to suggest Hitler was intelligent. There is a "most intelligent" definition of intelligence, and I don't think you're using it...

Jan 13, 2015
In anticipation of any flames, here is what I would call a "more intelligent" definition of intelligence: http://michaelsch...t-s.html

I say this because it's mathematical and scientific. To prove it wrong, let's see your maths and science. Or improve it with more terms and let's measure and test the results...

@Uncle Ira – the above blog was written by a physicist-Skippy :)

Jan 13, 2015
@Uncle Ira – the above blog was written by a physicist-Skippy :)


That's why I put the never mind in there. See at first I thought it might be the article about the scientist-Skippys warning everybody to be on the lookout for the not-scientist-Skippys (artificial intelligence) that are always littering up the comment board with the crankpot stuffs.

After I got down into to article I realized it wasn't about fake-smart peoples like Bennie-Skippy or returnering-Skippy or cantdrive-Skippy or Really-Skippy, it was about computers.

Jan 13, 2015
Thanks, Ira, keep fighting the good fight, cher :)

Jan 13, 2015
Thanks, Ira, keep fighting the good fight, cher :)


It ain't so much the fight with some of these couyons, it's more like playing the tick-tack-toe game if you have the first go.

Jan 13, 2015
@EyeNStein – so apply the above definition to Hitler using a suitable value for τ and we see his choices/actions left him with only two options: suicide or capture. And his last choice/action left him with exactly zero options. So much for his IQ, anyway.

Jan 13, 2015
Adolf Hitler may well have been highly intelligent.


If so then we should fear AI, ha, ha, ha!

Jan 13, 2015
Thanks, Ira, keep fighting the good fight, cher :)


It ain't so much the fight with some of these couyons, it's more like playing the tick-tack-toe game if you have the first go.

Excellent analogy.

Jan 13, 2015
Adolf Hitler may well have been highly intelligent.

Above average, maybe... but not highly.


Jan 17, 2015
Didn't Isaac Asimov have this covered back in the 1940s? Of course, the difficulty is in the implementation, not in the concept.

Of course, a really, really "smart" bomb would never explode, would it?

Jan 19, 2015
A 'smart' bomb could still explode if it believed that it made the world a better place by doing so.
This paints a chilling picture of people (or other AIs) propagandising bombs to make them explode... Sounds like another supposedly smart species.
