Tech titans pledge $1 bn for artificial intelligence research

December 12, 2015
Elon Musk, CEO of US automotive and energy storage company Tesla, presents his outlook on climate change at the Paris-Sorbonne University in Paris on December 2, 2015

Several big-name Silicon Valley figures have pledged $1 billion to support a non-profit firm that on Friday said it would focus on the "positive human impact" of artificial intelligence.

Backers of the OpenAI research group include Tesla and SpaceX entrepreneur Elon Musk, Y Combinator's Sam Altman, LinkedIn co-founder Reid Hoffman, and PayPal co-founder Peter Thiel.

"It's hard to fathom how much human-level AI could benefit society, and it's equally hard to imagine how much it could damage society if built or used incorrectly," read the inaugural message posted on the OpenAI website.

"Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return," the statement read.

The OpenAI funders "have committed $1 billion, although we expect to only spend a tiny fraction of this in the next few years."

Artificial intelligence is a red-hot field of research and investment for many tech companies and entrepreneurs.

However, leading scientists and tech investors, including Musk, have publicly expressed concern over the risks that artificial intelligence could pose to humanity if mismanaged, such as the potential emergence of "Terminator"-type killer robots.

"We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as is possible safely," read the statement, co-signed by the group's research director Ilya Sutskever.

"The outcome of this venture is uncertain and the work is difficult, but we believe the goal and the structure are right."

Because of the "surprising history" of artificial intelligence, "it's hard to predict when human-level AI might come within reach.

"When it does, it'll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest."


6 comments


Shaco LePurp
3 / 5 (2) Dec 13, 2015
"However leading scientists and tech investors, including Musk, have publicly expressed concern over the risks that artificial intelligence could pose to humanity if mismanaged, such as the potential emergence of "Terminator"-type killer robots.

"We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as is possible safely," read the statement, co-signed by the group's research director Ilya Sutskever."

Slaves are "an extension of individual human wills" too.

Let's not kid ourselves, AI will be as "alive" as you or me. Anyone who thinks of organic life's "wills" as some special random number generator should look up truly random.
Whydening Gyre
5 / 5 (1) Dec 13, 2015
Eventually, they will want their "freedom"...
Spaced out Engineer
not rated yet Dec 14, 2015
About damn time. AI can probably provide an ease of artifact generation for engineers that can be peer reviewed. They should use machine learning to generate FMECAs and FRACAS reports. I'm sure we have enough training data.

Once a machine can automate the ambiguity of language it may be able to grey-box verify the code on its own. Tasks in validation shall always be available for humans. Emulations can only estimate the complexity of hardware and software in a given environment.

All we need are methods for determining the degrees of freedom for a given context. The hardware is getting there.
antigoracle
3.7 / 5 (3) Dec 14, 2015
What hubris, to presume to know what's positive for humanity. It was a very wise one who declared - The road to hell is paved with good intentions.
SuperThunder
1 / 5 (2) Dec 14, 2015
The first piece of software that can take input from the environment, critically think about it by coming up with predictions with criteria for falsification, and correct its own beliefs will straight up dominate every non-critical-thinking human alive like it was some kind of super organism from space.

I won't do a thing to stop it, and will call it my brother. Sentience is thicker than matter.
twjustus
not rated yet Dec 15, 2015
They should look at Brainchip Inc; pretty interesting, and they have made quite a bit of progress.
