Cornell joins pleas for responsible AI research

The phrase "artificial intelligence" saturates Hollywood dramas – from computers taking over spaceships to sentient robots overpowering humans. Though the real world is perhaps more mundane than Hollywood, artificial intelligence (AI) is a rapidly expanding academic and technological field, and Cornell scientists are playing a major role in it.

Bart Selman and Carla Gomes, professors in Cornell's Department of Computer Science, are among a growing chorus of academic and industry experts eager to harness the promise of AI research while remaining vigilant about its potential pitfalls.

Selman and Gomes helped write an open letter issued earlier this year by the Future of Life Institute, an organization that studies the risks of developing AI and other technologies. The letter urges scientists, policymakers and the public to explore the opportunities and risks associated with increasingly intelligent machines. It was signed by nearly 10,000 concerned scientists and others – among them physicist Stephen Hawking and billionaire philanthropist Elon Musk. Hadas Kress-Gazit, Cornell assistant professor of mechanical engineering and co-director of the Autonomous Systems Lab, also signed the letter.

Broadly, AI is the study of machines and software that can learn from and adapt to their environments. Large companies like Facebook and Google have made multimillion-dollar investments in AI research; notably, Google is developing self-driving cars. The Amazon Echo is an example of AI voice recognition and digital assistant technology that has come a long way in the last 10 years, Selman said.

Musk recently pledged $10 million to support research on keeping AI beneficial to humans, a portion of which will fund a new project led by Selman and Gomes.

With their grant, they will investigate whether a level of AI known as superintelligence – a machine surpassing human intelligence – might be possible, and if so, when it might be achieved. The answer might be closer on the horizon than it seems. For example, facial recognition technology is already "superhuman" in some ways, Selman said. "Facebook can recognize faces better than any of us," he said.

"What we are seeing is a broad-scale adoption of these technologies," Selman continued. "Moreover, even though robots are still too expensive for general use, it is expected that new technologies will bring the cost of robotics down rapidly over the next two decades."

A nearer-term, possibly under-analyzed problem, Selman said, is the economic effect the world will feel from AI advancements. A robot that cleans a house might be a ways off, but cloud computing, simultaneous translation and other AI technologies are already bringing about societal changes. "We've arguably reached a point where technology is taking away more jobs than it's creating," Selman said.

What's more, "deep learning," a subset of machine learning in which large neural networks are trained on vast amounts of data, is responsible for leaps and bounds in speech recognition and computer vision.
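The core idea behind deep learning – a network of simple units whose connection weights are adjusted by gradient descent until its outputs match training examples – can be sketched in a few lines of Python. This is a hypothetical toy (a tiny NumPy network learning the XOR function), not anything from the Cornell project; real speech and vision systems train networks millions of times larger on enormous datasets:

```python
# Toy "deep learning": a two-layer neural network learns XOR
# by gradient descent. Illustrative only; not from the article.
import numpy as np

rng = np.random.default_rng(0)

# Training data: the XOR truth table.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 sigmoid units, plus biases.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
losses = []
for _ in range(5000):
    # Forward pass: input -> hidden -> output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))

    # Backward pass: chain rule for the mean-squared error.
    d_z2 = 2 * (out - y) / len(X) * out * (1 - out)
    d_W2 = h.T @ d_z2; d_b2 = d_z2.sum(axis=0)
    d_z1 = (d_z2 @ W2.T) * h * (1 - h)
    d_W1 = X.T @ d_z1; d_b1 = d_z1.sum(axis=0)

    # Nudge every weight downhill on the loss surface.
    W1 -= lr * d_W1; b1 -= lr * d_b1
    W2 -= lr * d_W2; b2 -= lr * d_b2

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The loss shrinks as training proceeds; "deeper" systems stack many more such layers, which is what gives the field its name.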

Some of the most famous experiments in man versus machine come from the world of chess. Deep Blue, which grew out of a project begun in the 1980s at Carnegie Mellon University and later continued at IBM, went head to head against world chess champion Garry Kasparov, defeating him in a 1997 match.

"In the big picture, chess is a baby problem," Gomes said. "It's a small board with a few pieces. Now imagine the , where you have hundreds, thousands of self-driving cars, and people. It's an exponentially larger domain."



Provided by Cornell University
Citation: Cornell joins pleas for responsible AI research (2015, August 27) retrieved 18 September 2019 from https://phys.org/news/2015-08-cornell-pleas-responsible-ai.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.


User comments

Aug 27, 2015
It is important to understand the difference between the types of AI being discussed. ANI, which is narrow intelligence, is what this article is talking about. Back in the 1980s we built Mathematica, which could outthink mathematicians in solving differential equations and other algebraic manipulation. Since then we have built a number of algorithms which, when programmed with the sum of all human knowledge and the speed of a computer, can do things better than any human. This is nothing new or scary. Sure, doing more and more of them may put some people out of work, but it is not threatening in the sense of killing us.

AGI, which is general intelligence, is in NO WAY closer because of deep learning. Read my blog articles on this, which elucidate what is being deceptively talked about here. We are a long way off.

http://cloudrambl...e-setup/

Aug 27, 2015
Or even better, jump to the last article in the series, which talks specifically about what deep learning is missing (a lot) to become AGI. http://cloudrambl...ligence/
