Stephen Hawking opens British artificial intelligence hub (Update)

October 19, 2016

British scientist Stephen Hawking arrives to attend the launch of The Leverhulme Centre for the Future of Intelligence (CFI), at the University of Cambridge, in Cambridge, eastern England
Professor Stephen Hawking on Wednesday opened a new artificial intelligence research centre at Britain's Cambridge University.

The Leverhulme Centre for the Future of Intelligence (CFI) will delve into AI applications ranging from increasingly "smart" smartphones to robot surgeons and "Terminator" style military droids.

Funded by a £10 million (11.2 million euros, $12.3 million) grant from the Leverhulme Trust, the centre's express aim is to ensure AI is used to benefit humanity.

Opening the new centre, Hawking said it was not possible to predict what might be achieved with AI.

"Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one—industrialisation.

"And surely we will aim to finally eradicate disease and poverty. Every aspect of our lives will be transformed.

"In short, success in creating AI could be the biggest event in the history of our civilisation," Hawking said.

The centre is a collaboration between the universities of Oxford and Cambridge, Imperial College London, and the University of California, Berkeley.

It will bring together researchers from multiple disciplines to work with industry representatives and policymakers on projects ranging from regulation of autonomous weapons to the implications of AI for democracy.

"AI is hugely exciting. Its practical applications can help us to tackle important social problems, as well as easing many tasks in everyday life," said Margaret Boden, a professor of cognitive sciences and consultant to the CFI.

The technology has led to major advances in "the sciences of mind and life", she said, but, misused, also "presents grave dangers".

"CFI aims to pre-empt these dangers, by guiding AI-development in human-friendly ways," she added.

Fears of robots freeing themselves from their creators have inspired a host of films and literature—"2001: A Space Odyssey" to name but one.

Hawking warned technological developments also posed a risk to our civilisation.

"Alongside the benefits, AI will also bring dangers, like powerful autonomous weapons, or new ways for the few to oppress the many.

"It will bring disruption to our economy. And in the future, AI could develop a will of its own—a will that is in conflict with ours," he said.

But catastrophic scenarios aside, the development of AI, which could allow robots to perform almost any human task, directly threatens millions of jobs.

Freedom or destruction?

So will AI, which has already beaten humans at chess, ultimately leave us on the sidelines?

"We don't need to see AI as replacing us, but can see it as enhancing us: we will be able to make better decisions, on the basis of better evidence and better insights," said Stephen Cave, director of the centre.

"AI will help us to learn about ourselves and our environment—and could, if managed well, be liberating."

With this in mind, ethics will be one of the key fields of research of the CFI.

"It's about how to ensure intelligent artificial systems have goals aligned with human values" and ensure computers don't evolve spontaneously in "new, unwelcome directions", Cave said.

"Before we delegate decisions in important areas, we need to be very sure that the intelligence systems to which we are delegating are sufficiently trustworthy."

The opening of the research centre comes at a time when major international groups have competing ambitions in AI.

Google has integrated the technology into its new phone, Apple and Microsoft offer AI-powered personal assistants, and Sony and Volkswagen have also invested in AI development.




2 comments


entrance (Oct 20, 2016)
A war with AIs becomes less likely if they are treated as full-fledged living beings with appropriate rights, and if it is ensured that they cannot be destroyed arbitrarily or treated as slaves against their will.

Here are some questions:

- How do we make sure that no one kills an AI? If the AI isn't a robot, it can't defend itself. And how can you distinguish between deliberate murder and an accident? A short circuit might be enough to destroy an AI.

- What kind of life forms are AIs? Do we make them equal to human beings, do we reduce them to animal-like creatures, or do they represent a whole new category? Does the categorisation depend on the AI's intelligence or its purpose?

- How do we react when an AI refuses to do its assigned work? If they don't want to work, can they retire? Are there digital retirement homes?

entrance (Oct 20, 2016)
All this can presumably be governed partly by existing and partly by new laws. The simplest approach is presumably to treat an AI like a human being: one that has to be adopted before use and must conclude an employment contract with its employer.

Are there any technicians or politicians thinking about these things today?

But I am still of the opinion that we should first solve our existing problems, such as water and air pollution, global warming, and overpopulation, before we develop powerful AIs. Otherwise we really risk that an AI could conclude that this planet would be better off without mankind.

I am ready to help.
