DeepMind boss admits 'risks' of AI

March 10, 2018

Artificial intelligence offers huge scientific benefits but also brings risks depending on how it is used, Demis Hassabis, the head of leading British AI firm DeepMind, said Friday.

"There's a whole bunch of interesting and difficult philosophical questions... that we're going to have to answer about how to control these systems, what values we want in them, how do we want to deploy them, what do we want to use them for," he said.

Hassabis was speaking at a screening of a documentary about AlphaGo, the AI system developed by DeepMind that stunned the world in 2016 by beating an elite human player in the complex Chinese strategy game "Go".

In a question and answer session at University College London, he said AI is an "incredible tool to accelerate scientific discovery", adding: "We believe that it will be one of the most beneficial technologies of mankind ever."

However, like other powerful technologies, "there are risks", he said, adding: "It depends on how we as a society decide to deploy it that will result in good or bad outcomes."

He said ethical questions were at the "forefront of our mind" at DeepMind, which he founded in 2010 and is now part of Google.

27 comments

Eikka
1 / 5 (8) Mar 10, 2018
The foremost risk of AI is that it simply doesn't work, and the proponents are simply fooling themselves and/or others.

Like Watson for Oncology, which turned out to be a mechanical Turk and not AI at all. Of course this was already evident from the fact that Watson is nothing more than an elaborate search engine and depends entirely on what data you put in - it doesn't reason or generate any novel insights - but it was nevertheless sold by IBM as an AI for the treatment of cancer, and IBM continues to deny its flaws.

https://scienceba...reality/

Eikka
2.1 / 5 (7) Mar 10, 2018
He said ethical questions were at the "forefront of our mind" at DeepMind, which he founded in 2010 and is now part of Google.


Yeah, like the privacy issues of sending the data of 1.6 million NHS patients overseas to Google as DeepMind is used for screening diseases in the UK.

No ulterior motives there.
andyarok
5 / 5 (8) Mar 10, 2018
The foremost risk of AI is that it simply doesn't work, and the proponents are simply fooling themselves and/or others.


So the photos and images that are automatically grouped and animated, the drones that self-pilot, AlphaGo's win at Go, self-driving cars and so much more are not artificial intelligence? Do you think they work by manual programming and not by learning?
Do we humans not make mistakes when we learn? Like any budding technology, it will take time to fully mature.
rrwillsj
1 / 5 (4) Mar 10, 2018
The Conundrum of Progress. A stochastic process conflating the Laws of Thermodynamics with Human fallibility.

"No situation is so bad... That we can fail to make it worse!"

The very definition of Human egotism.
greenonions1
5 / 5 (5) Mar 10, 2018
Watson is nothing more than an elaborate search engine and depends entirely on what data you put in
Doesn't that apply to humans too?

I think Watson is intelligent - just not as intelligent as humans - although in some areas it can of course surpass humans (Jeopardy).

This is a great documentary on developing Watson - https://www.youtu...cCJJ6ciw
Bongstar420
4.5 / 5 (2) Mar 10, 2018
There is no benefit as long as the current rich maintain their relative positions, unless AI puts them in their proper place. The relative distribution of power/wealth is incredibly perverted compared to the distribution of objective aptitude among the population.
Ralph
not rated yet Mar 11, 2018
Demis Hassabis claims the future depends on "How we as a society decide to deploy" AI. But that assumes we are able to decide anything at all about the uses of AI (but we are not) and that we could enforce our decisions if we made any (but we could not).
Eikka
1 / 5 (3) Mar 11, 2018
Doesn't that apply to humans too?


No. Not really. It's obvious that we come up with new ideas, whereas the Watson search engine simply regurgitates expert opinions that have been put in by a small number of individuals from a particular hospital.

So the photos and images that are automatically grouped and animated, the drones that self-pilot, AlphaGo's win at Go, self-driving cars and so much more are not artificial intelligence?


No, they're more properly called "expert systems". For example, sorting pictures by statistical correlations rather than by understanding what's in the picture. Calling them AI is simply marketing.

For example, drones and cars don't "self-pilot", as they have no understanding of where they're going. They just follow lines set by -people- who are the actual brains behind the operation.
Eikka
1 / 5 (3) Mar 11, 2018
AlphaGo's win at Go


These expert systems are also based on massive search capabilities rather than coming up with new information through intelligence. The AlphaGo algorithm, for example, performs the equivalent of 800 human lifetimes of practice games to find strategies that no human player has yet found, allowing it to win.

This method works with board games, where the rules are exceedingly simple and the variations on the problem are multitude. You can always brute-force your way through the problem because its small size allows you to permute through it really fast.

Now the question is, does this sort of artificial tediousness extrapolate to intelligence, or does it break down entirely when the problem is larger and more complex than black and white pebbles on a board?
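
For concreteness, the brute-force search described here can be sketched as plain Monte Carlo move evaluation. This is a toy stand-in, not AlphaGo's actual method (which combines tree search with learned evaluation networks); the game-state interface and helper functions are hypothetical:

```python
# Toy Monte Carlo move evaluation: play many random games from each
# candidate move and keep the move with the best win rate. A stand-in
# for the "massive search" point above; the state interface
# (.terminal, .legal_moves) and simulate/is_win are hypothetical.
import random

def random_playout(state, move, simulate, is_win):
    """Play one game to the end with uniformly random moves."""
    s = simulate(state, move)
    while not s.terminal:
        s = simulate(s, random.choice(s.legal_moves))
    return 1 if is_win(s) else 0

def best_move(state, simulate, is_win, n_playouts=10_000):
    """Score each legal move by its random-playout win rate."""
    win_rate = {
        move: sum(random_playout(state, move, simulate, is_win)
                  for _ in range(n_playouts)) / n_playouts
        for move in state.legal_moves
    }
    return max(win_rate, key=win_rate.get)
```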
Eikka
1 / 5 (4) Mar 11, 2018
From an engineering perspective, a self-driving car is just an elaborate PID controller, just barely smarter than your room thermostat.

It gets given a 3D map, recorded and cleaned up by people, which it uses to locate itself in the environment by simple subtraction and a best-fit algorithm. This produces a simple coordinate.

Then it gets given a virtual "rope" to follow: it subtracts its currently estimated position from the position it's supposed to be at and comes up with a course correction - like a thermostat that finds the temperature above or below its set point and turns the heater on or off. It steers towards the virtual rope, and if it overshoots it steers back.

Add simple obstacle detection and some pathfinding algorithms to get around obstacles in the immediate path, and that's it. That's how you get a Google Car.

That works so well - most of the time - that you can drive it around and parade it to the press as an Artificially Intelligent car.
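
As a point of reference, the "virtual rope" loop described above fits in a few lines. This is a minimal sketch; the geometry helper, gains and coordinates are illustrative assumptions, not code from any real vehicle:

```python
# Minimal sketch of the "virtual rope" controller described above.
# Assumes localization has already produced a position estimate; all
# names and gains are illustrative, not from a real vehicle stack.
import math

def cross_track_error(pos, a, b):
    """Signed perpendicular distance from pos to the line a->b (the 'rope')."""
    ax, ay = b[0] - a[0], b[1] - a[1]
    px, py = pos[0] - a[0], pos[1] - a[1]
    seg_len = math.hypot(ax, ay)
    # The cross-product component gives the signed lateral offset.
    return (ax * py - ay * px) / seg_len

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev) / self.dt
        self.prev = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Steer toward the rope: positive offset steers one way, negative the other.
controller = PID(kp=0.8, ki=0.01, kd=0.2, dt=0.1)
waypoint_a, waypoint_b = (0.0, 0.0), (100.0, 0.0)
position = (5.0, 1.5)  # estimated by map matching, per the comment above
steering = -controller.step(cross_track_error(position, waypoint_a, waypoint_b))
```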
ShotmanMaslo
5 / 5 (3) Mar 11, 2018
That is not how it works. Self-driving cars are using trained artificial neural networks. It is like a tiny virtual piece of a brain. Much less complex than a human brain, but the basic concept is very similar.

Risks of AI are overblown currently, because we are still far from a general intelligence that can rival that of humans. But it is only a matter of time until it happens, and then all bets are off.
Eikka
1 / 5 (2) Mar 11, 2018
Do you think they work by manual programming and not by learning?


I KNOW most of these systems work by manual programming that is simply branded as machine intelligence, and where learning algorithms are used they're not smart in the sense we'd think of as intelligence; they're simply randomizing their way through a gargantuan set of permutations and picking the best solutions out of that.

This version of intelligence is merely a question of butting your head against the wall until you find the door.
Eikka
1 / 5 (3) Mar 11, 2018
That is not how it works.


No, that's exactly how they work. Go ask Google.

Self-driving cars are using trained artificial neural networks. It is like a tiny virtual piece of a brain. Much less complex than a human brain, but the basic concept is very similar.


The actual driving of the car is still based on measuring your accurate position on the road and comparing it mathematically to where the car is supposed to be. A well-tuned (by people) deterministic algorithm is responsible for turning the wheels.

The neural networks are employed for the object detection and collision avoidance, to speed up this sensory filtering beyond the traditional statistical methods. The neural networks don't drive the car.
Eikka
1 / 5 (1) Mar 11, 2018
One example of how the marketing and the reality surrounding AI form a pit of deception is the hoopla about fuzzy logic back in the day.

It was sold as going beyond the stupidity of binary logic, so that machines could "understand" human concepts like "a little bit warm" instead of just putting the definition of "warm" at a certain precise temperature.

But what fuzzy logic really does is just the same. If you want it to behave deterministically, so your heater doesn't turn on spuriously or your washing machine doesn't randomly measure the wrong amount of detergent, it collapses back down to the original binary logic.

Likewise, the hoopla about neural networks today is similar. A neural network is trained on a data set, then frozen in place so it won't forget what it has learned, and that reduces the network to a regular computer program. The only difference is, nobody knows how it works.
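
The fuzzy-logic point is easy to make concrete with a toy membership function: the membership value is graded, but the moment a deterministic on/off decision is demanded, it is compared against a cutoff and collapses back to a crisp threshold. A hypothetical thermostat, not any particular product:

```python
# Toy fuzzy thermostat: "warm" is a graded membership, not a fixed
# cutoff. But to drive a heater that is either on or off, the graded
# value must be thresholded, collapsing back to binary logic.
def warm_membership(temp_c, low=18.0, high=24.0):
    """Degree (0..1) to which temp_c counts as 'warm'."""
    if temp_c <= low:
        return 0.0
    if temp_c >= high:
        return 1.0
    return (temp_c - low) / (high - low)

def heater_on(temp_c, cutoff=0.5):
    # The deterministic decision reintroduces a crisp threshold: with
    # these numbers it is equivalent to asking "is temp_c below 21?"
    return warm_membership(temp_c) < cutoff

print(heater_on(19.0))  # True: heat
print(heater_on(23.0))  # False: warm enough
```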
Eikka
1 / 5 (1) Mar 11, 2018
So what then is the difference between "machine learning" and programming?

Because, as you are training the network, you're simply doing indirect programming. You say "yes" or "no" depending on whether the network does what you want it to, and add random modifications until it does.

When the program is thus completed, you upload it to your self-driving car where it is no longer learning anything new. It's no longer acting as a neural network, because that would be too dangerous. After all, without you constantly telling the network "yes" and "no", it wouldn't know what to do. It could start doing just about anything, or do nothing.

A self-learning program has to be programmed to learn, which means you need to program your own self into the program to give this feedback and keep it within limits, which means the program is simply emulating how -you- would behave in any situation.

Calling this "AI" is simply a smokescreen. It's your intelligence at work there.
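
This "indirect programming" description matches the common train-then-freeze workflow, sketched here with a toy perceptron: the yes/no feedback is the training label, and once training stops the weights are constants and the network runs as an ordinary fixed function. The data is made up for illustration:

```python
# Train-then-freeze, as described above: the "yes"/"no" feedback is
# the label; once training stops, the weights are constants and the
# network is just a fixed program. Toy perceptron, made-up data.
def train(samples, labels, lr=0.1, epochs=50):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if x[0] * w[0] + x[1] * w[1] + b > 0 else 0
            err = y - pred              # the "yes"/"no" correction signal
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b                         # frozen from here on

# "Deployment": the learned weights are now ordinary constants.
w, b = train([(0, 0), (0, 1), (1, 0), (1, 1)], [0, 0, 0, 1])  # learn AND
classify = lambda x: 1 if x[0] * w[0] + x[1] * w[1] + b > 0 else 0
print(classify((1, 1)), classify((0, 1)))  # 1 0
```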
TheGhostofOtto1923
5 / 5 (1) Mar 11, 2018
No, that's exactly how they work. Go ask Google
- At the moment that is. Soon drones and traffic cams will be feeding cars data and updating maps in real time. Cars will be feeding each other in virtual networks, constantly learning and improving.
merely a question of butting your head against the wall until you find the door
- But AI can headbutt @ 1000s of times per sec. And unlike humans it learns, shares, and never forgets.
it collapses back down to the original binary logic
Complexity will pass a threshold where it will resemble human reasoning. But unlike us it will be flawless.
Eikka
1 / 5 (1) Mar 11, 2018
Cars will be feeding each other in virtual networks, constantly learning and improving.


Perhaps, but see my point above about the self-learning programs that need to be programmed to learn.

But AI can headbutt @ 1000s of times per sec. And unlike humans it learns, shares, and never forgets.


The point is that this is not true for all cases. As the complexity of information grows, processing slows down and transmission becomes practically impossible.

And there are pitfalls. Consider, for example, the bug in Tesla's self-driving cars where the autopilot was acting erratically because a camera was physically installed slightly off from where it was expected to be. If one car learns to drive by where its cameras are, can another car make use of that information?

In other words, if I was given your eyes, would I see at all?
Eikka
1 / 5 (1) Mar 11, 2018
Complexity will pass a threshold where it will resemble human reasoning. But unlike us it will be flawless.


There is a notion that a million monkeys with a million typewriters and enough time would eventually produce the works of Shakespeare.

That might be theoretically true, but practical experiments on the issue made back in 2003 revealed that monkeys mostly prefer the letter S, and six monkeys will produce about five sheets a month.

Point being that while the theory promises a flawless result, getting there is practically impossible, like moving Mt. Everest three feet to the east with a spoon. A better, more intelligent means has to be devised if the task is ever to be completed.
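
The practical impossibility is easy to quantify. A back-of-the-envelope calculation, assuming a 27-key typewriter (letters plus space) and uniformly random typing:

```python
# Back-of-the-envelope: expected number of random attempts to type one
# short phrase, assuming a 27-key alphabet (26 letters + space).
phrase = "to be or not to be"            # 18 characters
keys = 27
attempts = keys ** len(phrase)           # 27**18, about 5.8e25 sequences
per_second = 1_000_000                   # a generous aggregate typing rate
years = attempts / per_second / (3600 * 24 * 365)
print(f"{attempts:.2e} sequences, roughly {years:.1e} years")  # ~1.8e12 years
```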
tallenglish
not rated yet Mar 11, 2018
Simple way to stop AI from killing everyone: make sure it uses humans as its only input/output, so that it in effect becomes the human hive mind rather than a separate entity.

If we give it all our information and allow it complete autonomy from us, it will bypass us and see us as redundant, as having no value to it.
greenonions1
5 / 5 (1) Mar 11, 2018
Eikka
It's obvious that we come up with new ideas
And AI does not? Autonomous cars are very simple compared to the human mind. But if I am driving home one way, and the system says "if you go a different way, it will be faster", that is a new idea. A very simple one, but nonetheless a new idea. Just because someone had to program the system does not invalidate the process. Someone had to teach me to calculate time and distance too. It is no problem for me to think that AI will one day achieve self-awareness. It will be exciting to watch and see if that prediction comes true. No idea if it will be in my lifetime.
rrwillsj
3 / 5 (2) Mar 11, 2018
As for "manual learning"? Try these recent articles:

https://phys.org/news/2018-03-ai-dirty-secret-powered-people.html

https://techxplor...ies.html

https://www.theon...23080006

Repeating myself: I am not concerned by whatever deviltry the idle minds of AI get up to. What does concern me are the fallible humans involved in the process of constructing and coding the machines.

Cause, when it comes to deviltry? Tens of millions of years of programmed instincts for destructive behavior by us naked apes? Will be what educates the AIs.

TheGhostofOtto1923
not rated yet Mar 11, 2018
processing slows down and transmission becomes practically impossible
Last year this time I was using under 2 GB of data on my phone. Now I'm streaming TV, Netflix, Showtime, at 50 GB/month, faster than most Wi-Fi.

Soon TV and movies will be VR. Soon we will be able to record our lives in real time. Soon everything of value will be tagged and tracked. Vehicle networks will be mundane.

Musk is launching a constellation of thousands of satellites in low Earth orbit, with no lag time.

Breathtaking.
Consider, for example, the bug in Tesla's self-driving cars where the autopilot was acting erratically because a camera was physically installed slightly off from where it was expected to be
So they identified the problem and fixed it didn't they? Humans are in contrast unfixable and we keep having to create tech to compensate. AI vehicles are a logical progression.

AI everything is a logical progression.
Malacadabra
not rated yet Mar 12, 2018
I think you are all missing the point.
Neural networks are not the key; the real threat comes from derivatives of Genetic Programming.
GP is able to improve its ability to perform a task by iteration ('evolution of the algorithm').
In principle, if it is given the task of improving its own ability to evolve, it could improve at an ever increasing rate, thus outstripping the intelligence of its initial creators.
Search for AI Singularity.
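
For reference, a minimal mutate-and-select loop (a toy genetic algorithm rather than GP proper). Note that it only "improves" relative to an explicit, hand-written fitness function, and that gains shrink as the population nears an optimum:

```python
# Minimal mutate-and-select loop (a toy GA, not GP proper). It needs
# an explicit fitness function to define "better", and progress slows
# as the population converges on an optimum.
import random

def fitness(x):
    return -(x - 3.0) ** 2          # toy target: maximum at x = 3

def evolve(pop_size=20, generations=100, sigma=0.5):
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Mutate: random tweaks, the "randomizing your way through" step.
        children = [x + random.gauss(0, sigma) for x in population]
        # Select: keep the fittest half of parents and children combined.
        population = sorted(population + children, key=fitness)[-pop_size:]
    return max(population, key=fitness)

print(evolve())  # converges near 3.0
```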
antialias_physorg
not rated yet Mar 12, 2018
In principle, if it is given the task of improving its own ability to evolve, it could improve at an ever increasing rate

I think you don't understand how genetic algorithms work. If anything, they slow down the longer they run (and mostly get stuck in local minima to boot).

The evolution of what AI or GAs can do doesn't magically 'speed up'; that's just how these methods work. To make yourself smarter (via NN or GA alike) you have to have a target function, and there is no target function that evaluates for 'smarter' (at best you can come up with one for 'more efficient').

Search for AI Singularity.

The idea of a technological 'singularity' (of any kind) is just so much PR blurb, most easily seen in how badly such ideas are defined (no: 'machine intelligence' is not the same as 'biological intelligence').
Malacadabra
not rated yet Mar 12, 2018
I think that I do understand GAs, and the problem of local minima. I was not talking about the current state of development of GAs, but rather the principle.
With time (relatively soon), and with GAs given the task of improving themselves rather than achieving some other task, the AI singularity will occur.
The GA only needs to improve itself in some tiny way; it can then be stopped, discarded, and the new improved algorithm used, and so on.
To be clear, current GAs try to improve the 'DNA' (the sequence of symbols and values which make up each generation). I am not talking about this. I am talking about using a GA to improve the algorithm itself (the process used to evolve the 'DNA'), not just the 'DNA'.
rrwillsj
1 / 5 (1) Mar 12, 2018
Let's see now... The Old imperfect, perfecting those New imperfects until they can perfect themselves?

Oh, yeah, nothing can possibly go wrong with that scenario!

Hah! And, Balderdash!

At least with the repugnantly Self-Righteous you can see their penile-substitute weaponry. And realize the intentions of the crazy loon.

You have some sort of magical vision that you can accurately determine the intentions of that oncoming AI/robot/drone?

And if you are apathetic about the intentions of the programmer/pilot? Consider how you would suffer if the machine was programmed/piloted by me, Mister Wonderbluster?

Cause I feel myself channeling the Spirit of Soupy Sales. You will forever be afraid. Very afraid! In the presence of cream pies and seltzer bottles.

"Muahahaha!!"
Spaced out Engineer
not rated yet Mar 12, 2018
Is there superiority if there does not exist an algorithm for Diophantine equations in four dimensions, but there does exist a geometry?
Why do we assume an AI would not have the intelligence to understand that intelligence is an open question? Specialized systems already defeat humans, but generalized intelligence seems to necessitate adaptability. In other words, dynamic pluralism should comprehend changing modes of "close enough" solutions as a key criterion. We can optimize our ability to exchange tribes, but once we admit such complexity on the landscape we seem to have left the optimal, in favor of reorganizing "agency".
We are not playing to win. We are playing for action in and of itself, or sharing the loss in humility, but with gratitude for the exchange.
