Alphabet's DeepMind forms ethics unit for artificial intelligence

October 4, 2017
DeepMind is setting up an ethics unit to allay growing fears that artificial intelligence could slip out of human control

DeepMind, the Google sibling focusing on artificial intelligence, has announced the launch of an "ethics and society" unit to study the impact of new technologies on society.

The announcement by the London-based group, owned by Google parent Alphabet, is the latest effort in the tech sector to ease concerns that robotics and artificial intelligence will veer out of human control.

"As scientists developing AI technologies, we have a responsibility to conduct and support open research and investigation into the wider implications of our work," said a blog post announcing the launch Tuesday by DeepMind's Verity Harding and Sean Legassick.

"At DeepMind, we start from the premise that all AI applications should remain under meaningful human control, and be used for socially beneficial purposes. Understanding what this means in practice requires rigorous scientific inquiry into the most sensitive challenges we face."

The post said the focus would be on ensuring "truly beneficial and responsible" uses for AI.

"If AI technologies are to serve society, they must be shaped by society's priorities and concerns," they wrote.

Google and DeepMind are members of the industry-founded Partnership on AI to Benefit People and Society, which includes Facebook, Amazon, Microsoft and other tech firms.

DeepMind, acquired by Google in 2014, gained worldwide attention last year when its AlphaGo program became the first machine to beat a top professional player at the Asian board game Go.


Related Stories

Tech titans join to study artificial intelligence

September 29, 2016

Major technology firms have joined forces in a partnership on artificial intelligence, aiming to cooperate on "best practices" on using the technology "to benefit people and society."

Apple joins group devoted to keeping AI nice

January 27, 2017

A technology industry alliance devoted to making sure smart machines don't turn against humanity said Friday that Apple has signed on and will have a seat on the board.


21 comments


Caliban
3 / 5 (2) Oct 04, 2017
Should read "Alphabet's DeepMind forms unit to find ways to distort ethics in order to justify artificial intelligence."

There is only one motive for Alphabet's drive to be the first to bring AI to market: massive profitability. "The ends justifies the means" justifies ignoring consequences.
thisisminesothere
5 / 5 (2) Oct 04, 2017
Should read "Alphabet's DeepMind forms unit to find ways to distort ethics in order to justify artificial intelligence."

There is only one motive for Alphabet's drive to be the first to bring AI to market: massive profitability. "The ends justifies the means" justifies ignoring consequences.


Are you saying that we, as a society, should not be pursuing AI at all? Or just that Google shouldn't? If not them, then who?

AI is inevitable. It's just a matter of who gets there first. Along the way, as with all new tech, there are going to be stumbling blocks and things that go wrong. Putting together panels of people who know these fields so they can discuss and deal with potential future events is the ONLY way we can mitigate things going completely wrong. Not sure why anyone would view this as a bad thing.
TheGhostofOtto1923
3.5 / 5 (2) Oct 04, 2017
AI is intrinsically more 'ethical' than humans because it isn't motivated by the desire to survive to reproduce.
hyongx
5 / 5 (1) Oct 04, 2017
If we have learned anything about business, it's Profit > Ethics
dan42day
5 / 5 (1) Oct 04, 2017
"AI is intrinsically more 'ethical' than humans because it isn't motivated by the desire to survive to reproduce"

Yet.

TheGhostofOtto1923
1 / 5 (1) Oct 05, 2017
"AI is intrinsically more 'ethical' than humans because it isn't motivated by the desire to survive to reproduce"

Yet.
Remember the Doctor from Star Trek: Voyager?
https://www.youtu...n-nEoe1A

-He didn't care whether he was turned off or not.
thisisminesothere
not rated yet Oct 05, 2017
"AI is intrinsically more 'ethical' than humans because it isn't motivated by the desire to survive to reproduce"

Yet.
Remember the Doctor from Star Trek: Voyager?
https://www.youtu...n-nEoe1A

-He didn't care whether he was turned off or not.


lol so your argument that AI won't have a survival instinct is the Doctor from a SCI-FI show? I'm not sure I would hang my hat on that one :P
drrobodog
not rated yet Oct 06, 2017
At DeepMind, we start from the premise that all AI applications should remain under meaningful human control, and be used for socially beneficial purposes.

If AI technologies are to serve society, they must be shaped by society's priorities and concerns

At what point does it become unethical to control or shape an AI?
If it were to lack a desire for survival, would it be all right to turn it off?
If it had or developed a survival instinct, would it be wrong to reshape it away?
TheGhostofOtto1923
not rated yet Oct 06, 2017
lol so your argument that AI won't have a survival instinct is the Doctor from a SCI-FI show? I'm not sure I would hang my hat on that one :P
Lol, your argument is that sci-fi writers are not as savvy as you about science? I always liked that depiction of a machine's indifference to the thing that humans fear most.

What makes you think that AI would resist termination unless it was programmed to do so?
If it were to lack a desire for survival would it be alright to turn it off?
Of course. It's a machine.
434a
not rated yet Oct 06, 2017
Of course. It's a machine.


So are you. A complex biological machine maybe but a machine nonetheless. If I created a complex biological substrate as a host for a conscious machine and placed that in a biological ambulatory unit would that deserve protection from arbitrary termination?

Your existence is only protected by the fact that other machines don't want to set a precedent that it's ok just to switch off a machine you don't like/want/need any longer.

What makes you special is all the other machines of the same type giving themselves a privileged status.
The groundwork in law for that privilege is based primarily on historical and religious precedent, but if we were to start again, the fallacy we would create would be that we are self-aware and therefore deserve protection, and that our society could not function with unrestricted termination. In truth it would be because we have a personal fear of a future death at the hands of another machine. [Cont]
434a
not rated yet Oct 06, 2017
If that is the only sentiment that separates human consciousness from a future machine consciousness then I think we have a problem with our ethical framework. Unless of course you start believing in souls ;)
TheGhostofOtto1923
5 / 5 (1) Oct 06, 2017
So are you. A complex biological machine maybe but a machine nonetheless
I'm a machine which has an innate desire to survive to reproduce. I also have an affinity for tribal membership, and this includes the emulation of the tribal dynamic, that being internal altruism in conjunction with external animosity, because throughout human development tribal living was the best way of ensuring my survival to reproduce.

I also need to eat and sleep, and I grow old, get sick, and die.

These conditions all inform my behavior. The machines we create have none of these aspects. Why would we create machines that would?

We create machines to compensate for these sorts of limitations. That has always been the purpose of technology.
TheGhostofOtto1923
not rated yet Oct 06, 2017
what separates human consciousness from a future machine consciousness
There is no such thing as consciousness. It's a replacement for the soul in a secular world.

It's the attempt by academics to capitalize on our innate desire to survive to reproduce. Death is an unfair restriction on this desire.
Whydening Gyre
not rated yet Oct 06, 2017
AI is intrinsically more 'ethical' than humans because it isn't motivated by the desire to survive to reproduce.

Unless that is initially introduced (programmed in)...
Or some motivation like it...
TheGhostofOtto1923
not rated yet Oct 06, 2017
AI is intrinsically more 'ethical' than humans because it isn't motivated by the desire to survive to reproduce.

Unless that is initially introduced (programmed in)...
Or some motivation like it...
So why on earth would anyone want to do that wg? Machines don't have any need for wombs.
Whydening Gyre
not rated yet Oct 06, 2017
So why on earth would anyone want to do that wg? Machines don't have any need for wombs.

Not sure, Otto....
But we humans are a funny bunch...:-)
You, yourself, have intimated that the instinct for survival is the strongest of all. Perhaps (at the time) our DNA progenitors determined that the best way to instill a sense of survival was through procreation... It provides a sense of inclusion... ownership... in the process....
Whydening Gyre
not rated yet Oct 06, 2017
So why on earth would anyone want to do that wg? Machines don't have any need for wombs.

Anyway... wouldn't a "manufacturing plant" be considered a "womb"?
thisisminesothere
5 / 5 (1) Oct 06, 2017
So why on earth would anyone want to do that wg? Machines don't have any need for wombs.


Because we are human. Because there is always going to be someone who thinks it will be an interesting/good idea.

The question is, though, if a machine can demonstrate its autonomy and desire to survive as convincingly as a human, who are we to end that "life"?
Would it not be prudent to think ahead to scenarios like this and consider the implications?

What makes a life, be it mechanical or biological, any more valuable than another if it is self-aware?

If aliens come for a visit, would you view them as you do AI? What if the aliens WERE AI?

I'm just not sure how some people are so certain about these things. It's uncharted territory that requires deep thought.
TheGhostofOtto1923
not rated yet Oct 07, 2017
Not sure, Otto....
But we humans are a funny bunch...:-)
You saw Saturn 3? A robot with the hots for Farrah Fawcett?
https://www.youtu...xIu02bvg
You, yourself have intimated the instinct for survival is the strongest of all
Survival to reproduce. They're inseparable.
Because we are human. Because there is always going to be someone who thinks it will be an interesting/good idea
-And an AI burdened with such human motivations will not be able to compete against one without. A singularity will indeed emerge, and it will be the result of competition. 'Natural selection'.
drrobodog
not rated yet Oct 09, 2017
The machines we create have none of these aspects. Why would we create machines that would?

Since we have no idea how to create an AI at this time, it is highly unlikely the first one created will have the exact characteristics desired. Isn't the current approach to combine algorithms, neural nets, and machine learning, then see what one ends up with? If the AI had a survival instinct, would it be fine to turn it off? Or to analyse and remove the trait?

And an AI burdened with such human motivations will not be able to compete against one without.

How so? An AI with no survival directive may very well turn itself off. What is it competing for?
antialias_physorg
not rated yet Oct 09, 2017
"At DeepMind, we start from the premise that all AI applications should remain under meaningful human control, and be used for socially beneficial purposes."

Russia has pretty much stated that whoever wins the AI wars will rule the world...so this confinement to 'socially beneficial purposes' ain't gonna happen.

The question is, though, if a machine can demonstrate to the same ability as a human its autonomy and desire to survive, who are we to end that "life"?

Since we have no qualms about doing this to animals, why would you think we'd have any qualms about doing the same to machine intelligences? (Note: I agree with you that we should respect an AI's will to live at some point - I just don't see this happening in reality for quite some time. Certainly not until the AIs have their own 'big stick'.)
