Morality for robots?

Aug 29, 2012
Book cover of "The Machine Question: Critical Perspectives on AI, Robots, and Ethics." Credit: The MIT Press

On the topic of computers, artificial intelligence and robots, Northern Illinois University Professor David Gunkel says science fiction is fast becoming "science fact."

Fictional depictions of robots have run the gamut from the loyal Robot in "Lost in Space" to the killer computer HAL in "2001: A Space Odyssey" and the endearing C-3PO and R2-D2 of "Star Wars" fame.

While those robotic personifications are still the stuff of fiction, the issues they raise have never been more relevant than they are today, says Gunkel, a professor of communication technology.

In his new book, "The Machine Question: Critical Perspectives on AI, Robots, and Ethics" (The MIT Press), Gunkel ratchets up the debate over whether and to what extent intelligent and autonomous machines of our own making can be considered to have legitimate moral responsibilities and any legitimate claim to moral treatment.

"A lot of the innovation in thinking about machines and their moral consideration has been done in science fiction, and this book calls upon fiction to show us how we've confronted the problem," Gunkel says. "In fact, the first piece of writing to use the term 'robot' was a 1920s play called 'R.U.R.,' which included a meditation on our responsibilities to these machines."

Ethics is typically understood as being concerned with questions of responsibility for and in the face of an "other," presumably another person.

But Gunkel, who holds a Ph.D. in philosophy, notes that this cornerstone of modern ethical thought has been significantly challenged, most visibly by animal rights activists but also increasingly by those at the cutting edge of technology.

"If we admit the animal should have moral consideration, we need to think seriously about the machine," Gunkel says. "It is really the next step in terms of looking at the non-human other."

The NIU professor points out that real decision-making machines are now ensconced in business, personal lives and even national defense. Machines are trading stocks, deciding whether you're creditworthy and conducting clandestine drone missions overseas.

"Online interactions with machines provide an even more pervasive example," Gunkel adds. "It's getting more difficult to distinguish whether we're talking to a human or to a machine. In fact, the majority of activity on the Internet is machine traffic—that is, machine to machine. Machines have taken over; it has happened."

Some machines even have the ability to innovate or become smarter, raising questions over who is responsible for their actions. "It could be viewed as if the programmer who writes the original program is like a parent who no longer is responsible for the machine's decisions and innovations," Gunkel says.

Some governments are beginning to address the ethical dilemmas. South Korea, for instance, created a code of ethics to prevent human abuse of robots—and vice versa. Meanwhile, Japan's Ministry of Economy, Trade and Industry is purportedly working on a code of behavior for robots, especially those employed in the elder-care industry.

Ethical dilemmas are even cropping up in sports, Gunkel says, noting recent questions surrounding human augmentation. He points to the case of South African sprinter and double amputee Oscar Pistorius, nicknamed "Blade Runner" because he runs on two prosthetic legs made of carbon fiber.

In 2008, Pistorius was barred from competing in the Beijing Olympics because of concerns that he had an unfair advantage. That decision was successfully challenged, and Pistorius competed in the 2012 London Games.

Similar concerns about the fairness of human augmentation can be seen in the recent crisis "concerning pharmacological prosthetics, or steroids, in professional baseball," Gunkel says. "This is, I would argue, one version of the machine question."

But Gunkel says he was inspired to write "The Machine Question" because engineers and scientists are increasingly bumping up against important ethical questions related to machines.

"Engineers are smart people but are not necessarily trained in ethics," Gunkel says. "In a way, this book aims to connect the dots across the disciplinary divide, to get the scientists and engineers talking to the humanists, who bring 2,500 years of ethical thinking to bear on these problems posed by new technology.

"The real danger," Gunkel adds, "is if we don't have these conversations."

In "The Machine Question," Gunkel frames the debate, which in recent years has ramped up in academia, where conferences, symposia and workshops carry provocative titles such as "AI, Ethics, and (Quasi) Human Rights."

"I wanted to follow all the threads, provide an overview and make sure we're asking the right questions," Gunkel says.

He concludes in the new book that the moral community has indeed been far too restrictive.

"Historically, we have excluded many entities from moral consideration and these exclusions have had devastating effects for others," Gunkel says. "Just as the animal has been successfully extended moral consideration in the second-half of the 20th century, I conclude that we will, in the 21st century, need to consider doing something similar for the intelligent machines and robots that are increasingly part of our world."

"The Machine Question" is available for purchase through The MIT Press, amazon.com and numerous other book sellers. Gunkel is author of two other books, "Hacking Cyberspace" and "Thinking Otherwise: Philosophy, Communication, Technology."

User comments: 15

Eikka
4.5 / 5 (4) Aug 29, 2012
"It could be viewed as if the programmer who writes the original program is like a parent who no longer is responsible for the machine's decisions and innovations," Gunkel says.


Perhaps, if the machine has the ability to transcend its programming. Otherwise it's still just doing what it is told to do, even if the programmer didn't quite understand what he told it to do.

The paradox is that even a program that has been programmed to reprogram itself still follows pre-programmed rules for how it should reprogram itself. Therefore nothing it does is really its own doing or reasoning; everything stems from the original programming and the inputs it has received thus far.

When speaking of programmers, the difference is just semantic: the person who gives the robot instructions and input is just programming it in a different way. All the robot does is still what it has been told to do, so it cannot have any more responsibilities or rights than a pair of scissors.
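A toy sketch of that point in Python (the names and numbers are invented for illustration): a program that rewrites its own rule, where the rewriting itself follows a meta-rule that was written in advance.

    # Toy illustration: the "reprogramming" step is itself a fixed, pre-written meta-rule.
    rule = {"threshold": 0.5}              # the program's current behaviour

    def decide(x):
        return x > rule["threshold"]

    def reprogram(feedback):
        # The rewrite follows a hard-coded meta-rule; nothing here
        # escapes what the original author wrote.
        rule["threshold"] += 0.1 if feedback == "too lenient" else -0.1

    for feedback in ["too lenient", "too strict", "too lenient"]:
        reprogram(feedback)
        print(decide(0.6), rule)

However much the rule changes, every change was anticipated by the meta-rule's author.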
Eikka
not rated yet Aug 29, 2012
There's just one hypothetical situation that is unclear.

Suppose you drop the robot off at a street corner and leave it to observe the surroundings completely unattended and unguided. Who then is responsible for what actions the robot develops?
Scryer
not rated yet Aug 29, 2012
The only way would be to create a program that can transcend its original programming organically, the same way humans do. Nature vs. nurture, as it were - however, this type of algorithmic research is still in its infancy.
kochevnik
not rated yet Aug 29, 2012
Robots will only be able to grasp insect politics in our lifetime.
Eikka
5 / 5 (1) Aug 30, 2012
the same way humans do. Nature vs. nurture, as it were


Those are just two different programmers to a machine that doesn't have its own will. It has no choice over what to do with the input it receives.

Also, since we're talking about a computer program, there's also the problem that the machine will lack understanding of the information it processes. It manipulates symbols and categories according to rules, but none of those mean anything to the machine itself.
antialias_physorg
5 / 5 (1) Aug 30, 2012
I do find that it will be difficult to have machine ethics (i.e. ethics towards a machine and ethics of machines towards us)

Ethical considerations among humans (and to some extent those towards animals) are based in a common context:
1) inseparability of the mind and the body
2) capability to feel emotions
3) inability to exchange sets of emotions via an external agent (i.e. if you are made to experience a bad emotion by another's action, you are stuck with that emotion)
4) the concept of (permanent) 'bodily harm'.

None of these really apply to machines (except possibly point 3, which may be feasible through appropriate design). But all of these are prerequisites for ethics/morals to apply.
antialias_physorg
5 / 5 (2) Aug 30, 2012
Perhaps, if the machine has the ability to transcend its programming.

Neural nets aren't science fiction, and they aren't programmed to do anything in particular. You train them, just as you train the neural net in the brain of a child. So yes, I would argue that a neural net that has been trained has already transcended its programming.

It doesn't transcend its programming capabilities (i.e. a neural net cannot think what the underlying neural net architecture is incapable of thinking) - but that same argument applies to the human brain.

The paradox is, that even a program that has been programmed to reprogram itself still follows pre-programmed rules into how it should reprogram itself.

So does the human brain. Whether you use laws set as 1s and 0s within a limited context of a chip or just the laws of physics and chemistry within the limited context of the universe is just a quantitative difference - not a qualitative one.
antialias_physorg
5 / 5 (2) Aug 30, 2012
It has no choice over what to do with the input it receives.

Neither do you (no, I'm not arguing for full determinism here - there is a third alternative. See below).
Your brain also just works according to physical laws. Some of these laws include a random element (quantum mechanics). But on the whole the state of the brain at one point is strongly correlated to the state it will be in next. There is no 'free will' mechanism floating above the brain that can decide: "No - I will not go to the next state but to one that is COMPLETELY other."

We have free will in the sense that it's not fully predetermined. Not in the sense that we can choose our state independent of the activity in the brain and from our senses.

If you add a (truly) random element to the mechanisms in a neural net program you have something very similar: not predeterminable (i.e. 'free') but not fully random, either.
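A rough sketch of that last idea in Python (the sizes and names are arbitrary, and numpy stands in for a real neural net): the next state is strongly driven by the current one, but a genuinely random nudge keeps it from being fully predeterminable.

    # Sketch: state evolution that is neither predetermined nor fully random.
    import os
    import numpy as np

    # os.urandom stands in for a "truly random" source.
    rng = np.random.default_rng(int.from_bytes(os.urandom(8), "big"))
    W = rng.normal(size=(4, 4))        # fixed "wiring" of a tiny recurrent net
    state = rng.normal(size=4)

    for _ in range(5):
        noise = rng.normal(scale=0.1, size=4)   # the random element
        state = np.tanh(W @ state + noise)      # next state: correlated with the last, not determined by it
        print(state.round(2))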
Deathclock
not rated yet Aug 30, 2012
It has no choice over what to do with the input it receives.

Neither do you (no, I'm not arguing for full determinism here - there is a third alternative. See below).
Your brain also just works according to physical laws. Some of these laws include a random element (quantum mechanics). But on the whole the state of the brain at one point is strongly correlated to the state it will be in next. There is no 'free will' mechanism floating above the brain that can decide: "No - I will not go to the next state but to one that is COMPLETELY other."

We have free will in the sense that it's not fully predetermined. Not in the sense that we can choose our state independent of the activity in the brain and from our senses.

If you add a (truly) random element to the mechanisms in a neural net program you have something very similar: not predeterminable (i.e. 'free') but not fully random, either.


A 5 rating wasn't enough; I had to commend this post in person!
Deathclock
5 / 5 (1) Aug 30, 2012
The only way would be to create a program that can transcend its original programming organically, the same way humans do.


As stated, we already do this. I haven't worked with AI much but even I have worked on machine learning algorithms in school. I wrote a very simple program that could be said to have "transcended" its original programming, in that it was capable of learning things itself that were not explicitly programmed into it.

Just look up machine learning; we've actually been doing this for a while.
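A minimal sketch in Python of the kind of exercise being described (the data and names here are made up): a perceptron that learns a separating rule from labelled examples rather than having the rule written into it.

    # Minimal perceptron: the decision rule is learned from examples, not hard-coded.
    import numpy as np

    # Made-up labelled examples: label is 1 when the second coordinate is larger.
    X = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.3], [0.4, 0.6], [0.7, 0.2]])
    y = np.array([1, 1, 0, 0, 1, 0])

    w, b = np.zeros(2), 0.0
    for _ in range(20):                           # perceptron learning rule
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            w += (yi - pred) * xi                 # adjust weights from the error
            b += (yi - pred)

    # Classifies a point it never saw, using a rule nobody typed in.
    print(1 if np.array([0.3, 0.7]) @ w + b > 0 else 0)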
DarkHorse66
not rated yet Aug 31, 2012
I do find that it will be difficult to have machine ethics (i.e. ethics towards a machine and ethics of machines towards us)

Ethical considerations among humans (and to some extent those towards animals) are based in a common context:
1) inseparability of the mind and the body
2) capability to feel emotions
3) inability to exchange sets of emotions via an external agent (i.e. if you are made to experience a bad emotion by another's action, you are stuck with that emotion)
4) the concept of (permanent) 'bodily harm'.

None of these really apply to machines (except possibly point 3, which may be feasible through appropriate design). But all of these are prerequisites for ethics/morals to apply.

Perhaps Asimov's Three Laws of Robotics (& the zeroth law) might be a more suitable starting point:
http://en.wikiped...Robotics
Best Regards, DH66
antialias_physorg
not rated yet Aug 31, 2012
I'm not too thrilled about Asimov's laws as templates for machine ethics. They are merely a story device to get his short stories (and the Foundation series at the end) rolling.

Basically all of Asimov's robot stories deal with why these laws don't work. I.e. robots going haywire despite - or precisely because of - those three laws.
DarkHorse66
not rated yet Aug 31, 2012
Truth is, there is no guarantee - even for 'human laws'. Despite the ideal of a common set of ethics, people will always develop their own, individual interpretations of ethics and morality, even when the actual rules might be identical (in a particular subset, situation, etc.), and these interpretations can be polar opposites. The business of 'understanding' or interpreting is as individual for one set of entities (humans, robots, animals? - that last will depend on YOUR interpretation of morality or ethics) as it is for another. So in that sense, the same uncertainties apply no matter whether it is about AIs or 'evolved organics'. So perhaps the article is missing a vital point by making the assumption that, when it comes to AIs, a universal set of laws of any kind is even possible. Already by virtue of different needs of use, there will need to be all kinds of different programming, and each of these will have its own particular weaknesses as well as strengths. Best Regards, DH66
DarkHorse66
5 / 5 (1) Aug 31, 2012
Basically all of Asimov's robot stories deal with why these laws don't work. I.e. robots going haywire despite - or precisely because of - those three laws.

As an exercise in logic, I guess, to make my point, I could rewrite the above as: "Basically all of Asimov's stories deal with why these laws (codes of ethics/morality) don't work. I.e. people going haywire despite - or precisely because of - those moral laws." Leaving the fact of AIs being the central characters of his stories aside for this purpose, this statement will be just as true as the original. In many ways his stories could just as easily be explorations of the diversity of possible human behaviours and what they 'do' with ethics and moral codes... Cheers, DH66
antialias_physorg
5 / 5 (3) Aug 31, 2012
When we talk about AI we're also not talking about a preprogrammed set of instructions but about a learned/trained type of behavior which is simply based on a preprogrammed framework (much like our values/attitudes are imprinted on the brain through experience).
So I don't think a preprogrammed set of morals would even work.

We'll have to teach AI ethics just like we have to teach children.
