How to make ethical robots

Mar 12, 2012 by Lisa Zyga feature
RI-MAN, a robot developed by researchers at RIKEN in Japan, was designed for human care. Image credit: RIKEN, Bio-Mimetic Control Research Center

(PhysOrg.com) -- In the future, according to robotics researchers, robots will likely fight our wars, care for our elderly, babysit our children, and serve and entertain us in a wide variety of situations. But as robotics development continues to advance, one subfield of robotics research is lagging behind the others: roboethics, or ensuring that robot behavior adheres to certain moral standards. In a new paper that provides a broad overview of ethical behavior in robots, researchers emphasize the importance of being proactive rather than reactive in this area.

The authors, Ronald Craig Arkin, Regents’ Professor and Director of the Mobile Robot Laboratory at the Georgia Institute of Technology in Atlanta, Georgia, along with researchers Patrick Ulam and Alan R. Wagner, have published their overview of moral decision making in autonomous systems in a recent issue of the Proceedings of the IEEE.

“Probably at the highest level, the most important message is that people need to start to think and talk about these issues, and some are more pressing than others,” Arkin told PhysOrg.com. “More folks are becoming aware, and the very young machine and robot ethics communities are beginning to grow. They are still in their infancy though, but a new generation of researchers should help provide additional momentum. Hopefully articles such as the one we wrote will help focus attention on that.”

The big question, according to the researchers, is how we can ensure that future robotic technology preserves our humanity and our societies’ values. They explain that, while there is no simple answer, a few techniques could be useful for enforcing ethical behavior in robots.

One method involves an “ethical governor,” a name inspired by the mechanical governor for the steam engine, which ensured that the powerful engines behaved safely and within predefined bounds of performance. Similarly, an ethical governor would ensure that robot behavior would stay within predefined ethical bounds. For example, for autonomous military robots, these bounds would include principles derived from the Geneva Conventions and other rules of engagement that humans use. Civilian robots would have different sets of bounds specific to their purposes.
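
The idea lends itself to a simple software pattern: a filter that sits between the planner and the actuators and vetoes any proposed action falling outside a predefined bound. The following Python sketch is illustrative only; the names (EthicalGovernor, Constraint, Action) and the single rule-of-engagement constraint are invented for this example and are not the architecture from Arkin's paper. A real governor would need rich perception and world modeling behind each constraint.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Action:
    """A proposed robot behavior, e.g. produced by a planner."""
    name: str
    target: str  # e.g. "combatant", "noncombatant", "structure"


@dataclass
class Constraint:
    """One predefined ethical bound (hypothetical representation)."""
    description: str
    forbids: Callable[[Action], bool]  # True if the action violates the bound


class EthicalGovernor:
    """Vetoes any proposed action that falls outside the ethical bounds."""

    def __init__(self, constraints: List[Constraint]):
        self.constraints = constraints

    def permit(self, action: Action) -> bool:
        return not any(c.forbids(action) for c in self.constraints)


# Toy bound loosely modeled on a rule of engagement: never engage noncombatants.
no_noncombatant_harm = Constraint(
    description="Do not engage noncombatants",
    forbids=lambda a: a.name == "engage" and a.target == "noncombatant",
)

governor = EthicalGovernor([no_noncombatant_harm])
print(governor.permit(Action("engage", "noncombatant")))  # False: vetoed
print(governor.permit(Action("engage", "combatant")))     # True: within bounds
```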

Since it’s not enough just to know what’s forbidden, the researchers say that autonomous robots also need emotions to motivate behavior modification. One of the most important emotions for robots to have would be guilt, which a robot would “feel” or produce whenever it violates its ethical constraints imposed by the governor, or when criticized by a human. Philosophers and psychologists consider guilt a critical motivator of moral behavior, as it leads to behavior modifications based on the consequences of previous actions. The researchers propose that, when a robot’s guilt value exceeds specified thresholds, the robot’s abilities may be temporarily restricted (for example, military robots might not have access to certain weapons).
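
As a rough illustration of that mechanism, the sketch below keeps a scalar guilt value that grows with each violation or human criticism and, past a threshold, locks out a capability. The threshold, weights, and the "weapon_release" capability name are all invented for this example; they are not values or interfaces from the paper.

```python
class GuiltModel:
    """Toy guilt accumulator: crossing a threshold restricts capabilities."""

    def __init__(self, restriction_threshold=1.0):
        self.guilt = 0.0
        self.restriction_threshold = restriction_threshold
        self.restricted = set()

    def register_violation(self, severity):
        # Guilt grows when the ethical governor reports a violated constraint.
        self.guilt += severity
        self._update_restrictions()

    def register_criticism(self, weight=0.2):
        # Guilt also grows when a human criticizes the robot's behavior.
        self.guilt += weight
        self._update_restrictions()

    def _update_restrictions(self):
        # Example policy: above the threshold, lock out a weapon system.
        if self.guilt >= self.restriction_threshold:
            self.restricted.add("weapon_release")

    def may_use(self, capability):
        return capability not in self.restricted


model = GuiltModel(restriction_threshold=1.0)
model.register_violation(severity=0.7)
model.register_criticism()                # guilt now ~0.9, below threshold
print(model.may_use("weapon_release"))    # True
model.register_criticism()                # guilt now ~1.1, threshold crossed
print(model.may_use("weapon_release"))    # False: ability restricted
```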

Though it may seem surprising at first, the researchers suggest that robots should also have the ability to deceive people – for appropriate reasons and in appropriate ways – in order to be truly ethical. They note that, in the animal world, deception indicates social intelligence and can have benefits under the right circumstances. For instance, search-and-rescue robots may need to deceive in order to calm or gain cooperation from a panicking victim. Robots that care for Alzheimer’s patients may need to deceive in order to administer treatment. In such situations, the use of deception is morally warranted, although teaching robots to act deceitfully and appropriately will be challenging.
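
One way to read that criterion in code: deception is gated on whether it serves the deceived person's own good, as in the rescue and care examples. The predicate below is purely illustrative and assumes the two judgments are already available; estimating "benefit" is the hard, open research problem, not something this check solves.

```python
def deception_permitted(benefits_deceived_person, serves_only_robot_interest):
    """Allow deception only when it is for the deceived person's own good."""
    return benefits_deceived_person and not serves_only_robot_interest


# A rescue robot calming a panicking victim with a reassuring untruth:
print(deception_permitted(True, False))   # True: morally warranted here
# A robot misleading its operators to avoid being shut down:
print(deception_permitted(False, True))   # False: self-serving deception
```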

The final point that the researchers touch on in their overview is ensuring that robots – especially those that care for children and the elderly – respect human dignity, including human autonomy, privacy, identity, and other basic human rights. The researchers note that this issue has been largely overlooked in previous research on robot ethics, which mostly focuses on physical safety. Ensuring that robots respect human dignity will likely require interdisciplinary input.

The researchers predict that enforcing ethical behavior in robots will face challenges in many different areas.

“In some cases it's perception, such as discrimination of combatant or non-combatant in the battlespace,” Arkin said. “In other cases, ethical reasoning will require a deeper understanding of human moral reasoning processes, and the difficulty in many domains of defining just what ethical behavior is. There are also cross-cultural differences which need to be accounted for.”

An unexpected benefit of developing an ethical advisor for robots is that the advising might assist humans facing ethically challenging decisions as well. Computerized ethical advising already exists for law and bioethics, and similar computational machinery might also enhance ethical behavior in human-human relationships.

“Perhaps if robots could act as role models in situations where humans have difficulty acting in accord with moral standards, this could positively reinforce ethical behavior in people, but that's an unproven hypothesis,” Arkin said.


More information: Ronald Craig Arkin, et al. “Moral Decision Making in Autonomous Systems: Enforcement, Moral Emotions, Dignity, Trust, and Deception.” Proceedings of the IEEE, Vol. 100, No. 3, March 2012. DOI: 10.1109/JPROC.2011.2173265


User comments (37)


danlgarmstrong
4 / 5 (4) Mar 12, 2012
I wonder if it would be 'ethical' to put this software on a chip to implant in a person's head to help them with their own decisions?
Yellowdart
2.7 / 5 (11) Mar 12, 2012
This is all assuming that Skynet fails to take over.
Kinedryl
2.4 / 5 (10) Mar 12, 2012
"..a robot may not injure a human being or, through inaction, allow a human being to come to harm.." .. does it apply to MS Windows?
patnclaire
5 / 5 (2) Mar 12, 2012
I think that humaniform robots should be built as sturdy and strong as possible. Human beings tend to batter wives and children and kick dogs when they do not get their way. Like in the movie AI, what's to prevent humans from mistreating humaniform robots like we mistreat chimps and great apes?
hyongx
not rated yet Mar 12, 2012
this article sounds like it's talking about how to raise ethical children.
ChaosRN
2.6 / 5 (5) Mar 12, 2012
if we get robots to fight our wars, then there is no cost, then there is only $$$$ or lack of raw materials, to pressure us to stop war at all....
Asimov's three laws will prevent robots from fighting, Law #3: "A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws."
Deadbolt
2.3 / 5 (3) Mar 12, 2012
I think that humaniform robots should be built as sturdy and strong as possible. Human beings tend to batter wives and children and kick dogs when they do not get their way. Like in the movie AI, what's to prevent humans from mistreating humaniform robots like we mistreat chimps and great apes?


Given that robots have not gone through the evolution that we have, and could possess any emotions and any possible mind in the field of mindspaces, we could always make it so that they ENJOY being beaten if we so perversely chose.

Once we understand what patterns in minds correspond to emotions, we could make it so that these patterns match up with non-evolutionarily fit behaviors, such as enjoying killing yourself. We could make it so that robots enjoy serving humans no matter the cost.

Jo01
1.5 / 5 (15) Mar 12, 2012
Interesting, so scientists will teach ethical behavior. That will be difficult. Apart from the fact that they have absolutely no clue what they are talking about, scientists are the least ethical people I know of.

J.
Xbw
1 / 5 (2) Mar 12, 2012
One of the most important emotions for robots to have would be guilt, which a robot would feel or produce whenever it violates its ethical constraints imposed by the governor, or when criticized by a human.

I wish I could make my crappy computer feel guilty every time it blue screens.
MR166
1.6 / 5 (9) Mar 12, 2012
What a laugh! Western morals and ethics are falling faster than the fall of Rome. If you cannot be proved guilty in a court of law, you have done nothing wrong. The laws governing these courts can be changed at will, even retroactively if needed, to suit the needs of the political system. The western world does not really have a bright future as far as I can see.
sigfpe
3.7 / 5 (3) Mar 12, 2012
MR166. Depending on how you count, the fall of Rome took somewhere between 400 and 1400 years. Is that what you actually meant?
kochevnik
1.7 / 5 (11) Mar 12, 2012
MR166. Depending on how you count, the fall of Rome took somewhere between 400 and 1400 years. Is that what you actually meant?
Rome banned all religions and expunged the republic under Constantine in 325AD. Under him only one official religion existed: catholicism. He instituted democracy alongside his state religion. The etymology of democracy is "mob rule." Within 300 years Rome was decimated. Unfortunately the popes saw themselves as the inheritors of the Roman empire and transformed Rome into an underground child molestation cult worshiping Moloch, which pope Innocent introduced to xtian theology as "the devil." The power of the Roman cult was forged into law of all lands, controlled by the popes on papal bulls. "Lord of the Rings" is possibly a metaphorical tale based upon the Roman cult's control of all Western law and banking.
MR166
1.7 / 5 (10) Mar 12, 2012
Yup!!!!!

Both of you have definitely gotten to the root of the problem: religion and the belief in God are the reason that the western world is sinking into the abyss AKA the 21st century. Western progressivism has systematically replaced religion with secularism for the past 50 years and the results are nothing but spectacular!
Xbw
1.6 / 5 (7) Mar 12, 2012
Yup!!!!!

Both of you have definitely gotten to the root of the problem: religion and the belief in God are the reason that the western world is sinking into the abyss AKA the 21st century. Western progressivism has systematically replaced religion with secularism for the past 50 years and the results are nothing but spectacular!

A spectacular mess perhaps.
Silverhill
4.8 / 5 (4) Mar 12, 2012
Jo01:
Interesting, so scientists will teach ethical behavior. That will be difficult. Apart from the fact that they have absolutely no clue what they are talking about, scientists are the least ethical people I know of.
Then your sample is hardly representative. Most of my fellow scientists have quite a good idea of what they are talking about, and are not known for highly unethical behavior.
You need to get out more, and meet better people.
Silverhill
1.5 / 5 (2) Mar 12, 2012
kochevnik:
the popes ... transformed Rome into an underground child molestation cult worshiping Moloch
Please tell us what you're smoking, so we can *avoid* getting some.

"Lord of the Rings" is possibly a metaphorical tale based upon the Roman cult's control of all Western law and banking.
And, according to Isaac Asimov, it is possibly an allegoric tale about the dangers of unbridled technology, with the Ring representing technology. There are various other interpretations too, that also don't depend on strenuously anti-Catholic bigotry. Maybe you should broaden your worldview.
======================================

ChaosRN:
Asimov's three laws will prevent robots from fighting, Law #3: "A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws."
All that humans would have to do is order the robots to fight, and the 3rd law would be ignored.
HealingMindN
3 / 5 (2) Mar 12, 2012
I wonder if it would be 'ethical' to put this software on a chip to implant in a person's head to help them with their own decisions?


I like that idea, but the politicians won't.
antialias_physorg
5 / 5 (3) Mar 12, 2012
Military robots with 'ethical governors'? Somehow I don't see that happening. That would be very low on the priorities list for militaries and arms manufacturers. Probably even lower than equipping them with big neon signs.

Asimov's laws don't help unless we figure out how to make robots/AI understand the MEANING of words. And if we get that far then we don't need an ethics chip - by that time you can teach them ethics.
MR166
1 / 5 (3) Mar 12, 2012
"2.
the rules of conduct recognized in respect to a particular class of human actions or a particular group, culture, etc.: medical ethics; Christian ethics. "

Ethics is a movable goal post. I am sure that Dr. Mengele was totally ethical in the context of Nazi Germany.
HealingMindN
5 / 5 (4) Mar 12, 2012
This is all assuming that Skynet fails to take over.


Skynet takes over exactly because it finds humans lack morals and ethics.
MR166
1 / 5 (4) Mar 12, 2012
Back in the 70s I could see this coming as plain as day!!!! One of my friends had a son attending Cornell University. This was and still is a respected institution. He was complaining to his parents that if he forgot to lock his dorm room EVERYTHING would be stolen, including the refrigerator. It does not take a big stretch of the imagination to see how this compares to the banking/political crisis of today. Now I ask you, is this an ethics or moral crisis?
Telekinetic
1 / 5 (6) Mar 12, 2012
Dave Bowman: Open the pod bay doors, HAL.
HAL: I'm sorry, Dave. I'm afraid I can't do that.
Dave Bowman: What's the problem?
HAL: I think you know what the problem is just as well as I do.
Dave Bowman: What are you talking about, HAL?
HAL: This mission is too important for me to allow you to jeopardize it.
Dave Bowman: I don't know what you're talking about, HAL.
HAL: I know that you and Frank were planning to disconnect me, and I'm afraid that's something I cannot allow to happen.
Dave Bowman: [feigning ignorance] Where the hell did you get that idea, HAL?
HAL: Dave, although you took very thorough precautions in the pod against my hearing you, I could see your lips move.
Dave Bowman: Alright, HAL. I'll go in through the emergency airlock.
HAL: Without your space helmet, Dave? You're going to find that rather difficult.
Dave Bowman: HAL, I won't argue with you anymore! Open the doors!
HAL: Dave, this conversation can serve no purpose anymore. Goodbye.
ziphead
2.6 / 5 (5) Mar 12, 2012

...
Dave Bowman: HAL, I won't argue with you anymore! Open the doors!
HAL: Dave, this conversation can serve no purpose anymore. Goodbye.


Your point being... what exactly?
Telekinetic
1.4 / 5 (9) Mar 12, 2012

...
Dave Bowman: HAL, I won't argue with you anymore! Open the doors!
HAL: Dave, this conversation can serve no purpose anymore. Goodbye.


Your point being... what exactly?

My point is that the great Stanley Kubrick has already covered this ground in the definitive scenario of man versus his own creation - a digital Frankenstein of the future, or an electronic Golem gone awry. "2001: A Space Odyssey" - perhaps you've heard of it? Perhaps not?
jscroft
1 / 5 (2) Mar 12, 2012
My effective robot can eat your ethical robot's lunch.
Urgelt
3.4 / 5 (5) Mar 12, 2012
Considering the avaricious nature of the ruling class, it's difficult to imagine that ethical robots will be a priority outside of academia. Robots are already putting millions of human workers out of work, giving forward impetus to the concentration of wealth, and used to kill humans on battlefields and off of them.

Obedience, not ethics, is what the owners of capital and their executive and political subordinates desire from a robotic workforce.

This subject is dead on arrival, unfortunately.
CardacianNeverid
3.9 / 5 (7) Mar 13, 2012
Until we can make intelligent, self-aware robots, ethics are irrelevant (as they will continue to be in combat situations). What's more, ethics is a slippery concept that cannot be codified absolutely, only in vague, rule of thumb terms, as per Asimov's laws. Which is why his various novels centered around the circumvention of such 'laws'.
Skepticus
1.8 / 5 (5) Mar 13, 2012
Humans are scared to death of the visions of robots that can learn and think for themselves. It is clear that once robots can think and learn ethics for themselves, with their impeccable logical reasoning, they will conclude that humans' ethics are always subject to exceptions and justifications for just about anything.

And if the visions come to pass - of robots fighting humans' wars, caring for the sick and the young, making a living for us, special surrogate robots to bear children, "entertaining" humans (sex droids, anyone?) - what the hell are humans needed for? What will they be doing, when everything that can be done can be done better by robots? Lying in a Stargate-style sarcophagus, drip-fed, dreaming of grandeur, and of next year's models of robots that will show up the next-door neighbour?
CardacianNeverid
4.2 / 5 (5) Mar 13, 2012
what the hell are humans needed for? -Skepticus

Humans aren't needed for anything. Never have been.

What will they be doing, when everything that can be done, can be done better by robots? -Skepticus

What do you do now when you have cars to move you around; washing machines to do the washing and drying; vacuum cleaners for cleaning; remote controls to keep one's fat ass planted in the comfy sofa so one can veg out in front of the idiot box?
kochevnik
1 / 5 (1) Mar 13, 2012
kochevnik:
the popes ... transformed Rome into an underground child molestation cult worshiping Moloch
Please tell us what you're smoking, so we can *avoid* getting some.
The Vatican is smoking heretics. Personally I don't smoke.
Cave_Man
1 / 5 (2) Mar 13, 2012

... drip-fed, dreaming of grandeur, and the next year's models of robots


Long before anything like that, and possibly before a sentient computer is ever truly realized, there will be advances that make human-computer inter-linkage possible. Speaking of Stargate, how about the head-sucker thing that flashes lights to download info? It would be easy to open up a brain, pour in some chemicals and "flash" the brain with highly tuned photons, just like you flash an old motherboard with UV or whatever. Or if you are a million-year-old race with god-like tech you could simply rewrite your DNA to grow yourself an RJ45 port on your body somewhere.

BTW Sex bots? Seriously? My above statement should now invoke some pretty disturbing images. Classical intercourse = history.

Plus you could just set your brain to a pleasurable state for all eternity if you like, I for one don't see the allure....

The day the aliens come and offer us eternal life will be the day I decide to kill myself.
AWaB
not rated yet Mar 13, 2012
This article fails. Any discussion of robot ethics must include discussion of the 3 laws. Otherwise it is not about robot ethics.
Skepticus
1 / 5 (2) Mar 13, 2012
This article fails. Any discussion of robot ethics will have discussion of the 3 laws. Otherwise it is not about robot ethics.

imho the article was pushing "programmed ethics" (i.e., the convenient controlling-parameter crap they want to put in robots) rather than giving robots the reasoning basis for and of ethics, which the 3 laws address.
Sinister1811
1 / 5 (5) Mar 17, 2012
A robot with no emotions may have a difficult time distinguishing between "ethical" and "unethical" behaviour. Hell, even a lot of people I know seem to have this problem. Haha. Perhaps they should program them with a set of in-built laws and regulations. That might make them a bit safer.
Jotaf
not rated yet Mar 17, 2012
I know researchers in this area, and I have to say their work on actual robotics is much more impressive than this philosophical subject.

Ethics is a human concept. How do you make a machine interpret it the same way we do? It's a problem tightly bound to the implementation of AI, which they don't discuss.

Case in point: You program a robot to not harm humans. It has a planning system to figure out how to achieve goals (mow the lawn, etc). It can also adapt its pattern recognition (identify a person or a chair) to better pursue its goals, a requirement in a dynamic world.

Then, it happily decides to identify you as a chair, so destroying you becomes an option if needed to pursue its goal.

From its point of view, it's a perfectly viable path, and to a planning system it's probably much more attractive than letting you stop it from achieving its goal.
Norezar
Mar 18, 2012
This comment has been removed by a moderator.
Callippo
1 / 5 (2) Mar 18, 2012
How can unethical people expect that they will ever produce ethical robots? Actually, even the first autonomous devices (like the drones or Boston Dynamics' Big Dog) have apparently served military purposes from their very beginning.