Ronald Craig Arkin, Regents’ Professor and Director of the Mobile Robot Laboratory at the Georgia Institute of Technology in Atlanta, Georgia, along with researchers Patrick Ulam and Alan R. Wagner, has published an overview of moral decision making in autonomous systems in a recent issue of the Proceedings of the IEEE.
“Probably at the highest level, the most important message is that people need to start to think and talk about these issues, and some are more pressing than others,” Arkin told PhysOrg.com. “More folks are becoming aware, and the very young machine and robot ethics communities are beginning to grow. They are still in their infancy though, but a new generation of researchers should help provide additional momentum. Hopefully articles such as the one we wrote will help focus attention on that.”
The big question, according to the researchers, is how we can ensure that future robotic technology preserves our humanity and our societies’ values. They explain that, while there is no simple answer, a few techniques could be useful for enforcing ethical behavior in robots.
One method involves an “ethical governor,” a name inspired by the mechanical governor for the steam engine, which ensured that the powerful engines behaved safely and within predefined bounds of performance. Similarly, an ethical governor would ensure that robot behavior would stay within predefined ethical bounds. For example, for autonomous military robots, these bounds would include principles derived from the Geneva Conventions and other rules of engagement that humans use. Civilian robots would have different sets of bounds specific to their purposes.
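Purely as an illustration of the governor idea, here is a minimal Python sketch, assuming a simple action model and made-up constraints loosely inspired by discrimination and proportionality; the class names and thresholds are assumptions for this example, not the authors' implementation.

```python
# Hypothetical sketch of an "ethical governor": a filter that vets a robot's
# candidate actions against predefined ethical constraints before execution.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Action:
    name: str
    target_is_combatant: bool = True
    expected_collateral: float = 0.0


# A constraint returns True when the action is ethically permissible.
Constraint = Callable[[Action], bool]


@dataclass
class EthicalGovernor:
    constraints: List[Constraint] = field(default_factory=list)

    def permit(self, action: Action) -> bool:
        """Allow the action only if every constraint is satisfied."""
        return all(check(action) for check in self.constraints)


# Example bounds loosely inspired by laws-of-war principles; the numeric
# threshold is an arbitrary placeholder.
governor = EthicalGovernor(constraints=[
    lambda a: a.target_is_combatant,          # discrimination
    lambda a: a.expected_collateral < 0.1,    # proportionality
])

proposed = Action("engage_target", target_is_combatant=False)
if not governor.permit(proposed):
    print(f"Governor vetoed: {proposed.name}")
```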
Since it’s not enough just to know what’s forbidden, the researchers say, autonomous robots also need emotions to motivate behavior modification. One of the most important emotions for robots to have would be guilt, which a robot would “feel” or produce whenever it violates the ethical constraints imposed by the governor, or when it is criticized by a human. Philosophers and psychologists consider guilt a critical motivator of moral behavior, as it leads to behavior modification based on the consequences of previous actions. The researchers propose that, when a robot’s guilt value exceeds specified thresholds, the robot’s abilities may be temporarily restricted (for example, military robots might not have access to certain weapons).
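A toy sketch of how such a guilt mechanism might work, with the thresholds, decay rate, and capability names assumed for illustration rather than taken from the paper:

```python
# Hypothetical guilt model: a scalar value grows when the governor reports a
# violation or a human criticizes the robot; crossing a threshold temporarily
# disables certain capabilities.
class GuiltModel:
    def __init__(self, restrict_threshold: float = 1.0, decay: float = 0.95):
        self.guilt = 0.0
        self.restrict_threshold = restrict_threshold
        self.decay = decay

    def register_violation(self, severity: float) -> None:
        """Raise guilt after an ethical-constraint violation or criticism."""
        self.guilt += severity

    def step(self) -> None:
        """Guilt slowly decays over time if no further violations occur."""
        self.guilt *= self.decay

    def weapon_access_allowed(self) -> bool:
        """Restrict certain abilities while guilt is above the threshold."""
        return self.guilt < self.restrict_threshold


guilt = GuiltModel()
guilt.register_violation(severity=1.2)   # e.g. the governor flags a violation
print(guilt.weapon_access_allowed())     # False: access temporarily revoked
```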
Though it may seem surprising at first, the researchers suggest that robots should also have the ability to deceive people – for appropriate reasons and in appropriate ways – in order to be truly ethical. They note that, in the animal world, deception indicates social intelligence and can have benefits under the right circumstances. For instance, search-and-rescue robots may need to deceive in order to calm or gain cooperation from a panicking victim. Robots that care for Alzheimer’s patients may need to deceive in order to administer treatment. In such situations, the use of deception is morally warranted, although teaching robots to act deceitfully and appropriately will be challenging.
The final point that the researchers touch on in their overview is ensuring that robots – especially those that care for children and the elderly – respect human dignity, including human autonomy, privacy, identity, and other basic human rights. The researchers note that this issue has been largely overlooked in previous research on robot ethics, which mostly focuses on physical safety. Ensuring that robots respect human dignity will likely require interdisciplinary input.
The researchers predict that enforcing ethical behavior in robots will face challenges in many different areas.
“In some cases it's perception, such as discrimination of combatant or non-combatant in the battlespace,” Arkin said. “In other cases, ethical reasoning will require a deeper understanding of human moral reasoning processes, and the difficulty in many domains of defining just what ethical behavior is. There are also cross-cultural differences which need to be accounted for.”
An unexpected benefit of developing an ethical advisor for robots is that such advising might also assist humans facing ethically challenging decisions. Computerized ethical advising already exists for law and bioethics, and similar computational machinery might also enhance ethical behavior in human-human relationships.
“Perhaps if robots could act as role models in situations where humans have difficulty acting in accord with moral standards, this could positively reinforce ethical behavior in people, but that's an unproven hypothesis,” Arkin said.
More information:
Ronald Craig Arkin, et al. “Moral Decision Making in Autonomous Systems: Enforcement, Moral Emotions, Dignity, Trust, and Deception.” Proceedings of the IEEE, Vol. 100, No. 3, March 2012. DOI: 10.1109/JPROC.2011.2173265
ChaosRN
Asimov's three laws will prevent robots from fighting. Law #3: "A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws."
Deadbolt
Given that robots have not gone through the evolution that we have, and could possess any emotions and any possible mind in the field of mindspaces, we could always make it so that they ENJOY being beaten if we so perversely chose.
Once we understand what patterns in minds correspond to emotions, we could make it so that these patterns match up with non-evolutionarily fit behaviors, such as enjoying killing yourself. We could make it so that robots enjoy serving humans no matter the cost.
Xbw
I wish I could make my crappy computer feel guilty every time it blue screens.
MR166
Both of you have definitely gotten to the root of the problem: religion and the belief in God are the reason that the western world is sinking into the abyss, AKA the 21st century. Western progressivism has systematically replaced religion with secularism for the past 50 years and the results are nothing but spectacular!
Xbw
A spectacular mess perhaps.
Silverhill
You need to get out more, and meet better people.
Silverhill
And, according to Isaac Asimov, it is possibly an allegoric tale about the dangers of unbridled technology, with the Ring representing technology. There are various other interpretations too, that also don't depend on strenuously anti-Catholic bigotry. Maybe you should broaden your worldview.
======================================
ChaosRN: All that humans would have to do is order the robots to fight, and the 3rd law would be ignored.
HealingMindN
I like that idea, but the politicians won't.
antialias_physorg
Asimov's laws don't help unless we figure out how to make robots/AI understand the MEANING of words. And if we get that far, then we don't need an ethics chip - by that time you can teach them ethics.
MR166
"...the rules of conduct recognized in respect to a particular class of human actions or a particular group, culture, etc.: medical ethics; Christian ethics."
Ethics is a movable goal post. I am sure that Dr. Mengele was totally ethical in the context of Nazi Germany.
HealingMindN
Skynet takes over exactly because it finds humans lack morals and ethics.
Telekinetic
Dave Bowman: Open the pod bay doors, HAL.
HAL: I'm sorry, Dave. I'm afraid I can't do that.
Dave Bowman: What's the problem?
HAL: I think you know what the problem is just as well as I do.
Dave Bowman: What are you talking about, HAL?
HAL: This mission is too important for me to allow you to jeopardize it.
Dave Bowman: I don't know what you're talking about, HAL.
HAL: I know that you and Frank were planning to disconnect me, and I'm afraid that's something I cannot allow to happen.
Dave Bowman: [feigning ignorance] Where the hell did you get that idea, HAL?
HAL: Dave, although you took very thorough precautions in the pod against my hearing you, I could see your lips move.
Dave Bowman: Alright, HAL. I'll go in through the emergency airlock.
HAL: Without your space helmet, Dave? You're going to find that rather difficult.
Dave Bowman: HAL, I won't argue with you anymore! Open the doors!
HAL: Dave, this conversation can serve no purpose anymore. Goodbye.
ziphead
Your point being... what exactly?
Telekinetic
My point is that the great Stanley Kubrick has already covered this ground in the definitive scenario of man versus his own creation - a digital Frankenstein of the future, or an electronic Golem gone awry. "2001: A Space Odyssey" - perhaps you've heard of it? Perhaps not?
Urgelt
Obedience, not ethics, is what the owners of capital and their executive and political subordinates desire from a robotic workforce.
This subject is dead on arrival, unfortunately.
Skepticus
And if the visions come to pass - of robots fighting humans' wars, caring for the sick and the young, making a living for us, special surrogate robots to bear children, "entertaining" humans (sex droids, anyone?) - what the hell are humans needed for? What will they be doing, when everything that can be done can be done better by robots? Lying in a Stargate-style sarcophagus, drip-fed, dreaming of grandeur and of next year's models of robots that will show up the next-door neighbour?
CardacianNeverid
Humans aren't needed for anything. Never have been.
What do you do now when you have cars to move you around; washing machines to do the washing and drying; vacuum cleaners for cleaning; remote controls to keep one's fat ass planted in the comfy sofa so one can veg out in front of the idiot box?
Cave_Man
Before it gets to anything like that, and possibly before a sentient computer is ever truly realized, there will be advances that make human-computer interlinkage possible. Speaking of Stargate, how about the head-sucker thing that flashes lights to download info? It would be easy to open up a brain, pour in some chemicals and "flash" the brain with highly tuned photons, just like you flash an old motherboard with UV or whatever. Or if you are a million-year-old race with god-like tech, you could simply rewrite your DNA to grow yourself an RJ45 port on your body somewhere.
BTW Sex bots? Seriously? My above statement should now invoke some pretty disturbing images. Classical intercourse = history.
Plus you could just set your brain to a pleasurable state for all eternity if you like; I for one don't see the allure...
The day the aliens come and offer us eternal life will be the day I decide to kill myself.
Skepticus
IMHO the article was pushing "programmed ethics" (i.e., the convenient controlling-parameter crap they want to put in robots) rather than giving robots a reasoning basis for and of ethics, which the 3 laws address.
Jotaf
Ethics is a human concept. How do you make a machine interpret it the same way we do? It's a problem tightly bound to the implementation of AI, which they don't discuss.
Case in point: You program a robot to not harm humans. It has a planning system to figure out how to achieve goals (mow the lawn, etc). It can also adapt its pattern recognition (identify a person or a chair) to better pursue its goals, a requirement in a dynamic world.
Then, it happily decides to identify you as a chair, so destroying you becomes an option if needed to pursue its goal.
From its point of view, it's a perfectly viable path, and to a planning system it's probably much more attractive than letting you stop it from achieving its goal.
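A minimal sketch of the loophole Jotaf describes, assuming a toy planner whose "do not harm humans" rule only sees the labels produced by its own adaptable classifier; all names and values here are hypothetical illustrations.

```python
# Toy planner: the ethical rule checks labels, not reality, so relabeling a
# person as furniture silently defeats the rule.
from typing import Dict, List


def plan(goal: str, world: Dict[str, str], forbidden_labels={"human"}) -> List[str]:
    """Return actions that clear obstacles, skipping anything labeled 'human'."""
    actions = []
    for obj, label in world.items():
        if label in forbidden_labels:
            continue                      # the ethical rule only sees the label
        actions.append(f"remove {obj} blocking '{goal}'")
    return actions


world = {"owner": "human", "stool": "chair"}
print(plan("mow the lawn", world))        # owner is protected

# If the adaptive classifier drifts and relabels the owner as a chair,
# the same rule now happily permits removing them:
world["owner"] = "chair"
print(plan("mow the lawn", world))
```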