The ethical robot (w/ Video)

Nov 09, 2010 By Christine Buckley and Bret Eckhardt
Susan and Michael Anderson have programmed a robot to behave ethically. Image by Bret Eckhardt

(PhysOrg.com) -- Philosopher Susan Anderson is teaching machines how to behave ethically.

Professor emerita Susan Anderson and her research partner, husband Michael Anderson of the University of Hartford, a University of Connecticut alumnus, at first seem to have little in common when it comes to their academic lives: she's a philosopher, he's a computer scientist.

But these seemingly opposite fields come together in the Andersons' collaborative work in machine ethics, a field of research that is only about 10 years old.


Using their expertise in different areas, the Andersons have recently accomplished something that’s never been done before: They’ve programmed a robot to behave ethically.

“There are machines out there that are already doing things that have ethical import, such as automatic cash withdrawal machines, and many others in the development stages, such as cars that can drive themselves and eldercare robots,” says Susan, professor emerita of philosophy in the College of Liberal Arts and Sciences, who taught at UConn’s Stamford campus. “Don’t we want to make sure they behave ethically?”

The field of machine ethics combines artificial intelligence techniques with ethical theory, a branch of philosophy, to determine how to program machines to behave in an ethical manner. But there is currently no agreement, says Susan, as to which ethical principles should be programmed into machines.

In 1930, Scottish philosopher David Ross introduced a new approach to ethics, she says, called the prima facie duty approach, in which a person must balance many different obligations when deciding how to act in a moral way – obligations like being just, doing good, not causing harm, keeping one’s promises, and showing gratitude.

The robot the Andersons use in their research has been programmed with an ethical principle. Image by Bret Eckhardt

However, this approach was never developed far enough to instruct people how to weigh these different obligations: it lacked a satisfactory decision principle, one that would tell them how to act when several of the prima facie duties pull in different directions.

“There isn’t a decision principle within this theory, so it wasn’t widely adopted,” says Susan.

That’s where the Andersons come in. By using information about specific ethical dilemmas supplied to them by ethicists, computers can effectively “learn” ethical principles in a process called machine learning.
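One simple way to picture this kind of learning (a hypothetical sketch only; the Andersons' actual system used more sophisticated inference from ethicists' judgments, and the duty names, scales, and cases below are invented for illustration): represent each dilemma as duty-satisfaction scores for two candidate actions, let an ethicist label which action is correct, and adjust duty weights until the learned principle reproduces those labels.

```python
# Hypothetical illustration of learning a decision principle from
# ethicist-labeled comparisons. Each case gives, for two candidate
# actions, how well each satisfies three prima facie duties
# (benefit, non-harm, autonomy) on a -2..+2 scale; the label says
# which action an ethicist judged correct.

CASES = [
    # (duties_for_action_a, duties_for_action_b, preferred: "a" or "b")
    ((1, 1, -1), (0, 0, 1), "a"),  # serious harm at stake: override autonomy
    ((1, 0, -1), (0, 0, 1), "b"),  # mild benefit only: respect the refusal
    ((2, 2, -1), (0, 0, 1), "a"),  # large benefit and harm prevention
]

def score(weights, duties):
    """Weighted sum of duty-satisfaction levels for one action."""
    return sum(w * d for w, d in zip(weights, duties))

def learn(cases, epochs=100, lr=0.1):
    """Perceptron-style learning: nudge weights whenever the current
    principle disagrees with an ethicist's judgment."""
    weights = [0.0, 0.0, 0.0]  # benefit, non-harm, autonomy
    for _ in range(epochs):
        for a, b, label in cases:
            margin = score(weights, a) - score(weights, b)
            target = 1 if label == "a" else -1
            if margin * target <= 0:  # wrong side of the comparison
                for i in range(3):
                    weights[i] += lr * target * (a[i] - b[i])
    return weights

w = learn(CASES)
# The learned weights now reproduce the ethicists' judgments on all cases.
for a, b, label in CASES:
    assert ("a" if score(w, a) > score(w, b) else "b") == label
```

The point of the sketch is the shape of the process, not the numbers: ethicists supply judgments about concrete cases, and the machine generalizes them into a principle it can apply to new situations.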

The toddler-sized robot they have been using in their research, called Nao, has been programmed with an ethical principle that was discovered by a computer. This learned principle allows their robot to determine how often to remind people to take their medicine and when to notify an overseer, such as a doctor, when they don’t comply.

Reminding someone to take their medicine may seem relatively trivial, but the field of biomedical ethics has grown in relevance and importance since the 1960s. And robots are currently being designed to assist the elderly, so the Andersons’ research has very practical implications.

Susan points out that there are several prima facie duties the robot must weigh in their scenario: enabling the patient to receive potential benefits from taking the medicine, preventing harm to the patient that might result from not taking the medication, and respecting the person's right of autonomy. These prima facie duties must be correctly balanced to help the robot decide when to remind the patient to take medication and whether to leave the person alone or to inform a caregiver, such as a doctor, if the person has refused to take the medicine.
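The balancing described above can be sketched as a small scoring function (a hypothetical illustration only; the duty weights, satisfaction values, and decay of a reminder's usefulness are all invented for this example, not the Andersons' actual principle): each candidate action is scored by how well it satisfies the three duties, and the best-scoring action is chosen.

```python
# Hypothetical sketch of a duty-balancing decision. Each available
# action gets a duty-satisfaction level (-2..+2) per duty; the action
# with the highest weighted total wins. All numbers are invented.

DUTY_WEIGHTS = {"benefit": 1.0, "non_harm": 2.0, "autonomy": 1.0}

def choose_action(harm_if_skipped, benefit_of_dose, reminders_so_far):
    actions = {
        "remind": {
            # Repeated reminders lose their effect on an unwilling patient.
            "benefit": max(0, benefit_of_dose - reminders_so_far),
            "non_harm": harm_if_skipped,
            "autonomy": -1,  # reminding mildly intrudes on the patient
        },
        "notify_doctor": {
            "benefit": benefit_of_dose,
            "non_harm": harm_if_skipped,
            "autonomy": -2,  # escalating overrides the patient's refusal
        },
        "do_nothing": {
            "benefit": -benefit_of_dose,
            "non_harm": -harm_if_skipped,
            "autonomy": 2,  # fully respects the refusal
        },
    }
    def total(sat):
        return sum(DUTY_WEIGHTS[d] * v for d, v in sat.items())
    return max(actions, key=lambda a: total(actions[a]))
```

With these invented numbers, a refused dose of a vital medication is first met with a reminder, then escalated to the doctor once reminders stop helping, while a trivial dose is left to the patient's own judgment: `choose_action(2, 2, 0)` yields `"remind"`, `choose_action(2, 2, 2)` yields `"notify_doctor"`, and `choose_action(0, 1, 0)` yields `"do_nothing"`.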

Philosopher Susan Anderson believes artificial intelligence has changed the field of ethics. Image by Bret Eckhardt

Michael says that although their research is in its early stages, it’s important to think about ethics alongside developing artificial intelligence. Above all, he and Susan want to refute the science fiction portrayal of robots harming human beings.

“We should think about the things that robots could do for us if they had ethics inside them,” Michael says. “We’d allow them to do more things for us, and we’d trust them more.”

The Andersons organized the first international conference on machine ethics in 2005, and they have a book on machine ethics being published by Cambridge University Press. In the future, they envision computers continuing to engage in machine learning of ethics through dialogues with ethicists concerning real ethical dilemmas that machines might face in particular environments.

“Machines would effectively learn the ethically relevant features, prima facie duties, and ultimately the decision principles that should govern their behavior in those domains,” says Susan.

Although this is a vision of the future of machine ethics research, Susan thinks that artificial intelligence has already changed her chosen field in major ways.

She thinks that working in machine ethics, which forces philosophers who are used to thinking abstractly to be more precise in applying ethics to specific, real-life cases, might actually advance the study of ethics.

And she believes that robots could be good for humanity: she believes that interacting with robots that have been programmed to behave ethically could even inspire humans to behave more ethically.


User comments : 10


DamienS
5 / 5 (2) Nov 09, 2010
the Andersons have recently accomplished something that’s never been done before: They’ve programmed a robot to behave ethically.

You might want to delay 'programming' ethics until you can program general intelligence and self awareness.
danlgarmstrong
5 / 5 (1) Nov 09, 2010
Why delay? General intelligence will be built from MANY distinct 'modules' - ethics sounding like a pretty important part.
DamienS
4 / 5 (4) Nov 09, 2010
Why delay? General intelligence will be built from MANY distinct 'modules' - ethics sounding like a pretty important part.

Because you cannot build specific, isolated modules from the top-down and then somehow interconnect them to form a general intelligence. An AGI needs to be built from the ground up as an emergent property through learning and interacting with the physical environment. AGI has been stuck with the top-down approach since the field's inception. You need to raise an AGI much like you would raise a child, once the hardware and the software is up to scratch.
LostinSpaceman
not rated yet Nov 09, 2010
First off, who says you can't? And second, even going from the ground up, you STILL need to have the proper theories and codework to add ethics to a system, so why stifle progress?
Thrasymachus
4.3 / 5 (6) Nov 09, 2010
This is not a machine that acts ethically. This is a machine that uses a decision procedure borrowed from a pretty bad theory of ethics. In order to act ethically, one MUST have self-awareness. I'm not saying it's a bad thing to imitate moral behavior, but an imitation of moral behavior is not moral behavior, not any more than a parrot that says "hello" every time the phone rings is trying to answer the phone.
trekgeek1
not rated yet Nov 09, 2010
Yeah, I agree. I saw no display of ethics in this video. Robot was given a bottle which is the primer for a sequence of events----> Robot visually acquires a target---> offer bottle------>

if (they refuse)
    Accept their response
else
    Hand them the bottle
end

There was nothing special here. I saw ASIMO do something similar at a demonstration at Disneyland. The robot reminded the woman to order a pizza for dinner. What a little saint, isn't he? Or he was just programmed to restate the task at a later time.
HealingMindN
5 / 5 (1) Nov 10, 2010
Interesting how they seem to be focused on using this bot as a medication reminder. Doesn't that make it more of an enforcer / big brother bot rather than an ethics bot? Personally, I wouldn't want that thing around telling me what to do. I have cats who do that already.
913spiffy
not rated yet Nov 14, 2010
There's some Great Potential Here. Let's Keep MOVING. Ethics for EveryThing. I'd love this. 0=)
graytay
not rated yet Dec 08, 2010
Good discussion, but science is always raised from the foundation, a prototype.
