Does artificial intelligence deserve the same ethical protections we give to animals?


In the HBO show Westworld, robots designed to display emotion, feel pain, and die like humans populate a sprawling western-style theme park for wealthy guests who pay to act out their fantasies. As the show progresses, and the robots learn more about the world in which they live, they begin to realize that they are the playthings of the person who programmed them.

Viewers might conclude that humans need to afford robots with such sophisticated artificial intelligence, like those in Westworld, the same ethical protections we afford each other. But Westworld is a fictional TV show, and robots with the cognitive sophistication of humans don't exist.

Yet advances in artificial intelligence mean that we're closer than ever to creating machines that are "approximately as cognitively sophisticated as mice or dogs," says John Basl, an assistant professor of philosophy at Northeastern University. He argues that these machines deserve the same ethical protections we give to animals involved in research.

"The nightmare scenario is that we create a machine mind, and without knowing, do something to it that's painful," Basl says. "We create a conscious being and then cause it to suffer."

Animal care and use committees carefully scrutinize research proposals to ensure that animals are not made to suffer unduly, and the standards are even higher for research that involves human stem cells, Basl says.

As scientists and engineers get closer to creating artificially intelligent machines that are conscious, the research community needs to build a similar framework to protect these intelligent machines from suffering and pain, too, Basl says.

"Usually we wait until we have an ethical catastrophe, and then create rules afterward to prevent it from happening again," Basl says. "We're saying we need to start thinking about this now, before we have a catastrophe."

Basl and his colleague at the University of California, Riverside, propose the creation of oversight committees—composed of cognitive scientists, artificial intelligence designers, philosophers, and ethicists—to carefully evaluate research involving artificial intelligence. And they say it's likely that such committees will judge all current research permissible.

But a philosophical question lies at the heart of all this: How will we know when we've created a machine capable of experiencing joy and suffering, especially if that machine can't communicate those feelings to us?

There's no easy answer to this question, Basl says, in part because scientists don't agree on what consciousness actually is.

Some people have a "liberal" view of consciousness, Basl says. They believe all that's required for consciousness to exist is "well-organized information processing," and a means by which to pay attention and plan for the long-term. People who have more "conservative" views, he says, require robots to have specific biological features such as a brain similar to that of a mammal.

At this point, Basl says, it's not clear which view might prove to be correct, or whether there's another way to define consciousness that we haven't considered yet. But if we use the more liberal definition of consciousness, scientists might soon be able to create machines that can feel pain and suffering, and that deserve ethical protections, Basl says.

"We could be very far away from creating a conscious AI, or we could be could be close," Basl says. "We should be prepared in case we're close."


Citation: Does artificial intelligence deserve the same ethical protections we give to animals? (2019, May 9) retrieved 15 July 2019 from https://phys.org/news/2019-05-artificial-intelligence-ethical-animals.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.