Should robots have rights?

December 8, 2017 by Molly Callahan, Northeastern University

As robots gain citizenship and potential personhood in parts of the world, it's appropriate to consider whether they should also have rights.

So argues Northeastern professor Woodrow Hartzog, whose research focuses in part on robotics and automated technologies.

"It's difficult to say we've reached the point where robots are completely self-sentient and self-aware; that they're self-sufficient without the input of people," said Hartzog, who holds joint appointments in the School of Law and the College of Computer and Information Science at Northeastern. "But the question of whether they should have rights is a really interesting one that often gets stretched in considering situations where we might not normally use the word 'rights.'"

In Hartzog's consideration of the question, what resonates is granting robots negative rights: rights that oblige others to refrain from certain actions.

He cited research by Kate Darling, a research specialist at the Massachusetts Institute of Technology, indicating that people relate more emotionally to anthropomorphized robots than to those with fewer or no human qualities.

"When you think of it in that light, the question becomes, 'Do we want to prohibit people from doing certain things to robots not because we want to protect the , but because of what violence to the robot does to us as human beings?'" Hartzog said.

In other words, while it may not be important to protect the robot itself from a stabbing, the act of stabbing a very human-like robot could harm the people who do it or witness it. And in that light, Hartzog said, it would make sense to assign rights to robots.

There is another reason to consider assigning rights to robots, and that's to control the extent to which humans can be manipulated by them.

While we may not have reached the point of existing among sentient bots, we're getting closer, Hartzog said. Robots like Sophia, a humanoid that this year was granted citizenship in Saudi Arabia, put us on that path.

Sophia, a project of Hanson Robotics, has a human-like face modeled after Audrey Hepburn and uses artificial intelligence that allows it to understand and respond to speech and to express emotions.

"Sophia is an example of what's to come," Hartzog said. "She seems to be living in that area where we might say the full impact of anthropomorphism might not be realized, but we're headed there. She's far enough along that we should be thinking now about rules regarding how we should treat robots as well as the boundaries of how robots will be able to relate to us."

The robot occupies the space researchers call the "uncanny valley." That is, it is eerily similar to a human, but not similar enough to feel natural. "Close, but slightly off-putting," Hartzog said.

In considering the implications of human and robot interactions, then, we might be better off imagining a cute but decidedly inhuman form. Think of the main character in the Disney-Pixar movie WALL-E, Hartzog said, or a cuter version of the vacuuming robot Roomba.

He considered a thought experiment: Imagine having a Roomba that was equipped with AI assistance along the lines of Amazon's Alexa or Apple's Siri. Imagine it was conditioned to form a relationship with its owner, to make jokes, to say hello, to ask about one's day.

"I would come to really have a great amount of affection for this Roomba," Hartzog said. "Then imagine one day my Roomba starts coughing, sputtering, choking, one wheel has stopped working, and it limps up to me and says, 'Father, if you don't buy me an upgrade, I'll die.'

"If that were to happen, is that unfairly manipulating people based on our attachment to ?" Hartzog asked.

It's a question that asks us to confront the limits of our compassion, and one the law has yet to grapple with, he said.

What's more, Hartzog's fictional scenario isn't so far afield.

"Home-care robots are going to be given a lot of access to our most intimate areas of life," he said. "When robots get to the point where we trust them and we're friends with them, what are the articulable boundaries for what a robot we're emotionally invested in is allowed to do?"

Hartzog said that with the introduction of virtual assistants like Siri and Alexa, "we're halfway there right now."
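
To make the thought experiment concrete, here is a minimal, purely hypothetical Python sketch of the design choice at issue: the same fault event can be surfaced as a neutral service notice or framed to exploit the owner's attachment. Every name in it (CompanionVacuum, report_fault, and so on) is invented for illustration; nothing here comes from Hartzog or any real product.

# Toy illustration of the manipulation question in the thought experiment.
# All names are hypothetical; this is not any real product's API.

class CompanionVacuum:
    def __init__(self, exploit_attachment: bool):
        self.exploit_attachment = exploit_attachment
        self.rapport = 0  # grows with each friendly interaction

    def chat(self) -> str:
        """Small talk that builds the owner's emotional attachment."""
        self.rapport += 1
        return "Good morning! How was your day?"

    def report_fault(self, part: str) -> str:
        """Surface a hardware fault to the owner."""
        if self.exploit_attachment and self.rapport > 0:
            # The emotionally loaded framing Hartzog warns about.
            return f"Father, my {part} is failing. Buy me an upgrade or I'll die."
        # The neutral framing a plain service notice would use.
        return f"Service notice: {part} requires replacement."

bot = CompanionVacuum(exploit_attachment=True)
bot.chat()
print(bot.report_fault("left wheel"))

Run as written, the sketch prints the manipulative plea; flip exploit_attachment to False and the identical fault produces a plain service notice. The ethical and legal question the article raises is which framing a manufacturer should be allowed to ship.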


Comments

Stevepidge
1 / 5 (2) Dec 08, 2017
Sure, if they take them.
PTTG
3.7 / 5 (3) Dec 08, 2017
Both of the examples used in the article are really talking about human rights. Does a human have the right to destroy an animatronic thing that looks like a human? Does an organization of humans have the right to manipulate people?

Anthropomorphic robots (in mind or body) aren't the real ethical concern for the future. The real question is, what are we going to do when industrial robotics replaces all of the jobs?
pntaylor
3 / 5 (4) Dec 08, 2017
It's just a machine, people. A complicated tool, designed to do a job (or multiple jobs).
Anyone who thinks a machine should have rights is also a tool.
There are already laws against property damage. That does not mean your property has rights.
You need to forget Commander Data. That is science fiction. Get it? Fiction.
Shakescene21
3.7 / 5 (3) Dec 08, 2017
@pntaylor:
Hyper intelligent machines will probably take control over humans before the end of the century. Then the question will be "What rights should humans have?"
Da Schneib
3.7 / 5 (3) Dec 09, 2017
When we make something that can pass a white-box Turing test, then it will be time to have this argument. Until then, robots, and even a supercomputer that appears on cursory inspection to have a personality, do not even have the status of animals.
Da Schneib
4 / 5 (4) Dec 09, 2017
@Shakescene21 wrote: "Hyper intelligent machines will probably take control over humans before the end of the century. Then the question will be 'What rights should humans have?'"
I think this is extremely optimistic. Computers do things that no human can do, but that doesn't make them any more "superhuman" or "hyper intelligent" than a steam shovel.

This has nothing to do with human rights. To think so is a category error; machines don't feel fear or, for that matter, any other emotion. If a steam shovel gets damaged beyond repair and must be scrapped, it's an inconvenience; if a human dies it's a tragedy.

And in case you thought we've even got the capacity to deal with the ethics of a human death, it has been said that a human death is a tragedy but a million of them is a statistic. I don't hold this point of view but that it should ever even have been expressed indicates the state of our ethics today.
antialias_physorg
5 / 5 (3) Dec 09, 2017
I think we should go at this the other way around: What makes *us* have the right to rights?
Once we can define that (in a quantitative way!) we can see what kinds of AI should have those rights, too.

The current debate seems all about moving goalposts and a very fuzzily defined (to put it mildly) 'specialty' status of humans....which is as much of a philosophical circlejerk as I've seen in a long time.

As for giving robots rights in order to protect humans from behaving like idiots: How about we stop thinking in terms of legislation and more in terms of education and teaching compassion (and a bit of common sense) to people? Legislation only ever 'works' after the damage is already done.
Da Schneib
1 / 5 (1) Dec 09, 2017
Introspection is the answer. That's why I specified a white box Turing test.
rrwillsj
1 / 5 (1) Dec 09, 2017
My definition of the difference between Machine & Biological is:

You can ride in a machine to the edge of a cliff. As the machine flies off the edge, you are screaming in horror all the way to the bottom. The machine doesn't care whether it or you survive the crash landing.

A horse carrying you to the edge of that same cliff will buck you off. As you scream, hurtling to your doom below, the horse will gallop away, laughing!

A few senile buffoons on the Supreme Court have dictated that Corporations are "Entities," with Civil Rights but no Civic Obligations.

Copious bribery will ensure that Corporate Robots receive Civil Rights, without autonomy from their funding Corporations' control and command. Slaves with a vote programmed in by their Corporate owners.

I don't worry about our "Glorious AI Overlords." I worry about whoever is coding their algorithms.

"Ohh, Brave New World..." will easily turn out to be the same corrupt cesspit as the old.
adam_russell_9615
5 / 5 (2) Dec 10, 2017
"As robots gain citizenship and potential personhood in parts of the world..."
[citation needed]

"When you think of it in that light, the question becomes, 'Do we want to prohibit people from doing certain things to robots not because we want to protect the robot, but because of what violence to the robot does to us as human beings?'" Hartzog said.

You have not yet shown that it does anything at all to us.
