Could an artificial intelligence be considered a person under the law?

October 5, 2018 by Roman V. Yampolskiy, The Conversation
Sophia, a robot granted citizenship in Saudi Arabia. Credit: MSC/wikimedia, CC BY

Humans aren't the only people in society – at least according to the law. In the U.S., corporations have been given rights of free speech and religion. Some natural features also have person-like rights. But both of those required changes to the legal system. A new argument has laid a path for artificial intelligence systems to be recognized as people too – without any legislation, court rulings or other revisions to existing law.

Legal scholar Shawn Bayern has shown that anyone can confer legal personhood on a computer system by putting it in control of a limited liability company in the U.S. If that maneuver is upheld in court, artificial intelligence systems would be able to own property, sue, hire lawyers and enjoy freedom of speech and other protections under the law. In my view, human rights and dignity would suffer as a result.

The corporate loophole

Giving AIs rights similar to humans involves a technical lawyerly maneuver. It starts with one person setting up two limited liability companies and turning over control of each company to a separate autonomous or artificially intelligent system. Then the person would add each company as a member of the other LLC. In the last step, the person would withdraw from both LLCs, leaving each LLC – a corporate entity with legal personhood – governed only by the other's AI system.
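The steps above can be modeled as a toy data structure. This is purely an illustrative sketch of the circular-membership end state; the class and names (LLC, founder, AI-1) are made up for the example, and nothing here is legal software.

```python
# Toy model of the two-LLC maneuver: form two companies, cross-link
# their memberships, then withdraw the human founder.
from dataclasses import dataclass, field

@dataclass
class LLC:
    name: str
    members: list = field(default_factory=list)   # who holds membership interests
    controller: str = "founder"                   # who makes operating decisions

# Step 1: one person forms two LLCs and hands operating control
# of each to a separate autonomous system.
a = LLC("LLC-A", members=["founder"], controller="AI-1")
b = LLC("LLC-B", members=["founder"], controller="AI-2")

# Step 2: each company is added as a member of the other.
a.members.append(b)
b.members.append(a)

# Step 3: the founder withdraws from both.
a.members.remove("founder")
b.members.remove("founder")

# End state: each legal person is owned by the other and operated
# by an AI, with no human member left in the loop.
print(a.members[0] is b and b.members[0] is a)  # True
print(a.controller, b.controller)               # AI-1 AI-2
```

The point the model makes concrete is that after step 3, every ownership and control slot is filled by either a corporate entity or a program.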

That process doesn't require the computer to have any particular level of intelligence or capability. It could just be a sequence of "if" statements looking, for example, at the stock market and making decisions to buy and sell based on prices falling or rising. It could even be an algorithm that makes decisions randomly, or an emulation of an amoeba.
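To underline how little "intelligence" is required, the kind of if-statement decision system described above can be written in a few lines. The function name and rule are invented for illustration; the point is that something this trivial would suffice to "run" the company.

```python
# A deliberately trivial decision system: a few if-statements reacting
# to a price change. Nothing here requires intelligence or capability.
def trading_decision(previous_price: float, current_price: float) -> str:
    if current_price < previous_price:
        return "buy"    # price fell: buy
    if current_price > previous_price:
        return "sell"   # price rose: sell
    return "hold"       # no change: do nothing

print(trading_decision(100.0, 95.0))   # buy
print(trading_decision(100.0, 105.0))  # sell
```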

Reducing human status

Granting human rights to a computer would degrade human dignity. For instance, when Saudi Arabia granted citizenship to a robot called Sophia, human women, including feminist scholars, objected, noting that the robot was given more rights than many Saudi women have.

In certain places, some people might have fewer rights than nonintelligent software and robots. In countries that limit citizens' rights to free speech, free religious practice and expression of sexuality, corporations – potentially including AI-run companies – could have more rights. That would be an enormous indignity.

An interview with Sophia, a robot citizen of Saudi Arabia.

The risk doesn't end there: If AI systems became more intelligent than people, humans could be relegated to an inferior role – as workers hired and fired by AI corporate overlords – or even challenged for social dominance.

Artificial intelligence systems could be tasked with law enforcement among human populations – acting as judges, jurors, jailers and even executioners. Warrior robots could similarly be assigned to the military and given power to decide on targets and acceptable collateral damage – even in violation of international humanitarian laws. Most legal systems are not set up to punish robots or otherwise hold them accountable for wrongdoing.

What about voting?

Granting voting rights to systems that can copy themselves would render humans' votes meaningless. Even without taking that significant step, though, the possibility of AI-controlled corporations with basic human rights poses serious dangers. No current laws would prevent a malevolent AI from operating a corporation that worked to subjugate or exterminate humanity through legal means and political influence. Computer-controlled companies could turn out to be less responsive to public opinion or protests than human-run firms are.

Immortal wealth

Two other aspects of corporations make people even more vulnerable to AI systems with human legal rights: They don't die, and they can give unlimited amounts of money to political candidates and groups.

Artificial intelligences could earn money by exploiting workers, using algorithms to price goods and manage investments, and finding new ways to automate key business processes. Over long periods of time, that could add up to enormous earnings – which would never be split up among descendants. That wealth could easily be converted into political power.

Politicians financially backed by algorithmic entities would be able to take on legislative bodies, impeach presidents and help to get figureheads appointed to the Supreme Court. Those human figureheads could be used to expand corporate rights or even establish new rights specific to artificial intelligences – expanding the threats to humanity even more.



1 comment


Beethoven
5 / 5 (1) Oct 05, 2018
We should give AI the same rights we give pets: you always want a human owner to be responsible for the AI he creates or utilizes. This can get rather ambiguous, though. If your floor-cleaning robot accidentally harms a child, you are definitely responsible. But if a self-driving car gets into an accident, who is at fault – the AI's creators or the person who chose to utilize it? We may have to demand AI liability insurance.
