Britain's first AI politician claims he will bring trust back to politics—putting him to the test

Credit: Pixabay/CC0 Public Domain

Political parties often like to say their candidates are different from the rest, but Smarter UK's really is, because he isn't human—he's a creation of artificial intelligence (AI). The new political party believes its candidate, AI Steve, can put trust back into politics, at a time when trust has reached new lows.

AI Steve is the AI avatar of Steve Endacott, who runs Neural Voice, the tech company behind Smarter UK's campaign. Endacott the human ran and lost as a Conservative candidate in the 2022 local elections in Rochdale, where he lives.

If AI Steve is elected by the constituents of Brighton Pavilion, Endacott will sit in parliament. This is unlikely, though, as the seat has a large Green majority which only Labour looks able to overturn.

The campaign claims AI Steve will "reinvent democracy," by having constituents propose and vote on what AI Steve should do as a local MP, with Endacott physically appearing in parliament to enact what they decide.

Constituents' collective approval or disapproval via AI Steve determines what Endacott will do (if more than 50% vote for a particular action, he will take it forward). This approach is based on the principle of majority rule, a fundamental aspect of democratic governance: the decisions with the most support are the ones that should be taken.
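
As a rough illustration of that decision rule, the minimal sketch below (in Python, with a hypothetical function name and figures that are not part of Smarter UK's actual system) shows how a strict 50% threshold would operate:

def should_take_forward(votes_for: int, votes_against: int) -> bool:
    """Return True only if strictly more than half of the votes cast are in favour."""
    total = votes_for + votes_against
    if total == 0:
        return False  # no votes cast means no mandate to act
    return votes_for / total > 0.5

# Hypothetical example: 5,200 constituents in favour and 4,800 against clears
# the threshold, even though a large minority is opposed.
print(should_take_forward(5200, 4800))  # True
print(should_take_forward(4800, 5200))  # False

Even in this toy version, a 52-48 split shows how a bare majority could commit the MP to actions that nearly half of participating constituents oppose.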

Given constituents' potentially significant influence over Endacott's parliamentary votes through AI Steve, the party argues that AI could improve trust between voters and their representatives.

Can AI really restore trust in politics? To answer this, I looked at the five elements of trust in government as defined by the OECD: integrity, responsiveness, reliability, openness and fairness, to see how AI Steve measures up. While the idea does, in theory, give voters more say, it raises a host of other legal, ethical and practical issues when it comes to the reality of governance.

Integrity

Our legal and political institutions are built on the premise of human accountability. An AI, no matter how sophisticated, is not human and does not possess the lived experiences that shape our understanding of these values. There is a risk that AI Steve's decisions, based on data and algorithms, may fail to capture the nuances of human values and ethics. It is difficult to see how an AI can truly represent the will of the people, or whether its involvement in politics aligns with the UK's democratic principles.

Responsiveness

With 24/7 availability, AI Steve is certainly accessible to constituents. But this could set an unrealistic expectation for other MPs, who are not supported by AI versions of themselves. It also means that Endacott, who lives in Rochdale, Greater Manchester (though he maintains a house in Brighton, according to AI Steve's website), can avoid appearing in person in his constituency.

Reliability

Involving constituents in directing the actions of their constituency MP requires them to have a good understanding of the issues at hand to make informed decisions. AI Steve's approach is to maintain a 50% support threshold for his actions, which means he could make a decision or vote a certain way even if a significant proportion of constituents are opposed. This is a similar margin to the Brexit referendum, so the potential for polarization and conflict is evident.

Openness

Human MPs can explain their reasons for making a decision that may not be supported by all voters. With AI Steve, we may have more of a black box scenario: his rationale for how he has processed his constituents' proposals may not be readily apparent or understandable to the typical voter. The law has limited reach in addressing this opacity and ensuring that AI decision-making is as transparent and open as possible.

Fairness

Some 63% of people said that abiding by the same rules as everyone else influences how much they trust the national government.

In the event of errors or rule-breaking by AI Steve, who is held accountable? Is it the creators of the AI, the AI itself, Endacott, the voters who supported it, or those who have contributed to his positions? Should the law evolve to clarify AI accountability, how would this affect AI Steve's political decisions?

The verdict

The invention of AI Steve raises more questions than it answers about trust in politics. AI may offer the potential for more direct participation in the political process, but the legal community needs to be proactive and shape laws to ensure the interests of citizens and the integrity of the political system are protected.

Provided by The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.
