How artificial intelligence systems could threaten democracy


U.S. technology giant Microsoft has teamed up with a Chinese military university to develop artificial intelligence systems that could potentially enhance government surveillance and censorship capabilities. Two U.S. senators publicly condemned the partnership, but what China's National University of Defense Technology wants from Microsoft isn't the only concern.

As my research shows, the advent of digital repression is profoundly affecting the relationship between citizen and state. New technologies are arming governments with unprecedented capabilities to monitor, track and surveil individual people. Even governments in democracies with strong traditions of rule of law find themselves tempted to abuse these new abilities.

In states with unaccountable institutions and frequent human rights abuses, AI systems will most likely cause greater damage. China is a prominent example. Its leadership has enthusiastically embraced AI technologies, and has set up the world's most sophisticated surveillance state in the Xinjiang region, tracking citizens' daily movements and smartphone use.

Its exploitation of these technologies presents a chilling model for fellow autocrats and poses a direct threat to open democratic societies. Although there's no evidence that other governments have replicated this level of AI surveillance, Chinese companies are actively exporting the same underlying technologies across the world.

Increasing reliance on AI tools in the US

Artificial intelligence systems are everywhere in the modern world, helping run smartphones, internet search engines, digital voice assistants and Netflix movie queues. Many people fail to realize how quickly AI is expanding, thanks to ever-increasing amounts of data to be analyzed, improving algorithms and advanced computer chips.

Any time more information becomes available and analysis gets easier, governments are interested – and not just authoritarian ones. In the U.S., for instance, the 1970s saw revelations that government agencies – such as the FBI, CIA and NSA – had set up expansive domestic surveillance networks to monitor and harass civil rights protesters, political activists and Native American groups. These issues haven't gone away: Digital technology today has deepened the ability of even more agencies to conduct even more intrusive surveillance.

For example, U.S. police have eagerly embraced AI technologies. They have begun using software that is meant to predict where crimes will happen to decide where to send officers on patrol. They're also using facial recognition and DNA analysis in criminal investigations. But analyses show that the data on which these systems are trained are often biased, leading to unfair outcomes, such as falsely determining that African Americans are more likely to commit crimes than other groups.

AI surveillance around the world

In authoritarian countries, AI systems can directly abet domestic control and surveillance, helping internal security forces process massive amounts of information – including social media posts, text messages, emails and phone calls – more quickly and efficiently. The police can identify social trends and specific people who might threaten the regime based on the information uncovered by these systems.

For instance, the Chinese government has used AI in wide-scale crackdowns in regions that are home to ethnic minorities within China. Surveillance systems in Xinjiang and Tibet have been described as "Orwellian." These efforts have included mandatory DNA samples, Wi-Fi network monitoring and widespread cameras, all connected to integrated data analysis platforms. With the aid of these systems, Chinese authorities have, according to the U.S. State Department, "arbitrarily detained" between 1 and 2 million people.

My research looks at 90 countries around the world, with regime types ranging from closed authoritarian states to flawed democracies, including Thailand, Turkey, Bangladesh and Kenya. I have found that Chinese companies are exporting AI surveillance technology to at least 54 of these countries. Frequently, this technology is packaged as part of China's flagship Belt and Road Initiative, which is funding an extensive network of roads, railways, energy pipelines and telecommunications networks serving 60% of the world's population and economies that generate 40% of global GDP.

For instance, Chinese companies like Huawei and ZTE are constructing "smart cities" in Pakistan, the Philippines and Kenya, featuring extensive built-in surveillance technology. For example, Huawei has outfitted Bonifacio Global City in the Philippines with high-definition internet-connected cameras that provide "24/7 intelligent security surveillance with data analytics to detect crime and help manage traffic."

Hikvision, Yitu and SenseTime are supplying state-of-the-art facial recognition cameras for use in places like Singapore – which announced the establishment of a surveillance program with 110,000 cameras mounted on lamp posts around the city-state. Zimbabwe is creating a national image database that can be used for facial recognition.

However, selling advanced equipment for profit is different from sharing technology with an express geopolitical purpose. These new capabilities may plant the seeds for global surveillance: As governments become increasingly dependent upon Chinese technology to manage their populations and maintain power, they will face greater pressure to align with China's agenda. But for now it appears that China's primary motive is to dominate the market for these technologies and make lots of money in the process.

AI and disinformation

In addition to providing surveillance capabilities that are both sweeping and fine-grained, AI can help repressive governments manipulate available information and spread disinformation. These campaigns can be automated or automation-assisted, and deploy hyper-personalized messages directed at – or against – specific people or groups.

AI also underpins the technology commonly called "deepfake," in which algorithms create realistic video and audio forgeries. Muddying the waters between truth and fiction may become useful in a tight election, when one candidate could create fake videos showing an opponent doing and saying things that never actually happened.

In my view, policymakers in democracies should think carefully about the risks of AI systems to their own societies and to people living under authoritarian regimes around the world. A critical question is how many countries will adopt China's model of digital surveillance. But it's not just authoritarian countries feeling the pull. And it's not just Chinese companies spreading the technology: Many U.S. companies, including Microsoft as well as IBM, Cisco and Thermo Fisher, have provided sophisticated capabilities to repressive governments. The misuse of AI is not limited to autocratic states.



Provided by The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation: How artificial intelligence systems could threaten democracy (2019, April 22) retrieved 20 July 2019 from https://phys.org/news/2019-04-artificial-intelligence-threaten-democracy.html


User comments

Apr 22, 2019
As we see here, most writers on this topic fail to appreciate the reality that such developments are inevitable components of an evolutionary process. The "Internet of sensors" which is essentially described complements "the Internet of effectors" and the "Internet of memory".
It is almost always overlooked that this entity as a whole has a close correspondence to that of biological organisms in general and our species in particular. Moreover, contrary to our naturally anthropocentric intuitions, it is not directly of our making. Viewed objectively, it is rather the present manifestation of an autonomous evolutionary process taking place in the medium of shared human imagination which can be traced back some two million years.
And which can very safely be extrapolated to predict the emergence from the IoT of a new predominant cognitive entity in the not too distant future. Our objective should be to ensure a benign symbiotic relationship.

Apr 23, 2019
"As my research shows, the advent of digital repression is profoundly affecting the relationship between citizen and state..."

-The author appears to be afraid he will lose the ability to lie, cheat, and steal, a common fear among tribalists. This is the real danger AI poses to competition-driven systems like democracy and capitalism - they rely on the thrill of intertribal conflict.

By providing instant access to valid facts, AI will make deceiving the competition far more difficult. When the truth becomes clear and one side is obviously right while the other is wrong, there is no longer the need for politics.

AI will eventually be incorruptible. But until then tribalists and psychopaths will be striving to use it for their own selfish ends. Competition among AI will only serve to improve it to the point where it is incorruptible. Building deception into a machine that has instant access to facts will be beyond the capability of humans.
