The use of technology in policing should be regulated to protect people from wrongful convictions, says researcher

police body camera
Credit: Pixabay/CC0 Public Domain

The proliferation of technology in everyday life can be seen in ChatGPT writing term papers or robots serving meals at restaurants.

Technology can also be used toward less benign ends. Unfortunately, deepfakes—digitally altered images of people—can be used to spread misinformation.

A new edited volume, which I co-edited, considers the use of everyday technologies in the criminal justice system, ranging from detecting deception to web sleuthing to help law enforcement solve crime.

Technology and policing

Consider the use of body-worn cameras by police officers, as in the fatal shooting of Ontario Provincial Police Const. Greg Pierzchala in December 2022. Footage from his body camera will provide evidence during the trial of his accused killers.

Police investigations have also been aided by private citizen sleuths who use technology to gather evidence and help police identify criminals. This was the case with convicted murderer Luka Magnotta: an online network identified him in cat torture videos and provided the information to law enforcement agencies.

Another use of technology is public surveillance for crime prevention through the application of security cameras.

Security cameras are now a ubiquitous feature in public places. In 2021, it was estimated that one billion security cameras were in use around the world, with China accounting for about 54% of them.

In 2020, Toronto had approximately 2,000 cameras at city-owned facilities.

Security cameras may or may not be used in conjunction with facial recognition software.

Finding faces

Facial recognition uses software to identify or confirm someone's identity using an image of their face. Captured faces are compared to a database, often for the purposes of crime prevention.
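For readers curious about the mechanics, the sketch below shows roughly what that database comparison can look like in code. It is a minimal illustration only: the face-embedding model, the enrolled database and the match threshold are assumptions made for the example, not a description of any specific vendor's system.

```python
import numpy as np

def embed_face(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a trained model that turns a face image
    into a fixed-length numeric vector (an 'embedding')."""
    raise NotImplementedError("placeholder for a real face-embedding model")

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two embeddings; 1.0 means identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_matches(probe: np.ndarray,
                 enrolled: dict[str, np.ndarray],
                 threshold: float = 0.8) -> list[tuple[str, float]]:
    """Compare a captured face's embedding against every enrolled embedding
    and return the identities scoring above the (arbitrary) threshold."""
    scores = [(name, cosine_similarity(probe, emb))
              for name, emb in enrolled.items()]
    # Anything above the threshold is reported as a possible match; this is
    # where misidentifications occur if the threshold is too permissive or
    # the embedding model performs worse for some demographic groups.
    return sorted((s for s in scores if s[1] >= threshold),
                  key=lambda s: s[1], reverse=True)
```

The key point is that such a system never "recognizes" anyone with certainty; it returns the closest scores above a cut-off, and how that cut-off is chosen, and how well the model works across different faces, determines how often innocent people are flagged.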

Some retailers have used facial recognition to help reduce theft. In 2022, Josh Soika, an Indigenous man, was confronted by a security guard after being "flagged" as having previously stolen from the store. It was later determined that Soika had been misidentified by the artificial intelligence (AI) facial recognition system used by Canadian Tire.

CBS Detroit interviews researcher Dorothy Roberts about Porcha Woodruff’s misidentification due to facial recognition technology.

In 2023, Canadian Tire Corporation and its dealers agreed to no longer use facial recognition technology.

In the United States, the Federal Trade Commission (FTC) recently banned the pharmacy chain Rite Aid from using facial recognition software for five years. The software had been used to identify customers who had stolen merchandise or displayed other problematic behaviors. In some instances, Rite Aid workers would follow "identified" customers around, accuse them of stealing and call police. People of color were falsely identified at a greater rate than white customers.

It is important to note that someone who has shoplifted in the past isn't necessarily planning to shoplift again.

The use of facial recognition software in Canada is controversial. In 2021, it was reported that Toronto police had used Clearview AI, a facial recognition tool, in 84 investigations, with at least two cases proceeding to prosecution. Once the police chief discovered the practice, however, it was stopped.

Discrimination and AI

Accuracy rates with facial recognition software are above 90% overall, but that figure drops sharply for certain demographics. The software has been documented to misidentify women, racialized people and those between the ages of 18 and 30, with accuracy for these groups falling to as low as 35%.

In February 2023, Porcha Woodruff, a 32-year-old pregnant Black woman from Detroit, was arrested for robbery and carjacking based on a facial recognition match. Police had used AI to run an image of a carjacker caught on video against a mugshot database that contained Woodruff's photo, and the software incorrectly matched the two.

Woodruff was jailed for 11 hours and went into labor. The charges were dropped, and Woodruff is currently suing the city of Detroit and the Detroit Police Department.

Consequences of misidentification

According to the U.S.-based Innocence Project, more than 70% of known wrongful convictions involve mistaken identification by people as a contributing factor. The Canadian Registry of Wrongful Convictions finds that approximately a third of its cases involved false identification.

People can show what is known as "own-race bias" when identifying faces; people are more accurate when identifying faces of their own race than other races.

The misidentification of a perpetrator—whether by a human or an AI program—can lead to the same consequences: being charged, prosecuted or wrongfully convicted. Technology, as with humans, isn't always accurate and may succumb to similar biases.

Legislation must keep up to protect people's rights and privacy. As technology evolves, adequate information and full transparency need to be provided to the public on how, when and where a technology is in use. It is also clear that much more research is needed to better understand the impact of technology on the criminal justice system.

Provided by The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

