Using algorithms to determine sentencing may reduce length of prison sentences


American prisons and jails currently hold more than 2 million people—many of them jailed while awaiting trial or serving extremely long prison sentences. New research by Professor Christopher Slobogin, who holds a Milton R. Underwood Chair in Law at Vanderbilt Law School, indicates that a risk-prediction algorithm could help reduce those numbers.

"We have a huge incarceration problem in this country, but none of the current solutions work," he said. "We can use algorithms to help figure out who poses a danger to the community if they're released."

The United States currently incarcerates 0.6 percent of its population—a rate six times higher than in European countries.

"Research shows that measures like decriminalization and elimination of mandatory minimum sentences barely made a dent in the incarceration rate," Slobogin said. "That said, the public won't buy any reform unless you can assure them of their safety."

An ideal risk assessment algorithm would indicate the probability that a given individual would commit a serious crime during a given time period, in the absence of a particular intervention.
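As a concrete illustration, here is a minimal sketch, in Python, of how such an instrument could map case features to a probability of reoffending within a fixed follow-up window using a logistic model. The feature names and weights are invented for illustration only; they are not drawn from Slobogin's work or from any deployed tool, where weights would be fit to outcome data from large validation samples.

    import math

    # Hypothetical weights for an illustrative logistic risk model.
    # A real instrument would estimate these from outcome data on a
    # large validation sample; these numbers are made up.
    WEIGHTS = {
        "intercept": -2.0,
        "prior_felony_convictions": 0.45,
        "age_at_arrest": -0.04,          # risk tends to decline with age
        "failed_prior_supervision": 0.60,
    }

    def reoffense_probability(features):
        """Estimate P(serious offense within the follow-up window,
        absent intervention) as a logistic function of weighted
        case features."""
        z = WEIGHTS["intercept"]
        for name, value in features.items():
            z += WEIGHTS[name] * value
        return 1.0 / (1.0 + math.exp(-z))  # logistic (sigmoid) link

    # Example: a 34-year-old with one prior felony conviction and no
    # prior failures while under supervision.
    p = reoffense_probability({
        "prior_felony_convictions": 1,
        "age_at_arrest": 34,
        "failed_prior_supervision": 0,
    })
    print(f"Estimated risk over the follow-up window: {p:.1%}")  # ~5.2%

Because a validated tool of this kind reports an explicit probability for a defined population and time window rather than a gut judgment, the thresholds used to detain or release can be stated, debated and audited.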

In newly published research, Slobogin explained that by making criminal punishment decisions more transparent, algorithms could force a long-overdue reexamination of the purposes and goals of the criminal justice system. He argues that risk assessment algorithms can:

  • help reduce pretrial detention (the likelihood of someone committing a crime while out on bail is 8 percent) and the length of prison sentences without increasing the risk to the public, a particularly important goal as COVID-19 spreads through penal facilities,
  • mitigate excessively punitive bail and sentencing, which disproportionately affect low-income people and people of color,
  • allocate correctional resources more efficiently and consistently,
  • provide a springboard for evidence-based rehabilitative programs aimed at reducing recidivism by diverting from prison the candidates most likely to succeed.

Calculated risks

Using algorithms to decide the fate of a human life is controversial. Critics claim that algorithms are not effective in identifying who will offend and who will be responsive to rehabilitative efforts. Critics also argue that algorithms can be racially biased, dehumanizing and antithetical to the principles of criminal justice.

Slobogin said that, though the critiques have merit, current methods of predicting risk may be worse. "At least algorithms structure the analysis in a consistent way."

Unstructured decision-making by judges and parole officers is demonstrably biased and reflexive, he added, and often relies on stereotypes and generalizations that ignore the goals of the justice system. Algorithms can do better, he said, even if only in a limited way, particularly if they are designed to compensate for the influence of racialized policing and prosecutorial practices.

If algorithms are validated and used proactively during the pretrial process, most people who are arrested "can keep their jobs, keep their families intact, and help their attorney with their defense by helping track down witnesses," Slobogin said. "By using algorithms to inform sentencing, we can release people earlier, which could help them become productive instead of languishing in prison, where they lose all hope and learn how to be a better criminal."

More information: Christopher Slobogin, Just Algorithms: Using Science to Reduce Incarceration and Inform a Jurisprudence of Risk, www.cambridge.org/us/academic/ … dence-risk?format=PB

Citation: Using algorithms to determine sentencing may reduce length of prison sentences (2021, July 29) retrieved 18 April 2024 from https://phys.org/news/2021-07-algorithms-sentencing-length-prison-sentences.html