
Senators demand that the Justice Department halt funding to predictive policing programs

Investigations have shown how a predictive policing algorithm was both discriminatory and inaccurate.

A group of seven Democratic members of Congress has issued a public letter demanding the Justice Department stop issuing grants to fund predictive policing projects, unless the agency “can ensure that grant recipients will not use such systems in ways that have a discriminatory impact.”

“Mounting evidence indicates that predictive policing technologies do not reduce crime. Instead, they worsen the unequal treatment of Americans of color by law enforcement,” reads the letter, which was first reported by Wired journalist Dell Cameron. 

Cameron was part of a joint effort between The Markup and Gizmodo that published an investigation in 2021 showing how a predictive policing algorithm developed by a company called Geolitica disproportionately directed officers to patrol marginalized communities almost everywhere it was used. 

“Predictive policing systems rely on historical data distorted by falsified crime reports and disproportionate arrests of people of color,” the letter continues. “As a result, they are prone to over-predicting crime rates in Black and Latino neighborhoods while under-predicting crime in white neighborhoods. The continued use of such systems creates a dangerous feedback loop: biased predictions are used to justify disproportionate stops and arrests in minority neighborhoods, which further biases statistics on where crimes are happening.”

The idea behind predictive policing is that by feeding historical crime data into a computer algorithm, it's possible to determine where crime is most likely to occur, or who is most likely to offend. Law enforcement officials can then make proactive interventions, like conducting patrols in predicted crime locations, ideally stopping crime before it occurs.
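
As a rough illustration of the place-based, "hot spot" version of that idea (and not a description of any vendor's actual system), the sketch below bins a handful of made-up historical incident coordinates into a grid and flags the busiest cells for patrol. The coordinates, grid size, and cell-counting rule are all assumptions for the example.

```python
from collections import Counter

# Hypothetical historical incident reports: (latitude, longitude) of past crimes.
# In a real system these would come from years of police report data.
incidents = [
    (40.7357, -74.1724), (40.7361, -74.1730), (40.7449, -74.1645),
    (40.7355, -74.1721), (40.7600, -74.1500), (40.7358, -74.1727),
]

CELL_SIZE = 0.005  # grid cell size in degrees (roughly 500 m); arbitrary for this example

def to_cell(lat, lon, size=CELL_SIZE):
    """Map a coordinate to the index of the grid cell that contains it."""
    return (int(lat // size), int(lon // size))

# Count past incidents per cell, then treat the busiest cells as the predicted
# "hot spots" to patrol next -- which is exactly why reporting bias in the
# historical data carries straight through to the predictions.
counts = Counter(to_cell(lat, lon) for lat, lon in incidents)
for cell, n in counts.most_common(3):
    print(f"patrol cell {cell}: {n} past incidents")
```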

However, a subsequent Markup investigation into Geolitica’s algorithm found that less than one percent of its predictions aligned with a crime later reported to police. Geolitica shut down operations last year. 
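
One simplified way to think about that kind of accuracy check is to ask how often a prediction was followed by a crime reported in the predicted place and time window. The snippet below computes such a hit rate on toy data; the same-cell, same-day matching rule and the sample values are assumptions for illustration, not The Markup's actual methodology.

```python
# Simplified hit-rate check, in the spirit of an accuracy audit: what fraction
# of a model's predictions was followed by a crime reported in the same grid
# cell on the same day? The matching rule and the data here are assumptions.
predictions = [                     # (cell, date) pairs flagged for patrol
    ((8147, -14835), "2020-03-01"),
    ((8147, -14835), "2020-03-02"),
    ((8152, -14830), "2020-03-01"),
]
reports = {                         # (cell, date) pairs where a crime was later reported
    ((8147, -14835), "2020-03-02"),
}

hits = sum(1 for p in predictions if p in reports)
print(f"hit rate: {hits / len(predictions):.1%}")   # 33.3% for this toy data
```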

An investigation by the Tampa Bay Times into a Florida sheriff’s office’s use of a person-based predictive policing program found that the software served as the basis for a campaign of intimidation and harassment against families the system identified as likely to commit crimes at some point in the future. 

The letter highlights the federal government’s role in funding predictive policing programs, specifically through the Justice Department’s Edward Byrne Memorial Justice Assistance Grant Program, and calls for a thorough audit of all grants the agency has issued for predictive policing technology dating back over a decade. 

When The Markup asked the DoJ in 2021 which of its grantees had used money for predictive policing, the agency identified police departments in Newark, N.J., and Alhambra, Calif. In 2022, following congressional demands for more transparency, Justice officials admitted that the agency “does not have specific records” of how many of the law enforcement agencies that received its grants used that money for predictive policing.

Last October, President Biden issued an expansive executive order on the use of artificial intelligence systems, including predictive policing. That order directed Attorney General Merrick Garland to submit a report to the White House “that addresses the use of AI in the criminal justice system, including any use in… crime forecasting and predictive policing, including the ingestion of historical crime data into AI systems to predict high-density ‘hot spots.’”

The senators’ letter urged the DoJ, as part of its presidentially mandated report, to conduct its own analysis of the accuracy and biases of predictive policing systems and to include recommendations for ways this type of technology can be used to “enhance public safety without having discriminatory impacts.”

One example of a different framework for predictive modeling of crime hot spots is risk-terrain modeling. Developed by researchers at Rutgers University, the method combines data about where crime is most likely to occur with land-use information to identify the environmental factors that explain why crime tends to cluster in certain locations. In Newark, N.J., risk-terrain modeling was used to identify city-owned abandoned properties and vacant lots that attracted crime, which were then prioritized for development into public parks or affordable housing.
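
In rough terms, the approach scores a location by the environmental features found there (abandoned buildings, vacant lots, and so on) rather than by its past crime counts alone. The sketch below illustrates that scoring idea with invented features and weights; it is not the Rutgers method itself.

```python
# Toy sketch of the risk-terrain idea: score locations by the environmental
# risk factors present nearby, not by past crime counts alone. The features,
# weights, and blocks below are invented for illustration.
RISK_WEIGHTS = {"abandoned_property": 3.0, "vacant_lot": 2.0, "poor_lighting": 1.0}

locations = {
    "block_a": ["abandoned_property", "vacant_lot", "poor_lighting"],
    "block_b": ["poor_lighting"],
    "block_c": ["vacant_lot"],
}

def risk_score(features):
    """Sum the weights of the risk factors present at a location."""
    return sum(RISK_WEIGHTS.get(f, 0.0) for f in features)

# Rank blocks by environmental risk -- candidates for land-use interventions
# (a new park on a vacant lot, say) rather than for additional patrols.
for block, feats in sorted(locations.items(), key=lambda kv: -risk_score(kv[1])):
    print(block, risk_score(feats))
```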

The senators requested a response to their letter from the DoJ by March 1.

This article was originally published on The Markup and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.