Policing by algorithm: Does it work? Is it fair?

Out of more than 50 agencies surveyed, none indicated they had examined the effectiveness of the 'data-driven' approach.

Predictive policing technology developed by the Los Angeles Police Department has been licensed by more than 50 agencies, but questions remain about its effectiveness. Photo courtesy of John Liu

Predictive policing software is being used by law enforcement agencies nationwide, yet a review suggests that almost none of its users, past or present, have clear ways to measure the effectiveness or accuracy of the tool, a lack of oversight many researchers consider irresponsible.

“I remember the only heads up I really got about PredPol was from my chief coming to our briefing and telling us, ‘We have this great new tool. We’re going to support it. It’s PredPol, and it’s going to reduce crime,’” said Captain Brian Bubar of the San Pablo Police Department, whose 2.4 square-mile California town has fewer than 40,000 people.

Bubar was a San Pablo patrol officer when the department acquired the program in 2013; the department’s subscription ended on July 31, 2015. PredPol, one of the first predictive policing systems, was created by the Los Angeles Police Department and the University of California, Los Angeles. Now used by dozens of departments, it draws on law enforcement data from the preceding days and months to generate recommended areas for patrolling.

MuckRock has submitted requests to more than 50 agencies known to have used the technology, asking for information on their predictive policing training and use. Of those that have responded, a few have been able to send their contracts, input data, scientific papers written by PredPol’s makers, annual mayoral presentations, and even some training materials, but none have been able to provide validation studies.

“I was never able to explain how it was identifying these boxes, how it was telling us to get to these points, which I think was a huge component [of its failure], because it did not give an opportunity to give the system validity to the officers,” said Bubar. “And that could happen in a department our size. At the time, we didn’t have a dedicated IT infrastructure. We didn’t have a committee to learn how to incorporate our training staff, incorporate the sergeants, to educate them on these new systems, who were able to get the buy-in from the staff to give it a chance. It was just kind of delivered in our laps, and we were told to use it.”

Though predictive policing software is sold as a “crime prediction” and officer allocation tool for under-resourced agencies, one of the reasons it is so controversial is the contention that it is built on and reinforces racial bias. Using data generated by unfair or questionable policing strategies to train a computer system can simply result in automation of that bias.

“You have to understand that every bias that goes into an instrument’s development is going to be manifest in the tool’s ability. It’s a great concept, but it doesn’t recognize that human behavior is negatively impacted when we use algorithms to predict it. You look at the research: blacks are twice as likely as whites to be targeted for certain behaviors when those behaviors are evenly distributed,” said Howard Henderson, founding director of the Center for Justice Research at Texas Southern University. “You can’t adopt an instrument without validating. That’s unacceptable.”

A report from the AI Now Institute found that some police departments cited by the Department of Justice for unfair policing practices have used data from those periods to inform predictive policing systems.

“These tools are not sufficiently well-tested. They’re biased premises that, in some cases, are faulty. They’re based on a process of learning that I think are misapplied. And in the case of person-based predictive policing, they’re based on assumptions and structures that we’ve already seen have huge numbers of problems. I don’t see any reason to keep using them until we know, what is it exactly that they’re trying to do,” said Suresh Venkatasubramanian, professor at the University of Utah.

Many academics argue that, as they’re currently built and used, predictive policing systems shouldn’t be employed by police at all. In early October, more than 400 academics signed onto a letter to the LAPD Commission challenging claims that scholarship supports use of the tool.

“We keep telling ourselves, ‘We’re not stupid. We know correlation does not equal causation.’ But we’re going to use it anyway,” Venkatasubramanian said. “Why are you going to use it anyway?”

In addition to the concerns about encoding biased policies into police equipment, local validation and accuracy checks are important for determining whether a system can realistically help an agency. Multiple police departments have said they stopped using PredPol because it simply did not work for them: it suggested areas they already knew to be problematic, or offered suggestions that could not be reconciled with the existing demand created by daily urgent needs for service.

“It became almost offensive to patrol officers when the city commits funding to a new analytical tool that’s supposed to reduce crime. We need to be out in high visibility,” Bubar explained. “These were things that we were already identifying.”

Changes to standard procedure

LAPD, one of PredPol’s longest-standing users, announced only last month that it would begin measuring the effectiveness of its data-driven policing techniques, nearly eight years after it started using them. An April report from the LAPD’s Inspector General found significant room for improvement in how the department captures data and evaluates the fairness of the tool’s application. At a mid-October meeting of the Los Angeles Police Commission, representatives from the LAPD acknowledged the shortcomings.

To address them, the department will create a data-driven policing unit and a reference manual detailing how the technology is used.

Dozens of other departments still use similar technology, but community pressure is starting to limit deployments.

“I think that’s where the change has occurred. LA and Philly are places where you can see good examples of this, but that’s because of the enterprising nature of the local community that they were able to do this,” said Venkatasubramanian. “To expect or require communities to be the source of pushback is unfair, although, yes, that’s probably the only way it’s going to happen.”

One point at which oversight might be implemented is during the initial procurement or subsequent justification meetings with city councils.

The use of a tool making predictions or decisions about force deployment is akin to implementing a policy stance, says Deirdre K. Mulligan, faculty director of the Berkeley Center for Law and Technology, and as with any policy enacted by law enforcement, there needs to be thought and intention behind its adoption.

“When it comes to predicting crime the underlying data is so fraught that it seems it can’t do anything but play racism forward,” wrote Mulligan. “I’ve been focused on all the policies that are embedded in the ‘tools,’ and the need for agencies to understand that adopting a tool may be akin to adopting a policy and, therefore, requires expertise, reasoned decision making, and public participation. How do we make sure the public is informed and in control of key normative decisions when processes are offloaded to an algorithm?”
