Predictive Policing AI Is on the Rise − Making It Accountable to the Public Could Curb Its Harmful Effects


Yves here. This post describes one predictive policing experiment gone awry… and then makes positive noises about another that has not yet started, merely on the basis of its having better principles. Corporate America is awash with lofty value statements that are not even remotely met in practice.

One finds it hard to imagine how predictive policing could satisfy the requirement of presumption of innocence, or how any warrants issued using predictive policing tools could meet Fourth Amendment standards, which bar unreasonable searches and seizures. New York City’s “stop and frisk” was arguably an early implementation of predictive policing and was found to be unconstitutional, even though stopping and frisking is permissible when there is a reasonable suspicion of criminal activity. As summarized by the Leadership Conference Education Fund:

In 1999, Blacks and Latinos made up 50 percent of New York’s population, but accounted for 84 percent of the city’s stops. Those statistics have changed little in more than a decade. According to the court’s opinion, between 2004 and 2012, the New York Police Department made 4.4 million stops under the citywide policy. More than 80 percent of those stopped were Black and Latino people. The likelihood a stop of an African-American New Yorker yielded a weapon was half that of White New Yorkers stopped, and the likelihood of finding contraband on an African American who was stopped was one-third that of White New Yorkers stopped.

Hopefully lawyers in the commentariat will pipe up. But it seems there are good odds that the trend toward “code as law,” where legal requirements are fit to the Procrustean bed of software implementations, will continue. That was rife during the foreclosure crisis, where many judges were simply not willing to consider that the new tech of mortgage securitization did not fit well with “dirt law” foreclosure requirements. They chose in many cases to allow foreclosures that rode roughshod over real estate precedents, because they did not want the borrower to get a free house. Keep in mind that a free house was not what borrowers wanted; they wanted a mortgage modification, which most lenders in the old “bank kept the loan” world would have provided, but which mortgage servicers were not in the business of making.

By Maria Lungu, Postdoctoral Researcher of Law and Public Administration, University of Virginia. Originally published at The Conversation

The 2002 sci-fi thriller “Minority Report” depicted a dystopian future where a specialized police unit was tasked with arresting people for crimes they had not yet committed. Directed by Steven Spielberg and based on a short story by Philip K. Dick, the drama revolved around “PreCrime” − a system informed by a trio of psychics, or “precogs,” who anticipated future homicides, allowing police officers to intervene and prevent would-be assailants from claiming their targets’ lives.

The film probes hefty ethical questions: How can someone be guilty of a crime they haven’t yet committed? And what happens when the system gets it wrong?

While there is no such thing as an all-seeing “precog,” key components of the future that “Minority Report” envisioned have become reality even faster than its creators imagined. For more than a decade, police departments across the globe have been using data-driven systems geared toward predicting when and where crimes might occur and who might commit them.

Far from an abstract or futuristic conceit, predictive policing is a reality. And market analysts are predicting a boom for the technology.

Given the challenges in using predictive machine learning effectively and fairly, predictive policing raises significant ethical concerns. With no technological fix on the horizon, one approach to addressing these concerns is to treat government use of the technology as a matter of democratic accountability.

Troubling History

Predictive policing relies on artificial intelligence and data analytics to anticipate potential criminal activity before it happens. It can involve analyzing large datasets drawn from crime reports, arrest records and social or geographic information to identify patterns and forecast where crimes might occur or who may be involved.

Law enforcement agencies have used data analytics to track broad trends for many decades. Today’s powerful AI technologies, however, take in vast amounts of surveillance and crime report data to provide much finer-grained analysis.

Police departments use these techniques to help determine where they should concentrate their resources. Place-based prediction focuses on identifying high-risk locations, also known as hot spots, where crimes are statistically more likely to happen. Person-based prediction, by contrast, attempts to flag individuals who are considered at high risk of committing or becoming victims of crime.
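To make the place-based approach concrete, here is a minimal, hypothetical sketch in Python. The coordinates, dates, grid size and half-life below are invented for illustration, and real commercial systems are far more elaborate, but the core idea is similar: grid the city, count historical incident reports per cell with a recency weight, and rank the cells.

```python
# Toy place-based "hot spot" scoring sketch. All data and parameters are fabricated.
from collections import defaultdict
from datetime import date

CELL_SIZE = 0.01  # degrees of latitude/longitude per grid cell (hypothetical resolution)

# (latitude, longitude, report date) of past incidents -- made-up sample data
incidents = [
    (40.712, -74.006, date(2024, 1, 5)),
    (40.713, -74.005, date(2024, 2, 17)),
    (40.713, -74.006, date(2024, 3, 2)),
    (40.750, -73.990, date(2023, 11, 20)),
]

def cell(lat, lon):
    """Map a coordinate to a grid cell index."""
    return (int(lat / CELL_SIZE), int(lon / CELL_SIZE))

def hotspot_scores(events, today, half_life_days=90.0):
    """Recency-weighted incident count per cell: older reports count for less."""
    scores = defaultdict(float)
    for lat, lon, when in events:
        age_days = (today - when).days
        scores[cell(lat, lon)] += 0.5 ** (age_days / half_life_days)
    return scores

# Rank cells from "hottest" to "coldest"
scores = hotspot_scores(incidents, today=date(2024, 4, 1))
for grid_cell, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(grid_cell, round(score, 3))
```

Note that the ranking is driven entirely by where past reports were filed. Sending more patrols to high-scoring cells tends to generate more reports in those cells, which raises their scores further, the feedback loop that critics of such tools point to when they warn about reinforcing racial and socioeconomic biases.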

These types of systems have been the subject of significant public concern. Under a so-called “intelligence-led policing” program in Pasco County, Florida, the sheriff’s department compiled a list of people considered likely to commit crimes and then repeatedly sent deputies to their homes. More than 1,000 Pasco residents, including minors, were subject to random visits from police officers and were cited for things such as missing mailbox numbers and overgrown grass.

Lawsuits forced the Pasco County, Fla., Sheriff’s Office to end its troubled predictive policing program.

Four residents sued the county in 2021, and last year they reached a settlement in which the sheriff’s office admitted that it had violated residents’ constitutional rights to privacy and equal treatment under the law. The program has since been discontinued.

This is not just a Florida problem. In 2020, Chicago decommissioned its “Strategic Subject List,” a system in which police used analytics to predict which prior offenders were likely to commit new crimes or become victims of future shootings. In 2021, the Los Angeles Police Department discontinued its use of PredPol, a software program designed to forecast crime hot spots that was criticized for low accuracy and for reinforcing racial and socioeconomic biases.

Necessary Innovations or Dangerous Overreach?

The failure of these high-profile programs highlights a critical tension: Even though law enforcement agencies often advocate for AI-driven tools for public safety, civil rights groups and scholars have raised concerns over privacy violations, accountability issues and the lack of transparency. And despite these high-profile retreats from predictive policing, many smaller police departments are using the technology.

Most American police departments lack clear policies on algorithmic decision-making and provide little to no disclosure about how the predictive models they use are developed, trained or monitored for accuracy or bias. A Brookings Institution analysis found that in many cities, local governments had no public documentation on how predictive policing software functioned, what data was used, or how outcomes were evaluated.

Predictive policing can perpetuate racial bias.

This opacity is what’s known in the industry as a “black box.” It prevents independent oversight and raises serious questions about the structures surrounding AI-driven decision-making. If a citizen is flagged as high-risk by an algorithm, what recourse do they have? Who oversees the fairness of these systems? What independent oversight mechanisms are available?

These questions are driving contentious debates in communities about whether predictive policing as a method should be reformed, more tightly regulated or abandoned altogether. Some people view these tools as necessary innovations, while others see them as dangerous overreach.

A Better Way in San Jose

But there is evidence that data-driven tools grounded in democratic values of due process, transparency and accountability may offer a stronger alternative to today’s predictive policing systems. What if the public could understand how these algorithms function, what data they rely on, and what safeguards exist to prevent discriminatory outcomes and misuse of the technology?

The city of San Jose, California, has embarked on a process that is intended to increase transparency and accountability around its use of AI systems. San Jose maintains a set of AI principles requiring that any AI tools used by city government be effective, transparent to the public and equitable in their effects on people’s lives. City departments also are required to assess the risks of AI systems before integrating them into their operations.

If implemented properly, these measures can effectively open the black box, dramatically reducing the degree to which AI companies can hide their code or their data behind protections such as trade secrecy. Enabling public scrutiny of training data can reveal problems such as racial or economic bias, which can be mitigated but are extremely difficult, if not impossible, to eradicate.

Research has shown that when citizens feel that government institutions act fairly and transparently, they are more likely to engage in civic life and support public policies. Law enforcement agencies are likely to have stronger outcomes if they treat technology as a tool – rather than a substitute – for justice.


7 comments

  1. David in Friday Harbor

    I was a deputy public prosecutor in Silicon Valley for 32 years, and I quickly concluded that these algo-driven “predictive policing” models are software sales-hype. Complete garbage. AI is nothing more than potted confirmation bias.

    When one of my now adult children was in college she took a “criminology” course that asked the same question: Why do some people choose to commit crimes? I suggested to her that the better question is: why do the vast majority of people choose not to commit crime? It’s because their economic, emotional, and social needs have been sufficiently met throughout life. They experience empathy for others and see a downside to anti-social behavior.

    The pre-2012 San Jose Police Department didn’t need predictive algorithms. It was staffed by experienced cops who knew their city and its denizens. What happened in 2012 you might ask? “Enron John” Arnold bankrolled an attack on police and fire pensions after the sock-puppet mayor handed the trust funds over to a cabal who grew the exposure to “alts” from 5% to nearly 50%, mostly hedge funds and real estate that cratered after 2008. http://america.aljazeera.com/articles/2014/3/14/san-jose-pensionreform.html

    There was a mass-exodus of experienced cops who understood the city — fully a third of the force. I am told that they never recovered. Now they are pretending to “do something” with garbage AI.

    So it goes.

  2. The Infamous Oregon Lawhobbit

    Algorithms that “predict” crime “areas” don’t seem to me to be much more useful than any other data collection tool (such as frequent citizen complaints about, say, a particular intersection’s traffic control devices being routinely ignored, resulting in “directed patrols”), but they likely will be sold to municipalities as crime-destroying wunderwaffe, since “grifters gonna grift.” Worse, municipalities tend to be stocked with people who have very little tech savvy and little ability to properly evaluate claims of magic as the snake oil they generally are.

    Algorithms “predicting” behaviors by individuals are not only snake oil, but, as Yves points out, very highly problematic from a criminal justice/constitutional rights standpoint. But it would not surprise me to see them get sold more and more, as constitutional rights erode more and more. See further examples at “27 exceptions to the constitutional requirement to obtain a search warrant.”*

    @David: Having worked all sides of the process from intake to court to probation, I’d suggest that there are two categories who do not commit crimes. The first are the majority, who just plain recognize that crime is wrong and that they shouldn’t do it. A smaller portion are afraid of getting caught and the possible punishment resulting therefrom. There is much Venn overlap between the second category and the ones who DO get caught who wouldn’t have “done it” if they knew they were going to be. :-)

    *could be more by now. That’s from decades ago.

  3. Terry Flynn

    I am scared for another reason. Imagine you work with the inventor of a method that enables huge improvements in personality identification/quantification… and you never knew that the biggest bank in Australia was using it under the radar. It happened to us.

    Friend applied for a relatively low-level IT job at Macquarie. Was profoundly worried about the “psychometric test”. I asked him about it. It was NOT one of those stupid psychometric tests that we worked out how to game in 1996. No, this was a Best-Worst Scaling study, utilising a 13-item Balanced Incomplete Block Design. They MUST have attended one of our courses to learn this, but neither my boss (inventor of the method) nor I (biggest teacher of it) had ever encountered a Macquarie bank attendee. Believe me, we’d have taken notice if we knew THEY were using it.

    Turns out they were using it to perfectly discriminate between candidates. I don’t know if they wanted an army of clones with the same personality profiles, or some mixture… but the bottom line was that YOU CAN DO THAT WITH BWS. We tell you how to do it in our 2015 book (but this all happened back in around 2012). No AI required. So I get worried that WE as researchers are already WAY behind the curve.

    BTW if you want to know how to apply BWS to personality assessment, then Lee & Soutar (University of Western Australia) are the experts. Plenty of publications, including how Schwartz (famous for his List of Values) ditched his convoluted scoring and said “just do BWS”.

  4. thoughtfulperson

    I was impressed with the new US deportation rules, published by Rubio and shown on links here recently. They said basically you can be deported if they think you might say something they don’t like at some future date. No doubt most people would qualify. So it comes down to enforcement – with those rules everyone is presumed guilty.

  5. Alice X

    Here is the Right*: find those who could not reach a level of survival and incarcerate them for private profit.

    Vs the Left*: all humans and their environment deserve an equal basis.

    Alice

    *the in between is on my map, and I look carefully, but I adhere to the Left. I would like to drag the Vichy Left along. Ultimately everyone should come along. We are one of many species.
