Predictive policing algorithms are racist. They need to be dismantled.

Risk assessments have been part of the criminal justice system for decades. But police departments and courts have made more use of automated tools in the last few years, for two main reasons. First, budget cuts have led to an efficiency drive. “People are calling to defund the police, but they’ve already been defunded,” says Milner. “Cities have been going broke for years, and they’ve been replacing cops with algorithms.” Exact figures are hard to come by, but predictive tools are thought to be used by police forces or courts in most US states. 

The second reason for the increased use of algorithms is the widespread belief that they are more objective than humans: they were first introduced to make decision-making in the criminal justice system more fair. Starting in the 1990s, early automated techniques used rule-based decision trees, but today prediction is done with machine learning.
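The shift is easy to sketch in a few lines of code. The example below is purely illustrative: the features, thresholds, and training data are invented and do not come from any real risk-assessment tool.

```python
# Illustrative only: hypothetical features, thresholds, and data.
from sklearn.tree import DecisionTreeClassifier
import numpy as np

def rule_based_risk(prior_arrests: int, age: int) -> str:
    """A 1990s-style tool: the rules are written by hand and can be read off."""
    if prior_arrests >= 3:
        return "high"
    if prior_arrests >= 1 and age < 25:
        return "medium"
    return "low"

# A machine-learning tool instead learns its thresholds from historical
# police records. If those records are skewed, the learned rules are too.
rng = np.random.default_rng(0)
X = rng.integers(0, 10, size=(500, 2))                 # columns: prior arrests, age bracket
y = (X[:, 0] + rng.normal(0, 1, 500) > 4).astype(int)  # labels derived from past records
model = DecisionTreeClassifier(max_depth=3).fit(X, y)

print(rule_based_risk(prior_arrests=5, age=30))  # "high"
print(model.predict([[5, 3]]))                   # learned risk label, e.g. [1]
```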

Protesters in Charlotte, NC, kneel for George Floyd. CLAY BANKS VIA UNSPLASH

Yet increasing evidence suggests that human prejudices have been baked into these tools because the machine-learning models are trained on biased police data. Far from avoiding racism, they may simply be better at hiding it. Many critics now view these tools as a form of tech-washing, where a veneer of objectivity covers mechanisms that perpetuate inequities in society.

“It's really just in the past few years that people’s views of these tools have shifted from being something that might alleviate bias to something that might entrench it,” says Alice Xiang, a lawyer and data scientist who leads research into fairness, transparency and accountability at the Partnership on AI. These biases have been compounded since the first generation of prediction tools appeared 20 or 30 years ago. “We took bad data in the first place, and then we used tools to make it worse,” says Katy Weathington, who studies algorithmic bias at the University of Colorado Boulder. “It's just been a self-reinforcing loop over and over again.”
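That loop can be seen in a toy simulation. All numbers below are invented for illustration: two districts have identical underlying crime rates, but one starts out over-represented in the arrest records, and the disparity never corrects itself as patrols follow the predictions.

```python
# Toy simulation of a self-reinforcing predictive-policing loop (invented numbers).
import numpy as np

rng = np.random.default_rng(1)
true_rate = {"A": 0.1, "B": 0.1}   # identical ground-truth crime rates
recorded = {"A": 30, "B": 10}      # but district A starts over-represented in the data

for year in range(10):
    total = recorded["A"] + recorded["B"]
    # The "predictive" tool sends 100 patrols in proportion to past records.
    patrols = {d: round(100 * recorded[d] / total) for d in recorded}
    for d in recorded:
        # More patrols in a district means more incidents get recorded there,
        # even though the true rate never differed between districts.
        recorded[d] += rng.binomial(patrols[d], true_rate[d])

print(recorded)  # A keeps far more recorded crime than B, despite identical true rates
```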

Things might be getting worse. In the wake of the protests about police bias after the death of George Floyd at the hands of a police officer in Minneapolis, some police departments are doubling down on their use of predictive tools. A month ago, New York Police Department commissioner Dermot Shea sent a letter to his officers. “In the current climate, we have to fight crime differently,” he wrote. “We will do it with less street-stops—perhaps exposing you to less danger and liability—while better utilizing data, intelligence, and all the technology at our disposal ... That means for the NYPD’s part, we’ll redouble our precision-policing efforts.”

Police like the idea of tools that give them a heads-up and allow them to intervene early because they think it keeps crime rates down, says Rashida Richardson, director of policy research at the AI Now Institute. But in practice, their use can feel like harassment. Researchers have found that some police departments give officers “most wanted” lists of people the tool identifies as high risk. This first came to light when people in Chicago reported that police had been knocking on their doors and telling them they were being watched. In other states, says Richardson, police were warning people on the lists that they were at high risk of being involved in gang-related crime and asking them to take actions to avoid this. If they were later arrested for any type of crime, prosecutors used the prior warning to seek higher charges. “It's almost like a digital form of entrapment, where you give people some vague information and then hold it against them,” she says.

"It's almost like a digital form of entrapment."

Similarly, studies—including one commissioned by the UK government’s Centre for Data Ethics and Innovation last year—suggest that identifying certain areas as hot spots primes officers to expect trouble when on patrol, making them more likely to stop or arrest people there because of prejudice rather than need. 

Another problem with the algorithms is that many were trained on white populations outside the US, partly because criminal records are hard to get hold of across different US jurisdictions. Static 99, a tool designed to predict recidivism among sex offenders, was trained in Canada, where only around 3% of the population is Black compared with 12% in the US. Several other tools used in the US were developed in Europe, where 2% of the population is Black. Because of the differences in socioeconomic conditions between countries and populations, the tools are likely to be less accurate in places where they were not trained. Moreover, some pretrial algorithms trained many years ago still use predictors that are out of date. For example, some still predict that a defendant who doesn’t have a landline phone is less likely to show up in court.
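The landline example shows how a stale predictor works in practice. The toy scoring function below uses invented weights for illustration only: a penalty calibrated decades ago, when lacking a landline was unusual, now raises the score of almost every defendant.

```python
# Hypothetical pretrial score with invented weights, for illustration only.
def failure_to_appear_score(prior_missed_hearings: int, has_landline: bool) -> float:
    score = 0.3 * prior_missed_hearings
    if not has_landline:
        # A weight calibrated on a population from decades ago, when not having
        # a landline was unusual; today it penalizes almost everyone.
        score += 0.5
    return score

print(failure_to_appear_score(prior_missed_hearings=0, has_landline=False))  # 0.5
print(failure_to_appear_score(prior_missed_hearings=0, has_landline=True))   # 0.0
```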


