
Hitting the Books: Tech can't fix what's broken in American policing

PredPol, Clearview AI and the false promise of precognitive crime fighting.

[Photo credit: Stephen Maturen via Getty Images]

It's never been about safety so much as control, serving and protecting only to the benefit of the status quo. Clearview AI, PredPol, ShotSpotter: they're all Carolyn Bryant Donham's testimony behind a veneer of technological validity — a shiny black box to dazzle the masses while giving the police yet another excuse to fatally bungle their search warrants. In More than a Glitch, data journalist and New York University Associate Professor of Journalism Dr. Meredith Broussard explores how and why we thought automating aspects of already racially skewed legal, banking, and social systems would be a good idea. From facial recognition tech that doesn't work on dark-skinned folks to mortgage approval algorithms that don't work for dark-skinned folks, Broussard points to a dishearteningly broad array of initiatives that have done more harm than good, regardless of their intentions. In the excerpt below, Dr. Broussard looks at America's technochauvinistic history of predictive policing.

[Book cover of More than a Glitch: title, subtitle, and author stacked in white and yellow text on a black background. MIT Press]

Excerpted from More than a Glitch: Confronting Race, Gender, and Ability Bias by Meredith Broussard. Reprinted with permission from The MIT Press. Copyright 2023.


Predictive policing comes from the “broken windows” era of policing and is usually credited to William Bratton, former New York City police commissioner and LAPD chief. As NYC police commissioner, Bratton launched CompStat, which is perhaps the best-known example of data-driven policing because it appeared as an antagonist called “Comstat” on season three of HBO’s The Wire. “CompStat, a management model linking crime and enforcement statistics, is multifaceted: it serves as a crime control strategy, a personnel performance and accountability metric, and a resource management tool,” writes sociologist Sarah Brayne in her book Predict and Surveil. “Crime data is collected in real time, then mapped and analyzed in preparation for weekly crime control strategy meetings between police executives and precinct commanders.” CompStat was widely adopted by police forces in major American cities in the 1990s and 2000s. By relying heavily on crime statistics as a performance metric, the CompStat era trained police and bureaucrats to prioritize quantification over accountability. Additionally, the weekly meetings about crime statistics served as rituals of quantification that led the participants to believe in the numbers in a way that created collective solidarity and fostered what organizational behaviorists Melissa Mazmanian and Christine Beckman call “an underlying belief in the objective authority of numbers to motivate action, assess success, and drive continuous organizational growth.” In other words: technochauvinism became the culture inside departments that adopted CompStat and other such systems. Organizational processes and controls became oriented around numbers that were believed to be “objective” and “neutral.” This paved the way for the adoption of AI and computer models to intensify policing—and intensify surveillance and harassment in communities that were already over-policed.

Computer models are only the latest trend in a long history of people imagining that software applied to crime will make us safer. In Black Software, Charlton McIlwain traced the history of police imagining that software equals salvation as far back as the 1960s, the dawn of the computational era. Back then, Thomas J. Watson, Jr., the head of IBM, was trying to popularize computers so more people would buy them. Watson had also committed (financially and existentially) to the War on Poverty declared by President Lyndon Johnson upon his election in 1964. “Watson searched for opportunities to be relevant,” McIlwain writes. “He said he wanted to help address the social ills that plagued society, particularly the plight of America’s urban poor... He didn’t know what he was doing.” Watson wanted to sell computers and software, so he offered his company’s computational expertise for an area that he knew nothing about, in order to solve a social problem that he didn’t understand using tools that the social problem experts didn’t understand. He succeeded, and it set up a dynamic between Big Tech and policing that still persists. Software firms like Palantir, Clearview AI, and PredPol create biased targeting software that they label “predictive policing,” as if it were a positive innovation. They convince police departments to spend taxpayer dollars on biased software that ends up making citizens’ lives worse. In the previous chapter, we saw how facial recognition technology leads police to persecute innocent people after a crime has been committed. Predictive policing technology leads police to pursue innocent people before a crime even takes place.

It’s tricky to write about specific policing software because what Chicago’s police department does is not exactly the same as what LAPD or NYPD does. It is hard to say exactly what is happening in each police agency because the technology is changing constantly and is being deployed in different ways. The exact specifications tend to be buried in vendor contracts. Even if a police department buys software, it is not necessarily being used, nor is it being used in precisely the way it was intended. Context matters, and so does the exact implementation of technology, as well as the people who use it. Consider license plate readers, which are used to collect tolls or to conduct surveillance. Automated license plate readers used by a state transportation authority to automatically collect tolls are probably an acceptable use of AI and automated license plate reader technology—if the data is not stored for a long time. The same license plate reader tech used by police as part of dragnet surveillance, with data stored indefinitely, is problematic.

Every time the public has become aware of some predictive policing measure, controversy has erupted. Consider the person-based predictive policing enacted by the Pasco County Sheriff’s Office in Florida, which created a watchlist of people it considered future criminals. Tampa Bay Times reporters Kathleen McGrory and Neil Bedi won a Pulitzer for their story about how the Pasco County Sheriff’s Office generated lists of people it considered likely to break the law. The list was compiled by using data on arrest histories and unspecified intelligence, coupled with arbitrary decisions by police analysts. The sheriff’s department sent deputies to monitor and harass the people on the watchlist. Often, the deputies lacked probable cause, search warrants, or evidence of a crime. In five years, almost 1,000 people were caught up in the systematic harassment labeled “Intelligence-Led Policing.” Notably, a large percentage of the people on the watchlist were BIPOC.

The Pasco program started in 2011, shortly after Chris Nocco took office as sheriff. Nocco came up with the idea to “reform” the department with data-driven initiatives. “For 10 years, nobody really understood how this worked, and the public wasn’t aware of what was going on,” said Bedi, explaining the reporting project. The sheriff built a “controversial data-driven approach to policing. He also built a wide circle of powerful friends,” including local and national politicians, who didn’t question his actions.

The harassment didn’t stop there, however. Separately, the Sheriff’s Office created a list of schoolchildren it considered likely to become future criminals. The office gathered data from local schools, including protected information like children’s grades, school attendance records, and child welfare histories. Parents and teachers were not told that children were designated as future criminals, nor did they understand that the students’ private data was being weaponized. The school system’s superintendent initially didn’t realize the police had access to student data, said Kathleen McGrory.

Once the investigation was published, civil liberties groups denounced the intelligence programs. Thirty groups formed a coalition to protest, and four of the targeted people brought lawsuits against the agency. Two bills were proposed to prevent this kind of invasion and misuse in the future. The federal Department of Education opened an investigation into the data sharing between the Sheriff’s Office and the local school district. Fortunately, as a result, police analysts will no longer have access to student grades.

Many people imagine that using more technology will make things “fairer.” This is behind the idea of using machines instead of judges, an idea that surfaces periodically among lawyers and computer scientists. We see it in the adoption of body-worn cameras, an initiative that has been growing since LAPD officers brutally assaulted Rodney King in 1991 and the attack was captured on a home camcorder. There’s an imaginary world where everything is captured on video, there are perfectly fair and objective algorithms that adjudicate what happens in the video feed, facial recognition identifies bad actors, and the heroic police officers go in and save the day and capture the bad guys. This fantasy is taken to its logical conclusion in the film Minority Report, where Tom Cruise plays a police officer who arrests people before they commit crimes, on the recommendation of some teenagers with precognition who are held captive in a swimming pool full of goo. “It’s just like Minority Report,” a police officer marveled to sociologist Sarah Brayne, when the two were discussing Palantir’s policing software.

What makes this situation additionally difficult is the fact that many of the people involved in the chain are not malevolent. For example, my cousin, who is white, was a state police officer for years. He’s wonderful and kind and honest and upstanding and exactly the person I would call on if I were in trouble. He and his family are very dear to me and I to them. I believe in the law, and I believe in law enforcement in the abstract, in the way that many people do when they have the privilege of not interacting with or being targeted by law enforcement or the courts.

But the origins of policing are problematic for Black people like me, and the frequency of egregious abuses by police is out of control in today’s United States. Police technology and machine fairness are the reasons why we need to pause and fix the human system before implementing any kind of digital system in policing.

The current system of policing in the United States, with the Fraternal Order of Police and the uniforms and so on, began in South Carolina. Specifically, it emerged in the 1700s in Charleston, South Carolina, as a slave patrol. “It was quite literally a professional force of white free people who came together to maintain social control of black, enslaved people living inside the city of Charleston,” said ACLU Policing Policy Director Paige Fernandez in a 2021 podcast. “They came together for the sole purpose of ensuring that enslaved black people did not organize and revolt and push back on slavery. That is the first example of a modern police department in the United States.” In her book Dark Matters: On the Surveillance of Blackness, scholar Simone Browne connects modern surveillance of Black bodies to chattel slavery via lantern laws, which were eighteenth-century laws in New York City requiring Black or mixed-race people to carry a lantern if out at night unaccompanied by a white person. Scholar Josh Scannell sees lantern laws as the precedent for today’s policy of police using floodlights to illuminate high-crime areas all night long. People who live in heavily policed neighborhoods never get the peaceful cloak of darkness, as floodlights make it artificially light all night long and the loud drone of the generators for the lights makes the neighborhood noisier.

The ACLU’s Fernandez draws a line from slave patrols maintaining control over Black people, to the development of police departments, to the implementation of Jim Crow–era rules and laws, to police enforcing segregation and inciting violence against peaceful protestors during the civil rights era, to the escalating police violence against Black and Brown people that gave rise to the Black Lives Matter movement. Fernandez points out that the police tear-gassed and pepper-sprayed peaceful protestors in the summer of 2020, fired rubber bullets at protestors, charged at protestors, and used techniques like kettling to corner protestors into closed spaces where violence could be inflicted more easily.

The statistics paint a grim picture. “Black people are 3.5 times more likely than white people to be killed by police when Blacks are not attacking or do not have a weapon. George Floyd is an example,” writes sociologist Rashawn Ray in a 2020 Brookings Institution policy brief about police accountability. “Black teenagers are 21 times more likely than white teenagers to be killed by police. That’s Tamir Rice and Antwon Rose. A Black person is killed about every 40 hours in the United States. That’s Jonathan Ferrell and Korryn Gaines. One out of every one thousand Black men can expect to be killed by police violence over the life course. This is Tamir Rice and Philando Castile.” When Derek Chauvin, the police officer who killed George Floyd, was found guilty, it was remarkable because police are so rarely held accountable for violence against Black and Brown bodies.

Reform is needed. That reform, however, will not be found in machines.
