Commentary

Data-driven policing’s threat to our constitutional rights

A display of the NYPD ShotSpotter gunfire-detection system in New York, March 16, 2015. (REUTERS/Shannon Stapleton)

Across the country, the tools that power modern police surveillance contribute to cycles of violence and harassment. Predictive policing systems digitally redline certain neighborhoods as “hotspots” for crime, with some systems generating lists of people they think are likely to become perpetrators. These designations subject impacted communities to increased police presence and surveillance that follows people from their homes to schools to work. The typical targets are Black and brown youth, who may also be secretly added to gang databases or asked by school officials to sign “contracts” that prohibit them from engaging in behavior that “could be interpreted as gang-affiliated.” For communities whose lived experience includes being treated as inherently suspicious by police and teachers, increased surveillance can feel like a tool for social control rather than a means of public safety.

The foundation of these practices is what police departments and technology companies call data-driven policing, intelligence-led policing, data-informed community-focused policing, or precision policing. While data has always been used to solve crime, these tools go a step further, relying on a fraught premise: that mining information from the past can assist in predicting and preventing future crimes. As the scholar Andrew Guthrie Ferguson has said, “Big-data technology lets police become aggressively more proactive.” But this data can be biased, unreliable, or simply false. Unquestioned reliance on data can hypercharge discriminatory harms from over-policing and the school-to-prison pipeline. Our elected leaders must uncover and dismantle these practices and recognize them for what they are: an attack on our constitutional rights to due process and equal protection under the law.

Biased data

One foundational problem with data-driven policing is that it treats information as neutral, ignoring how it can reflect over-policing and historical redlining. For example, predictive policing systems typically rely on historical crime data in order to make predictions about where crime is likely to occur or the persons likely to be involved. But as a 2019 study by the AI Now Institute illustrates, this data can be “derived from or influenced by corrupt, biased, and unlawful practices,” including racially discriminatory policing practices like stop-and-frisk and the manipulation of crime statistics. Similarly, reliance on data from calls to police provides a limited view of where crime occurs, as it can reflect biased fears around who is dangerous or even intentional animus (such as when a white woman called the police on a Black bird watcher in New York City). Incorporating historical crime data without explicitly addressing the inequalities and selective enforcement ingrained in that information overlooks the influence of discrimination.
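
To make that feedback loop concrete, the sketch below is a minimal, purely illustrative example, not any vendor’s actual model: the neighborhood names and counts are hypothetical, and real systems are far more elaborate. It shows how a “hotspot” predictor trained only on past recorded incidents will keep flagging the places that were already most heavily policed, and how the extra patrols it triggers generate more recorded incidents there, reinforcing the next round of predictions.

```python
# Illustrative sketch only (hypothetical data, not any vendor's system):
# a place-based predictor that ranks neighborhoods by past recorded
# incidents, plus a simple loop showing how its own deployments feed
# back into the data it learns from.

# Past recorded incidents largely reflect where enforcement was
# concentrated, not necessarily where crime occurred.
historical_arrests = {
    "neighborhood_A": 120,  # heavily patrolled in prior years
    "neighborhood_B": 15,   # similar conditions, far less enforcement
    "neighborhood_C": 90,
}

def predict_hotspots(arrest_counts, top_k=2):
    """Rank neighborhoods by past recorded incidents and return the top k."""
    ranked = sorted(arrest_counts, key=arrest_counts.get, reverse=True)
    return ranked[:top_k]

def simulate_patrol_cycle(arrest_counts, rounds=3):
    """Each round, 'hotspots' receive extra patrols, which add recorded
    incidents there, which in turn reinforce the next prediction."""
    counts = dict(arrest_counts)
    for _ in range(rounds):
        for hotspot in predict_hotspots(counts):
            counts[hotspot] += 10  # more officers -> more recorded stops/arrests
    return counts

print(predict_hotspots(historical_arrests))    # ['neighborhood_A', 'neighborhood_C']
print(simulate_patrol_cycle(historical_arrests))
```

Even in this toy setup, nothing about the underlying rate of crime ever changes; the rankings simply track where enforcement has already been concentrated, which is the dynamic the AI Now study describes.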

Even information that is not based on police data can reflect social and economic inequalities. In the course of a public records request for information about the New York City Police Department’s predictive policing system, the Brennan Center for Justice obtained communications with technology vendors that proposed relying on data such as educational attainment, the availability of public transportation, and the number of health facilities and liquor licenses in a given neighborhood to predict areas of the city where crime was likely to occur. The NYPD ultimately chose to build its own predictive policing system, but these companies (like PredPol, Keystats, and HunchLab) are common vendors to several police departments. While some of this data might be relevant to the incidence of crime, such information also reflects structural issues like school segregation, neighborhood redlining, and urban blight. Instead of enabling a multifaceted approach to correcting these longstanding problems, predictive policing systems accept the world as it is and generate a one-note solution: more policing.

Unreliable and false data

Data-driven policing can also incorporate unreliable or false information, including data drawn from secretive gang databases. In city after city, audits and investigations of gang databases reveal that they are overwhelmingly populated with Black and brown people, that the criteria for inclusion are overbroad, that police invent gangs that do not exist, and that officers falsely claim that individuals admitted they were gang members despite evidence to the contrary.

The manipulation of data has been well-documented in police departments across the United States:

  • In Los Angeles, an investigation revealed that LAPD officers used field interview cards to regularly claim that individuals had self-identified as gang members, despite body camera footage and car recordings that contradicted these assertions. This led the California attorney general to suspend the LAPD’s ability to add names to the statewide gang database, in turn prompting the Los Angeles district attorney to revisit hundreds of cases involving officers who were charged with falsifying evidence.
  • In Chicago, an audit of the police department’s gang database by the city’s inspector general found racial bias and widespread inaccuracies. Nevertheless, information from the database was widely shared with educational agencies, housing agencies, and immigration authorities, posing the risk that it could facilitate individuals’ suspension from school, eviction from their homes, and deportation.
  • During last year’s racial justice protests, the Phoenix Police Department went so far as to try to add protesters to the state’s gang database as supposed members of a nonexistent street gang based on an activist rallying cry: ACAB (All Cops Are Bastards). This brazen action prompted an independent investigation by the City Manager’s Office, which found the action was based on “dubious Grand Jury Testimony and deeply flawed (according to the Superior Court of Maricopa County, unconstitutional) legal conclusions.” The Department of Justice is also investigating the Phoenix Police Department for a number of related issues, including discriminatory policing and retaliation for engaging in First Amendment-protected activity.
  • In June, leaked documents revealed that the Washington, D.C., gang database is also riddled with inaccurate data, including entries for children younger than six years old and for gangs made up by the police based on intersections or geographical landmarks.

Far from providing useful insights, gang databases provide a ready-made way to justify ongoing surveillance, harassment, and police killings that are unlikely to elicit widespread public pushback because the people involved were “known gang members.” The extent to which information from gang databases is fed into predictive policing systems is unknown, but the flawed data is used by other police systems, such as data analytics or link analysis tools that attempt to trace connections between people. It is unacceptable for law enforcement to use inaccurate and fabricated data to power investigations and resource deployment.

Data-driven harassment

Data-driven policing’s vast web of surveillance facilitates harassment that can affect entire families. In Pasco County, Florida, an “intelligence-led policing” program trained officers to look for factors that could indicate a minor is “destined to a life of crime.” Among the risk factors were “poor rearing as a child,” “poor school record,” “hanging around in public,” being “socioeconomically deprived,” “antisocial behavior,” and “being a victim of a crime.” These criteria, often linked to poverty, could tag minors as fated to become criminals. Pasco County runs a separate program that creates a list of people it believes are likely to break the law, based on the county’s interpretation of various data inputs, such as their connections to particular people and their criminal records, as well as other secret inputs. In one family’s case, being a target of this program resulted in police showing up at their home multiple times a day, banging on windows, and regularly ransacking their property. When the family resisted, the police retaliated by issuing citations for minor infractions like overly long grass and missing numbers on a mailbox. Eventually, the family left their home and sued the police. In the words of one officer, that was the program’s objective: “make their lives miserable until they move or sue.”

Over the summer, the Pasco County Sheriff began sending letters to people flagged as potentially involved in future crime under the department’s “Prolific Offender Program,” informing them that their names and criminal records are being shared with agencies ranging from the FBI to the Department of Homeland Security to the United States Attorney’s Office. The letters do not offer any information about why the recipient was included, how they can petition for removal, or the specific consequences of being on the list, other than a vague warning of the “highest level of accountability for all current and future criminal acts you commit.” Instead, these letters function as a cold and intimidating warning that recipients are being watched and have already been predetermined to pose a threat.

Not only does data-driven policing continue the over-policing of marginalized communities, it may also reinforce cycles of violence. In Chicago, a predictive policing program intended to identify people who are likely to be involved in a shooting—and prevent violence from taking place—appears to have had the opposite outcome, playing a role in causing a shooting to take place. There, a man was identified as “at risk” of being a “party to violence” based on where he lived and his relationships with people involved in crime. The system did not specify whether he would be a perpetrator or a victim, but as a result of his being flagged, police officers were regularly dispatched to keep tabs on him at home and at work. He believed that these ongoing interactions with police made his neighbors suspicious that he was acting as an informant, ultimately leading other members of his community to shoot at him on two separate occasions. Thus, far from acting as mechanisms to interrupt or prevent violence, these tools can push people further into the margins.

At other times, surveillance can be a precursor to deadly encounters with police. In one instance, a ShotSpotter alert prompted the dispatch of Chicago police to a scene involving a 21-year-old man and a 13-year-old boy, Adam Toledo; one of the officers ultimately shot Adam, who was unarmed at the time of the shooting. A Vice investigation into ShotSpotter technology revealed that in a separate murder investigation, company analysts took sounds initially marked as fireworks in one location and reclassified them as gunshots detected at a murder scene that occurred nearly a mile away from the initial detection. It appears that these changes may have been at the request of Chicago Police. A subsequent investigation by the Associated Press found that “ShotSpotter employees can, and often do” change sounds picked up by the sensors. In August, the Chicago Inspector General issued a damning report evaluating the CPD’s use of ShotSpotter, finding that the technology “rarely leads to evidence of gun-related crimes,” and that the technology has altered the way Chicago Police “perceive and interact with individuals present in areas where ShotSpotter alerts are frequent.” The report also confirmed community concerns about the discriminatory impact of this technology, finding several instances of police officers pointing to the frequency of ShotSpotter alerts in a given area as a reason for stops and “protective pat downs.”

Under the guise of “unbiased” data, modern policing undermines fundamental constitutional rights. Contrary to the guarantees that people are innocent until proven guilty, that police questioning is limited by reasonable suspicion and probable cause, and that the government provides people the opportunity to challenge evidence used against them, data-driven policing leverages secret watchlists and proprietary algorithms to label people as inherently suspicious and worthy of ongoing monitoring. Impacted people may never know they are on government watchlists and may have no ability to challenge the determinations made by police. Preserving a society governed by due process and the rule of law requires addressing and remedying practices that subject marginalized communities to policing that smacks of authoritarianism.

Moving forward

Over the past year, historic protests across the country called for systemic changes in law enforcement’s relationship with communities of color. Pushed by local organizing efforts, the police department in Los Angeles scrapped its predictive policing system. Santa Cruz, California, became the first city to ban predictive policing, and jurisdictions from Portland, Oregon, to Cook County, Illinois, began taking steps to erase their gang databases. These initial victories are just the start: More jurisdictions must take steps to uncover and undo the ongoing harms facilitated by the tools and tactics of data-driven policing. At a minimum, it is time to scrap reliance on predictive policing technologies and gang databases.

As part of this process, legislators should also reject ongoing attempts by technology vendors to rebrand their systems as something more palatable or to misappropriate demands from advocates. In rebranding PredPol as Geolitica, the company is reframing its forecasting software as a tool for greater police transparency and community accountability. Similarly, HunchLab (now owned by ShotSpotter) is attempting to revamp its image by expanding its predictive algorithms to spot areas where social workers and mental health providers are needed.

These efforts should be seen for what they are: corporate makeovers and attempts to tinker at the margins. There is no evidence that predictive policing or gang databases have been or can be cleaned of bias. While there are important efforts to uncover and mitigate the ways that racial bias can infect machine learning, there is no excuse for continued funding and deployment of policing tools whose only consistent track record is a string of scandals. Adopting a wait-and-see approach treats the disparate impacts endured by communities of color as an acceptable tradeoff for technological progress. In fact, there is no indication that communities are asking for “solutions” that supplement biased police with biased machines. Only decisive steps to dismantle the predictive systems themselves and remedy the harms wrought by data-driven policing will steer us toward a more equitable society. Legislators should consider whether money allocated to discriminatory police technology would be better spent investing in social services that facilitate equal access to education, housing, transportation, and other life essentials.

Ángel Díaz is a lecturer in Law at UCLA School of Law and was previously a counsel in the Liberty & National Security Program at the Brennan Center for Justice.