Commentary
What are the proper limits on police use of facial recognition?
June 20, 2019

Last month, the city of San Francisco decided to ban the use of facial recognition technology by its various government agencies. While a complete ban in perpetuity likely goes too far, it does reflect a real tension that police use of the technology creates. Cities across the United States will face a similar dilemma as they attempt to balance technology’s promise to improve public services with government’s immense power—and its ability to cause harm. In formulating a coherent policy to govern facial recognition, policymakers should consider the “how,” “when,” and “why” of using such a powerful tool: requiring appropriate confidence thresholds for photographic matches, using facial recognition only after the fact rather than in real time, and limiting its use to the most serious crimes. Finally, governments must secure the data they collect to avoid breaches of sensitive personal information.
At a basic level, facial recognition technology works by scanning the geometry of a face to recognize key features (such as nose size, eye shape, chin prominence, etc.) and the distances between them. This allows the computer to create a virtual map of a face, which it can then compare against other scanned faces in its database, returning a confidence score for each potential match. Currently, facial recognition is most commonly used in consumer technology applications like unlocking smartphones or categorizing your Facebook and Google photos.
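To make that matching step concrete, here is a minimal, illustrative sketch of the thresholding logic. The names and numbers are invented for the example; real systems derive these “face maps” (embeddings) from deep neural networks rather than hand-entered vectors, but the basic idea of comparing a probe face against a database and accepting only matches above a chosen confidence threshold is the same.

```python
import numpy as np

# Hypothetical example: each "face map" is a numeric feature vector.
# Real systems compute these vectors with deep neural networks.
database = {
    "person_a": np.array([0.12, 0.85, 0.33, 0.47]),
    "person_b": np.array([0.90, 0.11, 0.56, 0.20]),
}

def match_face(probe, database, threshold=0.99):
    """Return database entries whose cosine similarity to the probe
    meets or exceeds the chosen confidence threshold."""
    matches = []
    for name, vector in database.items():
        similarity = np.dot(probe, vector) / (
            np.linalg.norm(probe) * np.linalg.norm(vector)
        )
        if similarity >= threshold:
            matches.append((name, float(similarity)))
    return sorted(matches, key=lambda m: m[1], reverse=True)

# A lower threshold returns more (and less certain) candidate matches;
# a higher threshold returns fewer, higher-confidence ones.
probe = np.array([0.11, 0.86, 0.30, 0.50])
print(match_face(probe, database, threshold=0.95))
```

The policy-relevant point is that the threshold is a parameter someone chooses: the same system can behave cautiously or recklessly depending on where that line is set.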
While these applications offer convenience for consumers, government use of the same technology poses some difficult questions—particularly in the criminal justice context. Unlike private entities, government agencies are not subject to competition, and their mistakes typically have further-reaching consequences. It’s one thing for facial recognition to misidentify a friend on Facebook; it is quite another for it to misidentify a suspect. Moreover, the technology is still maturing and the implementation details can be tricky. Thus, it’s sensible to demand that a high level of accuracy and some commonsense safeguards be in place before these technologies are implemented.
Risks of errors or abuse
One concern with government use of facial recognition is that, like other algorithmic systems, the technology is only as good as the data used to train it. Amazon’s Rekognition software—which law enforcement agencies in Oregon and Orlando are already using—has come under fire for misidentifying people of color at higher rates than it misidentifies whites. Amazon has countered that these disparities begin to dissipate at its recommended 99% confidence threshold, but critics point out that police departments are not currently required to use such a high standard (indeed, law enforcement agencies have been known to feed celebrity pictures and forensic sketches into facial recognition software to try to get matches).
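That gap between recommendation and requirement is easy to see in how the service is actually called. The sketch below is illustrative only: the S3 bucket and image names are hypothetical, and it simply shows that Rekognition’s CompareFaces operation returns only matches at or above whatever similarity threshold the caller passes—so whether the 99% floor is honored depends entirely on a parameter the agency chooses.

```python
import boto3

# Illustrative sketch; bucket and object names are hypothetical.
rekognition = boto3.client("rekognition")

response = rekognition.compare_faces(
    SourceImage={"S3Object": {"Bucket": "example-bucket", "Name": "probe.jpg"}},
    TargetImage={"S3Object": {"Bucket": "example-bucket", "Name": "mugshot.jpg"}},
    # Amazon's recommended floor for law enforcement use; nothing stops a
    # department from passing a much lower value here.
    SimilarityThreshold=99.0,
)

for match in response["FaceMatches"]:
    print(f"Possible match with similarity {match['Similarity']:.1f}%")
```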
We know that racial minorities are already involved in the criminal justice system at a disproportionate rate: A study commissioned by the San Francisco district attorney found that while black individuals made up 6% of the city’s population between 2007 and 2014, they made up 41% of arrests. Overreliance on flawed technology—or poor implementation of said technology—risks not only increasing wrongful convictions, but also further exacerbating the racial disparities in the system.
A further concern for the citizens of San Francisco is the potential for police to misuse facial recognition technology as part of broader abuses of state power. Even if the technology can accurately identify faces, advocates worry it could create a surveillance state, increasing arrests for petty crimes without improving public safety. The Chinese government is already using facial recognition to arrest jaywalkers and track Muslim minority groups. If used inappropriately, facial recognition technology can harm a free, democratic society—creating a chilling effect that negatively affects our rights to gather, protest, vote, and move about public areas freely.
Tools to serve the public
On the other hand, we should want our government agencies to innovate and adopt the latest tools that will allow them to best serve the public. Facial recognition technology has real potential to help law enforcement catch criminals and improve public safety. For instance, the technology has already helped to identify Jarrod Ramos, a suspect who currently faces five charges of first-degree murder, when he refused to identify himself after police apprehended him. Most citizens would likely be comfortable with this use of facial recognition technology. And outside of traditional law enforcement contexts, facial recognition can also be used to authenticate government employees at high-security facilities, combat child sex trafficking, and find missing persons.
So while caution is certainly warranted, an outright ban—without any provision that necessitates revisiting the policy—needlessly locks us out of using helpful tools that could assist law enforcement in serious cases when traditional investigation techniques fail.
That said, the use of this technology needs real safeguards; a temporary moratorium on its use while such protections are developed is not out of the question. What might appropriate oversight of law enforcement use of facial recognition technology look like? Matthew Feeney of the Cato Institute lays out some useful suggestions, including a prohibition on real-time capability that would restrict facial recognition to after-the-fact investigations. Among other things, that limit would prevent police body cameras from becoming unrestricted surveillance machines.
Ensuring that facial recognition is only used after the fact would also allow for third-party review. For example, lawmakers could require a judge or magistrate to review the proposed use of facial recognition in each case, similar to the approval process for search warrants. Additionally, to prevent the use of facial recognition for minor offenses like shoplifting, lawmakers could identify which offenses rise to a level of seriousness that would warrant the privacy intrusion the technology creates. Strong restrictions on the individuals included in facial recognition datasets can also prevent people with parking violations from being swept up in a police dragnet. Ideally, only those with an active arrest warrant should be included in the dataset.
Security and accountability
With the collection of biometric information like that used in facial recognition technology comes the related concern of how such information is protected and stored. Just last week, news broke about a data breach of a Customs and Border Protection contractor, which exposed thousands of photos of international travelers that had been collected for facial recognition purposes. The risk of personal data falling into the wrong hands is a very real one. To mitigate this risk, government agencies (and their contractors) should be required to meet appropriate cybersecurity hygiene standards. Restrictions on the length of time biometric information can be stored after its collection can also reduce the harm of a breach.
Finally, if cities question their own ability to assess the technical performance of these tools, they should consider requiring an external audit in the initial procurement contract. An important check against government misuse or accidental algorithmic bias is the ability of civil society organizations to test these systems independently, verify that the technology works accurately, and ensure that no minority populations are unfairly targeted.
As cities like Oakland and Somerville, and states like Massachusetts, consider similar pieces of legislation, questions also arise as to whether these policies are best instituted at the city or state level. This isn’t the first time cities have tried to lead the way on justice reforms—for example, Atlanta recently banned cash bail (though Georgia legislators tried to pre-empt that policy this legislative session). Local restrictions on police technology, however, are more complicated, since cases can sometimes touch multiple jurisdictions. Ideally, then, there would be consistency across a given state in protecting citizens’ privacy rights so that state troopers and local law enforcement are held to the same standards.
While San Francisco is the first city to ban the governmental use of facial recognition, it likely will not be the last. Cities and states are well within their rights to hit the pause button if they ultimately decide the technology is not ready for public deployment. But they should also pair any delay with a tangible development plan.
A moratorium can be a useful tool for creating space to carefully consider the types of restrictions and protections we want to put in place. But the ultimate goal of a moratorium should be the proper use of these tools, not facilitating a kneejerk reaction to a new technology that looks scary. Public anxiety surrounding this technology is understandable, but striking a balanced approach in the face of such fears will likely pay dividends in the future.