Biometric identification systems record immutable personal characteristics in a machine-readable format. When used by governments, they can solve a hard problem: verifying personal identity in a way that cannot be faked. But in doing so, these systems create risks for the people whose data is collected, risks that range from how the data is stored to what happens if the data falls out of the collecting agency's hands.
The risks posed by the collection and use of biometric data were disturbingly illustrated by the Taliban takeover of Afghanistan late last year, when anti-government forces seized power and inherited a powerful biometric identification system built by the U.S. military. The Handheld Interagency Identity Detection Equipment (HIIDE) system was designed as a way for U.S. forces to easily identify individuals in the field and tell friend from foe. But in the hands of the Taliban, these systems risked revealing the identities of individuals who had worked with American forces, potentially exposing them to reprisal. An unshakeable record of identity risked becoming a mechanism for revenge, punishment, or exclusion.
Nearly two decades after HIIDE was first deployed by U.S. forces, the device offers an instructive look at the promises and perils of biometric identification systems, how they can be used and misused, and what lessons it holds for efforts to constrain biometric data collection.
HIIDE and seek
In cataloging the immutable physical characteristics of a person—finger and palm prints, iris scans, facial imagery, and DNA—biometric identification systems combine multiple identifiers to produce a more complete, accurate database of identity. U.S. forces began using biometric identification systems in Kosovo in 2001, with the Biometric Automated Toolset, or BAT. This device combined a laptop computer, fingerprint scanner, iris reader, and digital camera, all operated in a sit-down setting. The immediate problem BAT attempted to solve was verifying the identity of local hires, ensuring that someone deemed a bad hire at one base was kept out of other bases.
HIIDE was first introduced in 2005, and in 2006 HIIDE-maker Viisage won a $10 million contract from the Department of Defense for the handheld device, on the premise that it would collect multiple kinds of biometric information in one unit, and could do so in the field. The company touted the device “for mobile identification of individuals on the battlefield, at border checkpoints, in airports, in detention centers, and for checking individuals against known watch lists.”
By 2007, the device was in the hands of U.S. forces in Iraq, used to register and check the identities of candidates for police training. Individuals in the HIIDE database would be coded as green or red—green for friendly, red for a potential threat. Individuals could be enrolled in the database in a number of ways: as part of a job application, as with the Iraqi police, or simply by passing through a checkpoint where soldiers were identifying people with HIIDE. The device could be used by soldiers on patrol, who found it more effective than asking for an identification card. In the chaos of post-invasion Iraq, HIIDE allowed the U.S. military to create from scratch a national identification system—one that could not be deceived by forged paperwork or stolen uniforms. Concerned that bases might be breached by hostile forces gaining access as local workers, the U.S. military turned to HIIDE to ensure that only authorized and vetted people were allowed access.
As the war on terror marched on, biometrics would play a key role in its administration. Once deployed in Afghanistan, HIIDE was used to confirm whether soldiers on the payroll of the Afghan National Defense Forces were showing up for duty—an effort to eliminate so-called “ghost soldiers.”
The use of biometric data in a national database sits at the center of a scathing look by the MIT Technology Review into the dangers of building such a database and then having it fall into the hands of a victorious enemy. As early as 2016, the Taliban used the government’s biometric system to identify members of Afghan security forces traveling on buses hijacked by the group. By the time the Taliban took over last year, the database had grown to include “details on the individuals’ military specialty and career trajectory, as well as sensitive relational data such as the names of their father, uncles, and grandfathers, as well as the names of the two tribal elders per recruit who served as guarantors for their enlistment,” the MIT Technology Review reported.
In using biometrics to identify their enemies, the Taliban had ripped a page straight from the U.S. military’s playbook. “Knowing who belongs in a village—who they are, what they do, to whom they are related, and where they live—all helps to separate the locals from the insurgents,” the Commander’s Guide to Biometrics in Afghanistan, published in April 2011, observes. “Targeting is enhanced through the use of biometrics by positive identification of the target.”
This is the crux of biometric data: once a tool and a database exist that can permanently identify every person enrolled, the risk lies entirely in how whoever possesses that database chooses to use it.
Knock-on consequences of biometric data collection
The risks specific to the HIIDE device stem from its design requirements. Each HIIDE unit had finite memory capacity, with the data uploaded over secure internet connections to a central database. Storing many identities locally was an adaptation to unreliable or irregular access to internet infrastructure, making the tool useful in the field even when offline. Keeping the threshold for access below the level of classification and mandatory encryption meant that soldiers in the field could input data easily and that data could be combined with other entries without obstacle. It also meant that when a HIIDE unit and its database fell into enemy hands, there was little inherent in the technology to prevent access to the data.
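The design tradeoff described above, finite local storage that syncs to a central database when a connection is available, can be sketched as a simple offline-first queue. This is a hypothetical illustration of the pattern only, not the actual HIIDE software; the class and field names are invented.

```python
from collections import deque

class EnrollmentCache:
    """Hypothetical sketch of a field device that stores biometric
    records locally and uploads them in bulk when a connection is
    available. Not the actual HIIDE implementation."""

    def __init__(self, capacity: int):
        self.capacity = capacity   # finite on-device memory
        self.pending = deque()     # records awaiting upload

    def enroll(self, record: dict) -> None:
        # Records are held in the clear on the device itself,
        # mirroring the unencrypted-by-design tradeoff noted above.
        if len(self.pending) >= self.capacity:
            raise RuntimeError("device full: sync before enrolling more")
        self.pending.append(record)

    def sync(self, central_db: list) -> int:
        """Flush all locally held records to the central database;
        returns the number of records uploaded."""
        uploaded = 0
        while self.pending:
            central_db.append(self.pending.popleft())
            uploaded += 1
        return uploaded
```

The convenience that makes such a design useful in the field is exactly what makes a captured unit dangerous: everything in `pending` is readable to whoever holds the device.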
When the use of biometric technologies in the war on terror was being debated, there was little recognition of its risks. “Biometric identifiers are the most secure and convenient way to authenticate and identify people because they cannot be borrowed, stolen, forgotten, or forged,” Sen. Dianne Feinstein said during a November 2001 hearing. “If government does not get involved to provide some order and structure,” Feinstein continued, “then the market will result in a gradual and uneven adoption of biometric identifiers that will continue to leave our country vulnerable to terrorist attack.”
The fail state, as the senator understood it 20 years ago, was not that the collected data would be used to cause harm, but that the implementation would be incomplete and harm would happen anyway. In the intervening two decades, the understanding of harm has instead shifted to what happens once the data is collected, and what rights to privacy people have regarding the collection and use of their biometric data.
Recognizing these harms, localities, nations, and international organizations have moved to restrict biometric data collection, retention, and uses. The European Union’s General Data Protection Regulation, which went into effect in May 2018, prohibits the processing of biometric data for the purpose of uniquely identifying an individual, except in certain circumstances. The regulation, which also includes provisions on data storage and security, does not prevent companies or governments from collecting and using biometric data. Instead, it mandates rules for how that collection can take place, preserving the rights of individuals and mitigating the harm to them from any lost or breached data. In 2008, Illinois passed the Biometric Information Privacy Act, becoming the first state to regulate biometric data. The act sets out three principles of biometric data collection: consent from individuals before the collection of the data, the destruction of biometric identifiers in a timely manner, and the secure storage of biometric data while held by companies.
The recent history of biometric identification systems has left most privacy activists convinced that the best way to prevent the harms of such systems is to not collect data in the first place. “Better protections on information and its uses can only go so far,” writes Matthew Guariglia, a policy analyst at the Electronic Frontier Foundation. “In many instances, the only way to ensure that people are not made vulnerable by the misuse of private information is to limit, wherever possible, how much of it is collected in the first place.”
For the United States, a reliance on HIIDE-like systems, or biometrics in general, compounds the risk to allies and local partners in future counterinsurgency wars. Building a database to ensure the fixed identity of allies means creating a future weapon that can be used against them should the United States leave. Yet the benefits HIIDE offered for targeting and security purposes suggest that the military is likely to continue using such technology.
Threats common during counterinsurgency warfare, like attacks disguised as coming from local allies, erode trust between U.S. and local forces and make counterinsurgency operations untenable. Biometrics can mitigate such risks while the United States is still actively involved in the country, but collecting the data and failing to secure it creates a future vulnerability for local allies. Encrypting data in biometric identification devices and systems might make unintended uses of such data more difficult, but encrypting data is ultimately only one aspect of the data-protection regimes required to secure such sensitive data.
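One data-protection measure alluded to above can be sketched in a few lines: storing only a keyed hash (an HMAC) of an identifier rather than the raw record, so that a captured database is useless without the key. This is a hypothetical illustration of the principle, not a scheme any fielded system is known to use; real biometric templates are noisy and require fuzzy matching, which exact-match hashing does not provide.

```python
import hashlib
import hmac

# Hypothetical sketch: instead of storing a raw identifier, store
# HMAC-SHA256(key, identifier). Whoever holds the key can still
# verify a candidate against the database, but the database alone
# reveals nothing about the underlying identifiers.
# Simplification: real biometric matching tolerates sensor noise,
# so exact-match hashing only works for stable identifiers.

def protect(identifier: bytes, key: bytes) -> str:
    """Return the keyed digest that would be stored in the database."""
    return hmac.new(key, identifier, hashlib.sha256).hexdigest()

def matches(candidate: bytes, stored_digest: str, key: bytes) -> bool:
    """Check a candidate against a stored digest, in constant time."""
    return hmac.compare_digest(protect(candidate, key), stored_digest)
```

The design point is where the key lives: if it is stored alongside the database, as operational convenience encourages, the protection evaporates, which is why the article treats encryption as only one part of a larger data-protection regime.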
Biometric data collection in war zones is designed as an expediency, meeting immediate security needs with the tools at hand. Yet the data collected in the name of an immediate security concern can endure beyond the war, and even decades-long conflicts eventually end. Policymakers looking to mitigate the harm from data collection tools in the future would be wise to look at the existing record of how captured data has contributed to harm, and mandate safeguards.
Kelsey Atherton is a military technology journalist based in Albuquerque, New Mexico. His reporting has appeared in Popular Science, Breaking Defense, and The New York Times.