The dangers of tech-driven solutions to COVID-19

The Care19 mobile app, which the governors of North Dakota and South Dakota have asked residents to download to assist in contact tracing during the global outbreak of the coronavirus disease (COVID-19), is seen on a phone, U.S. April 24, 2020. REUTERS/Paresh Dave

Imagine a world in which governments and tech firms collaborate seamlessly and benevolently to fight the spread of COVID-19. Public-health officials use automated proximity-detection systems to help them with contact tracing, and they carefully protect people’s personal data in order to build and maintain public trust. Social media platforms facilitate the widespread release of government public service announcements, which include clear information about the virus, the disease, and recommended mitigation strategies, and public officials reinforce that information with appropriate responses.

Now consider the world we have, in which governments and firms are responding to the pandemic in a less coordinated, more self-interested fashion. Although few sensible people have anything good to say about the federal government’s response, reactions to the pandemic-management tools designed by tech firms have been more mixed, with many concluding that such tools can minimize the privacy and human rights risks posed by tight coordination between governments and tech firms. Contact tracing done wrong threatens privacy and invites mission creep into adjacent fields, including policing. Government actors might (and do) distort and corrupt public-health messaging to serve their own interests. Automated policing and content control raise the prospect of a slide into authoritarianism.

Recent events around the world and in the United States demonstrate that the threat of a slide into authoritarianism is real. But we think it is also clear that entrenched habits of deferring to private-sector “solutions” to collective problems have undermined our capacity for effective pandemic response. What’s more, failures to hold tech firms accountable for their uses of personal information have actually made us more vulnerable to prolonged, uncontainable outbreaks.

We are not the first to sound alarm bells about the role of platforms in facilitating the public-health response to COVID-19. But most critics have focused narrowly on classic privacy concerns about data leakage and mission creep—especially the risk of improper government access to and use of sensitive data. Apple and Google released an application programming interface (API), tailored to address those criticisms, that enables apps for proximity tracing and exposure notification. But that approach fails to address more fundamental obstacles to creating a safe and sustainable system of public-health surveillance, and it also creates new obstacles.

Enshrining platforms and technology-driven “solutions” at the center of our pandemic response cedes authority to define the values at stake and deepens preexisting patterns of inequality in society. It also ignores platforms’ role in fostering and profiting from the disinformation that hobbles collective efforts to safeguard the public’s health. Effective, equitable pandemic response demands deeper, more structural reforms regulating the platforms themselves.

Platform business models magnify the risks of data leakage and mission creep

To understand why platform business models magnify classic privacy concerns, consider two examples that have been in the news lately. An investigation by Jumbo, the maker of a privacy app, recently revealed that the Care19 smartphone app developed for North and South Dakota’s health departments was sharing user information with both Foursquare and Google, even though the app’s privacy policy told users no such sharing would occur. Meanwhile, an investigation by The New York Times into China’s Alipay Health Code, an exposure notification app and population movement control system, revealed that the app shares data with the police.

It is tempting to interpret the Care19 example as illustrating public-sector incompetence and the Alipay Health Code example as illustrating public-sector malice. In both cases, however, the data were collected and shared in ways that are central to the business models of the companies involved.

The Care19 app was developed for the state health departments by ProudCrowd, a private developer of location-based social networking services. And it leaked information to other private entities because location-based people analytics tools are designed to be leaky. ProudCrowd used off-the-shelf software development kits, and the tools worked exactly as their original designers had intended, transmitting the “advertising identifiers” associated with user devices to Foursquare and Google, among others.
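
The mechanics are worth making concrete. The sketch below is a minimal Python illustration with invented class names, endpoints, and identifiers; it is not ProudCrowd’s code or any real analytics kit. It shows how an embedded SDK can quietly bundle a device’s advertising identifier and location into every event it reports, outside the host app’s control and regardless of what the app’s privacy policy promises.

```python
# Hypothetical sketch of how an embedded analytics SDK can leak data.
# The class, endpoint, and identifiers below are invented for illustration;
# they do not correspond to ProudCrowd's code or to any real SDK.
import json


class LocationAnalyticsSDK:
    """Stand-in for an off-the-shelf location-analytics kit bundled into an app."""

    def __init__(self, api_key: str, endpoint: str):
        self.api_key = api_key
        self.endpoint = endpoint

    def log_event(self, event_name: str, device: dict, location: dict) -> dict:
        # The host app only asked the kit to "log an event," but the kit
        # attaches the device's advertising identifier and location by default.
        payload = {
            "api_key": self.api_key,
            "event": event_name,
            "advertising_id": device["advertising_id"],  # silently included
            "lat": location["lat"],
            "lon": location["lon"],
        }
        # A real SDK would transmit this payload to its own servers here,
        # invisible to the host app's users and to its privacy policy.
        print(f"Would send to {self.endpoint}: {json.dumps(payload)}")
        return payload


sdk = LocationAnalyticsSDK(api_key="demo-key",
                           endpoint="https://analytics.example/v1/events")
sdk.log_event(
    "checkin_recorded",
    device={"advertising_id": "38400000-8cf0-11bd-b23e-10b96e40000d"},
    location={"lat": 46.8083, "lon": -100.7837},
)
```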

Now consider the mission creep problem: the risk that government authorities will adapt automated tools originally intended for contact tracing to enforce quarantine and stay-at-home compliance, as occurred with China’s Alipay Health Code. Many might assume that an automated system designed to restrict the mobility of thousands or even millions of people would only exist in authoritarian countries. But its major components already exist here, too. Consider Google’s community mobility tool, which uses aggregated geolocation data (collected via “advertising identifiers”) to show mobility patterns within communities. Google didn’t need to do much to design that tool because fine-grained geolocation data is at the heart of the ad-based business model. If Google wanted to develop a community mobility app similar to Alipay Health Code to push out to people’s phones, it could easily do so using the granular data about individual mobility that it already has.
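
A toy example helps show why the distance between an aggregate mobility report and individual movement enforcement is so short. The data, names, and functions below are invented and have nothing to do with Google’s actual pipeline; the point is only that both views can be computed from the same per-device location records.

```python
# Toy illustration only: invented data and functions, not Google's pipeline.
# The same per-device location records can feed an aggregate mobility report
# or a per-individual movement check with a few lines of code either way.
from collections import defaultdict

# (device_id, day, kilometers traveled) -- a stand-in for granular location data
pings = [
    ("device_a", "2020-04-01", 12.0),
    ("device_b", "2020-04-01", 3.5),
    ("device_a", "2020-04-02", 0.4),
    ("device_b", "2020-04-02", 0.2),
]

BASELINE_KM_PER_DEVICE = 10.0  # hypothetical pre-pandemic daily average


def community_mobility(records):
    """Aggregate view: percent change in average daily travel versus baseline."""
    totals, devices = defaultdict(float), defaultdict(set)
    for device, day, km in records:
        totals[day] += km
        devices[day].add(device)
    return {
        day: round(
            100 * (totals[day] / len(devices[day]) - BASELINE_KM_PER_DEVICE)
            / BASELINE_KM_PER_DEVICE
        )
        for day in totals
    }


def devices_that_moved(records, day, limit_km=1.0):
    """Individual view: the same data flags specific devices that left home."""
    return [device for device, d, km in records if d == day and km > limit_km]


print(community_mobility(pings))                 # aggregate report
print(devices_that_moved(pings, "2020-04-01"))   # ['device_a', 'device_b']
```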

Existing health privacy laws compound our vulnerability to classic privacy threats from both government and private actors. The law that protects health information privacy and security, HIPAA, permits disclosure of protected health information to public-health authorities for specified public-health purposes. That’s entirely appropriate, but there is no corresponding law prescribing privacy practices for data collected in the emergency public-health context. Ostensibly, the federal Privacy Act of 1974 and corresponding state laws govern data collection for public-health surveillance, but those laws include broad exceptions for law enforcement activities, and they don’t apply to records collected and maintained by private entities.

Moving the conversation beyond data leakage and mission creep

In theory, the API designed by Apple and Google to facilitate automated exposure notification eliminates concerns about data leakage and mission creep. The companies designed the system to generate nameless identifiers and store them on users’ devices rather than in a centralized database. Also, users control whether to enable the API to begin with (though employers, schools, and other institutions may demand that they do so). Public-health authorities can build apps that give users authorized codes to input when they receive a COVID-19 diagnosis, and they can specify the exposure time needed to trigger a notification to others who have been nearby. But public-health authorities won’t be able to identify specific individuals in the latter group—the API’s design simply precludes it.
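
To see what that design precludes in practice, consider the simplified sketch below. It is not Apple’s or Google’s implementation, which operates at the Bluetooth layer with rotating cryptographic identifiers; it is a minimal Python illustration of the decentralized pattern the companies describe, with all names and thresholds invented: random identifiers generated and matched on the device, only the anonymous identifiers of diagnosed users ever published, and the exposure threshold set by the health authority.

```python
# Simplified illustration of decentralized exposure notification.
# This is NOT the Apple/Google implementation; it only mirrors the pattern
# described above, with all names and thresholds invented for illustration.
import secrets
from collections import defaultdict


class Device:
    def __init__(self):
        self.my_ids = []               # identifiers this device has broadcast
        self.heard = defaultdict(int)  # identifier -> minutes of proximity

    def broadcast_id(self) -> str:
        # A fresh random identifier that reveals nothing about the user.
        rid = secrets.token_hex(16)
        self.my_ids.append(rid)
        return rid

    def record_contact(self, rid: str, minutes: int) -> None:
        # Observed identifiers stay on the device, not in a central database.
        self.heard[rid] += minutes

    def check_exposure(self, published_ids: set, threshold_minutes: int) -> bool:
        # Matching happens on the device; no one learns who was near whom.
        exposure = sum(m for rid, m in self.heard.items() if rid in published_ids)
        return exposure >= threshold_minutes


# Alice and Bob spend 20 minutes near each other.
alice, bob = Device(), Device()
bob.record_contact(alice.broadcast_id(), minutes=20)

# Alice tests positive and, using a code authorized by the health authority,
# consents to publish only her random identifiers -- not her name or location.
published = set(alice.my_ids)

# Bob's device checks locally against the authority's threshold (say, 15 minutes).
print(bob.check_exposure(published, threshold_minutes=15))  # True: notify Bob
```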

So why doesn’t the Apple/Google API pave the path for an optimal government-tech collaboration or at least a good enough solution? We think the focus on data leakage and mission creep misses the forest for the trees. We see three major problems arising from uncritical deference to insufficiently regulated platforms—problems that the Apple/Google API doesn’t begin to solve.

1. Deference to private-sector solutions lets tech firms identify and define the values at stake.

Relying on platforms to develop core parts of the pandemic response lets them define key values. That can be counterproductive. For example, permitting platforms to build their concept of “privacy” into pandemic tech might sound good, but it could create other problems that different philosophies of privacy would be better suited to solve.

Privacy is about more than just anonymity or informational self-determination, but you wouldn’t know that by looking at the design specs of the Apple/Google API. Like most tech firms and even regulators, Google and Apple generally conceive of privacy as giving people control over their data. That’s the animating value behind the “I agree” buttons that secure our consent and the “opt-in” toggle buttons meant to give us choices about and notice of data practices. But consent is a dubious way to justify data practices and control is a narrow way to think about privacy.

Privacy can also exist within a relationship of trust, and public-health surveillance is a case in point. To be effective, public-health surveillance needs to be comprehensive, not opt-in. In a democracy, comprehensive public-health surveillance requires public trust. That’s why countries with stronger data privacy laws provide exceptions allowing public-health surveillance, but then adopt public-health surveillance laws with separate, custom-designed privacy provisions intended to keep collected data siloed and secure. Such laws also typically include limits on data retention—provisions requiring the destruction of public-health surveillance records after the emergency public-health need to retain them no longer exists.

Working from their own definition of privacy, Apple and Google appear to have asked themselves, “What protections are necessary to ensure individual users retain control over their participation in proximity detection and remain anonymous to the government?” But to pave the way for an effective, comprehensive system, builders of public-health surveillance should instead be asking, “What protections are necessary to establish sufficient trust in this system to ease the flow of needed health data to and from users?”

2. Deference to private-sector solutions can ignore or exacerbate preexisting patterns of inequity.

In the United States, the pandemic has played out along racial lines, with higher rates of infection and death in communities of color, and technology-driven solutions to the contact-tracing problem threaten to entrench inequity still further. Government and societal deference to tech firms has privatized the infrastructure proposed to fight COVID-19, leaving many low-income people and communities of color excluded.

Contact-tracing apps almost certainly will rely on smartphones, but that technology isn’t equally distributed. An estimated 81 percent of American adults own a smartphone, but the 19 percent who don’t are concentrated in especially vulnerable segments of society, including a third of those who did not graduate from high school and nearly half of people over 65 (who are among those most at risk from COVID-19). People with low incomes are more likely to have older phones with limited functionality and outdated operating systems, and many children have no smartphone at all.

At the same time, there are likely to be clusters of people with greater- or lower-than-average trust in contact-tracing technology, which will require high adoption rates in order to be effective. In particular, communities frequently targeted by police or immigration enforcement have good reason to be wary of the government and of government uses of data. Immigration and Customs Enforcement in the United States, for example, has purchased commercially available cellphone location data for its enforcement purposes. A 2019 Pew Research Center report found that 56 percent of white Americans are concerned about what law enforcement knows about them, but among Hispanic and black respondents that figure is 67 percent and 73 percent, respectively. That distrust could undermine adoption of digital contact-tracing technology.

Among those who do adopt the technology, race and income disparities are likely to shape its effectiveness. People with low incomes and people of color are more likely to live in apartment buildings and in densely populated areas than their white and higher-income counterparts, who are more likely to live in houses and in suburban areas, and they are more likely to work in jobs that require close proximity to other people and offer no opportunity to engage in remote work. Those living closer together may be more likely to experience false positives, their apps erroneously detecting neighbors as exposure risks, and those working closer together may be unable to act on the notifications they receive.

In an era of profound racial distrust, such inequities threaten to worsen already strained relations between governments and historically disadvantaged communities, and between those communities and health providers. Human-centered tracing systems developed through carefully coordinated planning between public-health authorities, tech developers, and people living in affected communities are better equipped to negotiate these issues in ways that encourage trust. In part for such reasons, some influential jurisdictions have resisted proposals for smartphone-based tracing.

3. Deference to private-sector solutions ignores tech’s role in driving public polarization and amplifying the misinformation and disinformation that undermine public-health efforts.

The public relies on platforms for access to authoritative information about the pandemic—but that reliance has produced widening and deeply entrenched disagreement on both the severity of the threat and the need for a decisive public response. Stark polarization on urgent matters of public health is a direct result of the tech platform business model, which makes public polarization profitable.

The platform business model is a massive effort to use behavioral surveillance to hijack people’s attention by reinforcing their most automatic and tribal predilections and instincts. The basic goal of platforms is to keep their users maximally engaged. To do this, they rely on widespread tracking—monitoring not only what users buy, click, read, and post, but also how long they linger over particular posts and whether they read entire threads, a practice known as “passive tracking.” They track social circulation, collecting data on how often users recirculate content to each other by “liking” or retweeting it. Platforms use all of this data on user preferences and behavior to map networks of like-minded people and design target audiences. They constantly refine their maps and predictions in an effort to generate maximum yield for advertisers and maximum ad revenue for themselves.
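
The sketch below is a deliberate caricature of that logic rather than any platform’s actual ranking system, with invented data and names. It illustrates the incentive structure: candidate posts are scored by predicted engagement for each user, and the feed serves whatever keeps that user watching, regardless of what the content says.

```python
# Caricature of engagement-optimized ranking with invented data -- not any
# platform's real system. It illustrates the incentive structure described
# above: rank by predicted engagement, whatever the content says.

# Observed behavior: average seconds each user spends on posts, by topic.
dwell_time = {
    "user_1": {"vaccine_skepticism": 45.0, "public_health_guidance": 3.0},
    "user_2": {"vaccine_skepticism": 2.0, "public_health_guidance": 30.0},
}

candidate_posts = [
    {"id": "p1", "topic": "vaccine_skepticism"},
    {"id": "p2", "topic": "public_health_guidance"},
    {"id": "p3", "topic": "vaccine_skepticism"},
]


def rank_feed(user: str, posts: list) -> list:
    """Order posts by predicted engagement (here, past dwell time on the topic)."""
    history = dwell_time.get(user, {})
    return sorted(posts, key=lambda p: history.get(p["topic"], 0.0), reverse=True)


# Each user's feed reinforces whatever already holds that user's attention.
for user in dwell_time:
    print(user, [p["id"] for p in rank_feed(user, candidate_posts)])
# user_1 ['p1', 'p3', 'p2']  -> more vaccine skepticism
# user_2 ['p2', 'p1', 'p3']  -> guidance content first
```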

The result is that users see exactly what they want to see. Those who credit the well-established scientific consensus on the public-health value of vaccines, those who support a more measured and cautious reopening, and those who reaffirm the equal worth of all people see content that fits their predilections. Anti-vaxxers see anti-vaccination content, white supremacists see white supremacist content, and those who want to “liberate” their state from quarantine or believe the Constitution affords a right against compulsory mask-wearing will see content optimized to those beliefs. Think the virus is a covert Chinese attack or a Democratic conspiracy to destroy the Trump administration? No problem; there are Facebook groups and curated newsfeeds, Instagram influencers, YouTube channels, and Twitter trending hashtags for you. All support lucrative advertising markets, and platforms drive traffic to all of them in order to maximize user engagement. As a 2018 internal Facebook review concluded, “our algorithms exploit the human brain’s attraction to divisiveness.”

The threats that these polarization engines pose to our long-term public health are stark. When some communities treat noncompliance with public-health recommendations as a badge of honor—and when many in those same communities learn to see COVID-19 as a hoax, or as a disease of the minority urban poor who represent the political enemy—we are guaranteed both an unending series of future outbreaks and further disintegration of the fabric of our democracy. At scale, the results will be catastrophic.

Twitter’s baby steps toward policing official disinformation—flagging a few tweets by President Donald Trump with fact-check notices—and Facebook’s attempt to provide users whose content is removed with a measure of redress by launching a content moderation oversight board are cosmetic moves. The oversight board has no authority to require Facebook to do anything, and it considers only challenges to information that Facebook has taken down—not challenges to information that Facebook deliberately chooses to leave up. Facebook profits from the viral spread of all types of controversial, polarizing information, and it has no interest in making real changes to that model, even though it has long been aware of the model’s effects on public discourse and the democratic process. Twitter, for all of its protests, is in the same boat. Accountability theater is not real accountability.

Existing legal frameworks compound our vulnerability to the harms resulting from platform-amplified misinformation and disinformation. The law that immunizes platforms from liability for their content moderation decisions, Section 230 of the Communications Decency Act, was enacted before the platform business model emerged. It doesn’t address platform uses of personal information to polarize public discourse, and participants in the ongoing debates about the adequacy of platforms’ voluntary efforts at content moderation generally have preferred not to consider such questions.  

Two recommendations on the way forward

  1. We cannot effectively respond to the pandemic by haphazardly privatizing essential functions of public institutions. Right now, governments and tech firms seem to have different goals, strategies, and understandings (or misunderstandings) of key values. What is needed is collaboration around human-centered public-health surveillance, which requires strong data protection that is mandated by law and extends into emergency response systems. The bipartisan bill recently introduced in the Senate to regulate the privacy practices of private-sector automated notification systems is a good start, but it can’t substitute for better and more accountable collaboration between tech firms and public-health officials or for comprehensive framework legislation on public-health surveillance.
  2. Effective pandemic response also requires regulating the platform business model. Representative Anna Eshoo’s proposed ban on microtargeting in political advertising is a big step in the right direction. But most disinformation flows virally and consensually, circulated within affinity groups and networks and amplified by design decisions that elevate controversy and outrage because doing so is profitable, and most disinformation originates outside the clearly defined domains of political advertising. Behavioral surveillance and targeted amplification are forms of manipulation akin to many other practices that consumer protection laws already prohibit, and they also represent a large-scale assault on the institutions and knowledge structure of a stable democracy. There is ample reason to prohibit them in all contexts.

Consider a world in which governments and tech firms work together seamlessly and benevolently to fight the spread of COVID-19, backstopped by regulation designed to ensure equity, privacy, and accountability. Public-health officials use automated proximity-detection systems that have been designed to complement human-centered contact tracing, and public-health surveillance legislation mandates strong protection for the personal information collected by those systems. Substantive and structural rules protect against mission creep between public-health activities, other government activities such as policing, and commercial data exploitation practices. Social media facilitate the widespread release of government public service announcements that include clear information about the virus, the disease, and recommended mitigation strategies, and comprehensive privacy legislation strictly prohibits behavioral microtargeting and constrains targeted amplification of social media content.

That is the world we all deserve.

Julie E. Cohen is the Mark Claster Mamolen Professor of Law and Technology at Georgetown Law and the author of Between Truth and Power: The Legal Constructions of Informational Capitalism.
Woodrow Hartzog is Professor of Law and Computer Science at Northeastern University and the author of Privacy’s Blueprint: The Battle to Control the Design of New Technologies.
Laura Moy is Associate Professor of Law and Director of the Communications and Technology Law Clinic at Georgetown Law.

Apple, Facebook, Google, and Twitter provide financial support to the Brookings Institution, a nonprofit organization devoted to rigorous, independent, in-depth public policy research. 
