In the wake of the U.S. Supreme Court’s decision to overturn Roe v. Wade last summer, journalists and privacy advocates alike quickly sounded the alarm about the potential for prosecutors to use commercially collected data in abortion-related cases.
Fortunately, that concern has already translated into political action. Legislators in California recently passed A.B. 1242, a law that gives California-based tech and communications companies a way to resist out-of-state requests for data on digital activities tied to abortion investigations. The law is thus the first in the nation to explicitly block out-of-state investigators from using digital information to probe abortion-related actions that are legal in California. Meanwhile, President Biden has directed the chair of the Federal Trade Commission to “consider taking steps to protect consumers’ privacy when seeking information about and provision of reproductive health care services.”
Yet ensuring that private data is not misused in abortion-related cases is not the responsibility of policymakers alone. Technology firms also have a critical role to play. As our digital lives lead to evolving social norms about privacy and security, tech firms need to respond to activists, investors, consumers, and the broader public in order to maintain their license to operate. Taking action to stay in tune with social norms may require a combination of shifting data practices toward minimization, implementing end-to-end encryption for private communication, fostering adoption of third-party trustmarks for privacy and security, and producing better transparency reports.
Scope of the problem
The risk that prosecutors will seek to exploit digital information in abortion-related cases is far from hypothetical. In fact, it’s already become reality: as Forbes reported, a teenager and her mother in Nebraska are facing criminal charges for undertaking an illegal abortion after Facebook shared their private messages and other data with law enforcement officials who served a warrant.
The Nebraska case comes just weeks after an investigation recounted in The Markup found that Facebook was not only collecting data from users interacting with crisis pregnancy center websites, but that anti-abortion marketing companies had gained access to the data using Meta’s Pixel advertising tool.
A key concern for privacy experts is that law enforcement agents in states that have banned or heavily restricted abortion will serve warrants on app developers, search engines, data brokers, website operators, wireless carriers, operating system developers, and other businesses that hold information that could be used to infer either a person’s pregnancy or their intent to seek or aid an abortion. The potential sources of incriminating data online are effectively countless, and include everything from online search histories to private communication channels.
Given those risks, businesses have a responsibility to safeguard the privacy and security of their users—and their boards, investors, and the broader public have a responsibility to hold them accountable. Just as companies are expected to uphold strong standards for environmental, social, and governance practices, they should face reputational and financial risk for failing to protect consumer data.
Well-intentioned experts have advised the public to be thoughtful about how their digital trails could reveal their behavior, but the burden should not be placed on consumers to protect their privacy. Most Americans cannot stop using digital services in their everyday lives. Yet most also lack the expertise to assess the risks those services pose, or to know what steps they should take to protect their privacy online.
What can be done
Although protecting user privacy is far from straightforward, there are several basic steps that tech companies can take to protect consumers from being prosecuted for abortion-related (and other) activity. In particular, firms should focus on the following measures:
Take a ‘less is more’ approach
Companies often do not need to collect and store as much data as they do. Firms can reduce the amount of sensitive data they need to protect from hacks or leaks by practicing “data minimization”—collecting only the data that is needed, using collected data only for authorized uses, and retaining as little of that data as possible.
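To make the practice concrete, a data-minimization policy can be enforced in code: store only an allow-listed set of fields, and purge records once a retention window lapses. The sketch below is hypothetical; the field names, the 30-day window, and the function names are illustrative assumptions, not drawn from any particular firm’s practice.

```python
from datetime import datetime, timedelta, timezone

# Illustrative allow-list: the only fields the service actually needs.
ALLOWED_FIELDS = {"user_id", "message_id", "timestamp"}
RETENTION = timedelta(days=30)  # hypothetical retention window

def minimize(record: dict) -> dict:
    """Drop every field not on the allow-list before storing a record."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

def purge_expired(records: list[dict], now: datetime) -> list[dict]:
    """Keep only records newer than the retention window."""
    cutoff = now - RETENTION
    return [r for r in records if r["timestamp"] >= cutoff]
```

Under such a policy, sensitive extras like location or browsing context are never written to storage in the first place, and even the allow-listed fields age out automatically, shrinking what any warrant could reach.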
A “maximalist” approach makes data harder to secure and raises both operational and regulatory risk. Many companies store hundreds of millions of records without a clear business reason, yet not all of that exposure is insurable: even the largest cyber insurance policies written to date fall short for breaches involving more than roughly 4 million records. Beyond insurance, voluminous data is a regulatory concern. Under the U.S. Federal Trade Commission (FTC) Act, retaining data for longer than necessary for legitimate business or legal reasons can be considered an unfair practice.
To be good stewards of the data they collect, companies need to shift encryption norms, such as by providing end-to-end encryption by default on messaging apps and email. This shift requires careful balancing of human rights risks and opportunities. End-to-end encryption enables freedom of expression, belief, association, and access to information, but bad actors can also exploit encryption to traffic vulnerable people, victimize children, spread hate speech, or facilitate other ills. To help firms mitigate the potential adverse human rights impacts of implementing end-to-end encryption, Business for Social Responsibility published a set of recommendations based in part on a large-scale assessment it performed for Meta. For the Nebraska teen and her mother, default end-to-end encryption would have made it impossible for Facebook to hand over the contents of their private messages to local police.
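The protective property of end-to-end encryption can be shown with a toy sketch: each endpoint derives a shared key that the relaying server never sees, so the server can store and forward only ciphertext. This is a deliberately simplified illustration, not real cryptography; production systems use vetted protocols such as the Signal protocol, and the prime, the XOR keystream, and all function names here are stand-ins.

```python
import hashlib
import secrets

# Toy Diffie-Hellman parameters (illustrative only; not a vetted group).
P = 2**255 - 19
G = 2

def keypair():
    """Generate a private exponent and the public value G^priv mod P."""
    priv = secrets.randbelow(P - 2) + 2
    return priv, pow(G, priv, P)

def shared_key(priv: int, other_pub: int) -> bytes:
    """Both endpoints compute the same key; a relay server never can."""
    return hashlib.sha256(pow(other_pub, priv, P).to_bytes(32, "big")).digest()

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Toy XOR keystream cipher; encryption and decryption are identical."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))
```

A relay that holds only the public values and the ciphertext cannot recover the plaintext; that is the property that would have left Facebook with nothing readable to produce.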
Move toward ‘trustmarks’
Trustmarks are badges, logos, seals, or labels offered by a third-party authority. Familiar examples include the FDA’s nutrition labels and the EPA’s ENERGY STAR label.
The technology industry should accelerate ongoing efforts to develop consumer-friendly trustmarks for the privacy and security of internet-connected devices and services. UL Solutions, a firm that conducts safety science research and translates it into standards, is developing a trustmark for the internet of things (IoT). Meanwhile, the National Institute of Standards and Technology (NIST) is undertaking a pair of consumer cybersecurity labeling initiatives, following President Biden’s Executive Order on Improving the Nation’s Cybersecurity. Companies should seek out opportunities to help shape and amplify these and other proposed trustmarks, and find new ways of communicating security and privacy practices to consumers. Communication must be a central effort if tech companies are to rebuild customers’ trust.
Improve transparency reporting
Transparency reporting is a form of voluntary disclosure that was pioneered by Google in 2010. Initially established as a process for publishing data about government requests for content takedowns, transparency reporting has since expanded to other internet and telecommunications firms to include information about third-party requests for user data, including those from law enforcement.
Still, the content of transparency reports varies widely among firms. To better protect digital privacy after Roe, companies should improve their transparency reports to provide more decision-useful and comprehensible information. Firms should provide more granular detail about the requests they have received and how they have complied with those requests (at a state level, as Twitter already does). Greater disclosure about which requests a firm received, through which legal mechanisms, under which statutes, and how it responded could help individuals and the public better understand their personal risk.
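A minimal sketch of what more granular reporting could look like, assuming a hypothetical internal log of requests tagged with state, legal mechanism, and compliance outcome; all names and data below are invented for illustration.

```python
from collections import Counter

def summarize(entries):
    """Aggregate request counts and compliance by (state, mechanism)."""
    totals, complied = Counter(), Counter()
    for state, mechanism, did_comply in entries:
        totals[(state, mechanism)] += 1
        complied[(state, mechanism)] += int(did_comply)
    return {k: {"requests": totals[k], "complied": complied[k]} for k in totals}

# Invented sample log for illustration.
log = [
    ("NE", "search warrant", True),
    ("TX", "subpoena", False),
    ("NE", "search warrant", True),
]
```

Published at this granularity, a reader in Nebraska could see how often warrants from their state succeed, which is exactly the kind of personal risk information that aggregate national totals obscure.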
Activists, regulators, investors, and members of the public have already caused massive shifts in corporate behavior related to environmental, social, and governance practices. It’s now time to focus on pressing for a paradigm shift in firms’ data privacy and security stewardship—particularly in the face of the U.S. Supreme Court’s seismic decision to overturn Roe v. Wade.
Jordan Famularo is a postdoctoral scholar at the University of California, Berkeley’s Center for Long-Term Cybersecurity.
Richmond Wong is an Assistant Professor of Digital Media at the Georgia Institute of Technology’s School of Literature, Media, and Communication.
Meta and Google provide financial support to the Brookings Institution, a nonprofit organization devoted to rigorous, independent, in-depth public policy research.