
Examining the intersection of data privacy and civil rights

Editor's note: A previous version of this piece erroneously stated that Amazon Ring doorbells used facial recognition technologies.

For historically marginalized groups, the right to privacy is a matter of survival. Privacy violations have put these groups at risk of ostracization, discrimination, and even active physical danger. These tensions long predate the digital age. In the 1950s and 1960s, the government used surveillance programs to target Black Americans fighting against structural racism, with the Federal Bureau of Investigation’s (FBI) Counterintelligence Program (COINTELPRO) targeting Dr. Martin Luther King, Jr. and members of the Black Panther Party. During the HIV/AIDS epidemic, LGBTQ+ individuals feared that, under an employer-based healthcare system, employers would learn of a doctor’s visit for HIV/AIDS and that they would then face stigma at work or risk losing their jobs.

Under modern-day surveillance capitalism, interested parties can collect and monetize online data at an unprecedented scale with little scrutiny or limitation. That is why the recent overturning of Roe v. Wade highlights the urgent need for comprehensive federal privacy legislation, particularly to reduce the potential for further exploitation and manipulation of individuals seeking reproductive care. Further, Congress needs to find consensus around federal privacy legislation that also addresses other surveillance and data collection concerns, in particular commercial surveillance practices that enable discriminatory advertising, racially biased policing, and the outing or surveillance of historically marginalized groups.

Privacy and abortion rights

With Dobbs v. Jackson Women’s Health Organization overturning the precedent set by Roe v. Wade, individuals seeking abortions are put at risk by existing unfettered data collection practices. Since the ruling, many have drawn attention to reproductive health apps. Used by individuals to track their menstrual cycles and reproductive health, these apps also collect data that could be used to determine an individual’s pregnancy status. Notably, the period-tracking app Flo settled with the Federal Trade Commission (FTC) in 2021 after sharing user health data with firms including Facebook and Google. Other period-tracking apps have also been known to sell data to third parties for targeted advertising.

These privacy risks extend beyond apps designed for reproductive health management. Judges have based past convictions of abortion seekers on evidence collected from people’s location data, text messages, and online activity. The data broker SafeGraph sold aggregated phone location data covering visits to more than 600 Planned Parenthood clinics. In June of this year, it was also revealed that Facebook had been collecting data on individuals visiting the websites of crisis pregnancy centers. Internet searches can likewise be used to incriminate individuals: in 2017, prosecutors used a Mississippi woman’s online search for abortion drugs as evidence in a trial over the death of her fetus, and in another case, in Indiana, a woman was convicted based in part on text messages to a friend about taking abortion pills.

Without a federal privacy mandate, location, text, and app data remain vulnerable to exposure and exploitation amid the current controversies over reproductive rights.

Privacy and LGBTQ+ populations

For the LGBTQ+ community, many of whom do not publicly disclose their gender identity or sexuality due to potentially dangerous consequences, data collection and targeting have become critical matters of safety and equal opportunity. For example, the lax privacy policies of some dating apps have placed LGBTQ+ users at risk. A Catholic news outlet obtained Grindr’s location-based data and used it to track a phone belonging to a closeted Catholic priest, who later resigned from his position. Grindr also forwarded user data to potentially hundreds of third parties and shared HIV health data with two outside companies. Beyond dating apps, technologies such as dockless bikes and scooters collect location data that can put LGBTQ+ individuals at risk, especially if it shows that a person has visited a gay bar or attended LGBTQ+ activity groups. And for LGBTQ+ children and teens in intolerant families, technology services such as parental surveillance tools could out them to their families based on their online searches or activities.

Privacy and targeted advertising

While companies like Facebook have recently announced new non-discrimination efforts, ad-targeting systems can encode historical patterns of systemic discrimination as people search or gain eligibility for certain products and services. Poorly designed algorithms can also perpetuate these biases in housing, employment, and banking ads. In 2015, researchers from Carnegie Mellon University found that Google’s ad algorithms showed higher-paying job advertisements to men more often than to women. Facebook’s targeted advertising options allowed companies such as Uber to show job openings only to young men, excluding female, non-binary, and older male job seekers.

Such discrimination extends to housing and other essential services. Online redlining has been documented among mortgage brokers who used cookies to offer higher interest rates to African Americans and Latinos based on data collected about user behavior and location. In 2019, the U.S. Department of Housing and Urban Development (HUD) charged Facebook (now Meta) with housing discrimination for allowing advertisers to keep people from seeing housing ads based on protected characteristics (including race).

Privacy and religious minorities

Muslim Americans have faced increased scrutiny as many aspects of their privacy have been ignored in the name of national security. A prayer app called Muslim Pro and a dating app called Muslim Mingle, among other apps, sold their users’ personal location data to the U.S. military and defense contractors. Out of 50 Muslim prayer apps examined, only five encrypted personal data in any way, while almost all shared data with third parties. The NYPD has used digital means as well, tracking Muslims’ location data and name changes and analyzing them as potential signs of “radicalization.” The online surveillance of religious minorities, often without any suspicion of wrongdoing, demonstrates just how widespread the abuse of personal and interest-specific data is among both public and private sector actors in the existing digital economy.

Privacy and activists

Activist groups like Black Lives Matter have been blatantly surveilled due to the lack of data privacy laws. Law enforcement officials can collect or subpoena social media and location data, undermining the civil rights of activists and protesters. During the 2020 Black Lives Matter protests that erupted after the death of George Floyd, the FBI used a geofence warrant to collect location data from Android phones that had passed through the area near the headquarters of the Seattle Police Officers Guild (SPOG), which had been set on fire during the protests. Further, documents obtained by The Intercept indicate that the Department of Homeland Security had been surveilling and collecting data on Black Lives Matter activists since the 2014 protests. While the surveillance of Black communities is not new, the wealth of online data has grown exponentially even as user privacy has eroded, and without guardrails on acceptable and appropriate use, these practices will continue to thwart the efforts of civil society organizations.

Privacy and policing

While most current data privacy laws focus on how companies handle individuals’ data, legislators should not forget the dangerous impacts that unregulated surveillance programs have on civil rights. Law enforcement facial recognition networks include images of over 117 million American adults, and one out of four state and local law enforcement agencies can access facial recognition technology. Private companies like Clearview AI, a prominent commercial provider of facial recognition technology, have been able to scrape billions of publicly available images from websites, as have similar commercial companies and various data brokers. Smart doorbells pose surveillance risks as well: Amazon’s Ring has partnered with local law enforcement agencies, allowing them to request video footage from these devices without a warrant, while Google’s Nest Doorbell uses facial recognition technology. The misuse of such data, coupled with a range of other surveillance tools, can result in the wrongful arrest of innocent civilians, especially Black and Hispanic individuals.

U.S. privacy policy must tackle these use cases

These and other use cases point to the urgency of moving forward with privacy protections that uphold contemporary civil rights. To be more effective in addressing, and potentially redressing, these and other consumer harms, the following might be considered as plausible conditions or contexts for any pending legislation, including the American Data Privacy and Protection Act (ADPPA), for which a bipartisan group of legislators released a discussion draft earlier this year.

Improve and offer strategies for more consumer agency

Privacy advocates and civil rights organizations have developed comprehensive resources on how individuals can better protect their online privacy. The Electronic Frontier Foundation’s Surveillance Self-Defense toolkit explains how individuals can maximize their online privacy by adjusting their social media and device settings, and the New York Times offers concise pointers on digital privacy protection. Some key tips include:

  • Looking through and changing social media privacy settings to limit publicly available data
  • Using ad and third-party cookie blockers to limit data collected through targeted advertising
  • Reviewing the data collected by devices and installed apps, and changing settings to limit access to location, photos, contacts, and other potentially sensitive information

Pending privacy legislation must both mandate consumer outreach about these potential harms and ensure “privacy by design” in the architecture of new and emerging software and other technologies. Doing so will empower consumers around their online footprint and give them more tools to manage their online data.

Ensure data minimization and clarity in terms of service

Existing terms of service are difficult to navigate. Even when users would rather not have certain applications and websites track their online activity and location data, they often struggle to interpret the plethora of agreements attached to specific products and services.

Given arguments that consent frameworks have outlived their purpose, it may be time to discuss how to strengthen disclosures and other forms of transparency to ensure that consumers understand what they are opting in to. The ADPPA covers important ground in this regard: the legislation calls for data minimization while providing individuals with the means to delete data associated with them and to opt out of targeted advertising and third-party data transfers. Brookings scholar Cam Kerry recently published a piece connecting the dots on how these measures could better protect reproductive privacy, and these protections could extend to other ongoing civil rights concerns.

Continue to scrutinize how data privacy shapes algorithmic bias

Issues of data privacy and algorithmic bias are interlinked, as exhibited by examples of discriminatory advertising targeting women and people of color and the inaccurate use of facial recognition technologies to criminalize people of color. The ADPPA also includes provisions on discriminatory algorithms. The federal government has likewise been working to tackle problems with AI bias, for example through guidelines from the National Institute of Standards and Technology (NIST) and the AI Bill of Rights proposed by the Office of Science and Technology Policy. Federal agencies have also developed independent initiatives to combat AI bias, such as the Equal Employment Opportunity Commission’s work to mitigate disparities in AI hiring. Conversations on privacy and civil rights must tie back to algorithmic biases and the adverse impacts they have across different sectors of society.

The U.S. is currently at a crossroads in agreeing upon and acting on existing legislative drafts that would define acceptable data privacy practices. In considering such changes, the existing contexts of discrimination should matter. At the end of the day, privacy should not be a privilege or a price to be paid for commercial profits. It is about the protection of vulnerable populations: of their dignity and their survival.


Alphabet, Amazon, Facebook, and Google are general, unrestricted donors to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the authors and not influenced by any donation.

“Control Your Personal Data Stream” by Elio Reichert is licensed under CC BY 4.0
