Commentary

Addressing overlooked AI harms beyond the TAKE IT DOWN Act

December 11, 2025


  • The TAKE IT DOWN Act, signed more than six months ago, stands among the first federal laws to meaningfully address AI-related harms.
  • While the law targets NCII—a prominent harm that has drawn considerable state-level attention—many less visible risks remain largely unaddressed.
  • Policymakers must build stronger governance and enforcement systems that can identify and respond to these quieter risks as effectively as they do the more graphic ones.
Illustration: one person types code on a laptop while another, wearing a lab coat and safety goggles, carefully lifts an orange hazard label.
Yasmin Dwiputri & Data Hazards Project / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/

It was stories like that of Elliston Berry, who was 14 years old when a classmate used artificial intelligence (AI) to generate nude images of her, that helped galvanize bipartisan support for the TAKE IT DOWN Act. Just under a year after her testimony, the act passed the House of Representatives 409-2 and was signed by President Donald Trump on May 19, 2025, criminalizing the knowing publication of non-consensual intimate imagery (NCII). Amid myriad concerns about the advent of AI, the act could be considered this Congress’ first bipartisan legislative action directly targeting harms from AI-generated content.

It is unsurprising that this bill was an early success, given bipartisan concern for the privacy and consent of minors. Both parties have long sought to address the harms social media can inflict on young people, signaling that lawmakers might be willing to protect individuals from public embarrassment and bullying. First Lady Melania Trump, who made cyberbullying a focus during the first Trump presidency, also publicly supported the bill.

But in the six months since the act was signed into law, other federal bills addressing AI harms have struggled to see the same level of support despite lawmakers introducing many even before the rise of generative models. Congress should build upon the protections the TAKE IT DOWN Act offers, providing redress to victims of less visible, but oftentimes equally troublesome, harms from AI-facilitated discrimination. 

What is the TAKE IT DOWN Act?  

The law introduces criminal penalties for the publication of NCII (including “digital forgeries,” such as deepfakes and digital manipulation) and requires platforms to set up a process to remove such content (and make reasonable efforts to remove known identical copies) within 48 hours of a valid written notification from a victim. While the criminal penalties took effect upon the bill’s signing, platforms have until May 19, 2026, to implement their takedown processes, which will be enforced by the Federal Trade Commission (FTC). People who violate the act are subject to fines or imprisonment for up to two years, and these penalties increase if the content depicts minors. These mechanisms for protecting children online were widely lauded, even by a variety of organizations that opposed the bill as written out of concern that the removal process implicates the First Amendment and could result in overbroad censorship.

Legislative action on NCII is still necessary and overdue, but it should be seen as a starting point of congressional action on mitigating harms related to AI.  

The drivers for addressing high-visibility harms 

In the 118th Congress, lawmakers introduced more than 150 AI-related bills, yet none saw the same level of support as the TAKE IT DOWN Act. Some of the more popular measures included the Algorithmic Accountability Act, originally proposed in 2019, which would require companies using automated decisionmaking systems (including AI) to conduct impact assessments for bias, fairness, privacy, and security. Another effort was the National AI Commission Act, which would establish a bipartisan commission to review the U.S. approach to AI regulation and recommend regulatory structures. Other bills were introduced to address algorithmic discrimination, harms to civil rights, information integrity, elections, and personality or likeness rights.

The TAKE IT DOWN Act had been in the legislative queue since June 2024. Yet, likely due to its focus on a highly visible, emotionally salient harm that was easily explainable to the public, the bill garnered more attention and led to swifter legislative action than other proposals.

By July 2025, at least 47 states had enacted one or more laws regulating deepfakes, although the scope and remedies varied. Many of these state laws seek to provide victims of NCII or revenge porn with recourse, similar to the TAKE IT DOWN Act, allowing those targeted to identify and mitigate harm in real time and reduce reputational risk. Aside from the focus on victims, there is often a strong emphasis on children’s safety and preventing irreparable damage to them, similar to other bills garnering attention in Congress like the Kids Online Safety Act (KOSA).

KOSA and the TAKE IT DOWN Act implicate speech in similar ways, with some organizations expressing concern that enforcement will be used to block or remove certain speech. When the takedown process goes into effect next year, its enforcement will show whether the act effectively protects constituents or is instead used to target specific content, an approach that could quickly draw legal challenges. There are also unanswered questions about the act’s intersection with Section 230, which generally provides platforms immunity for user-generated content, as the text does not seek to amend these protections.

When the harm is less visible 

Algorithmic discrimination is a persistent and well-documented harm, impacting people’s quality of life, including in housing, credit, policing, and hiring decisions—well before the rise of AI-generated deepfakes of celebrities online.

Vulnerable populations have been and continue to be systemically and disproportionately impacted given the bias of the data fed into these systems. For instance, housing algorithms typically weigh a variety of factors and information when deciding whether to deny or approve a potential tenant, but these algorithms’ outputs have included notable mistakes, such as denials based on criminal histories even when a charge was dismissed or expunged, or cases where the software mistakenly matched an applicant to someone else’s record. Even when these tools don’t flatly get something wrong, they rely on factors like credit scores or eviction histories that reflect historical discrimination and can more often deny people of color loans or issue them higher rates.

Similar results are well documented in the hiring process. Research has repeatedly shown resume screening technology can discriminate against applicants due to race, gender, and the intersection of these identities. Yet, given the variety of considerations that affect the hiring process, applicants will likely never know why they didn’t move forward for a position and lack a statutory right to a specific explanation. 

In these less visible use cases, proving algorithmic discrimination is more difficult. Enforcement can be slow and arduous, requiring a longitudinal compilation of evidence of discrimination and sustained effort to persuade regulators and lawmakers to act against the harm. Amid the current rollbacks of protections against discrimination and the disparate impact standard, this process has only become more difficult and opaque.

Thus, these outcomes remain overshadowed. While people have long been aware of computers and algorithms shaping decisions, the release of ChatGPT in 2022 was the first time many directly encountered generative AI—and had personal access to it. Soon after, online users could watch image-generation systems evolve in real time, with outputs becoming increasingly difficult to distinguish from human-created images.

But amid this change, people subjected to algorithmic discrimination still had little to no visibility into whether a model had discriminated against them or otherwise impacted their lives. Congress must do the tough work of addressing all AI harms, not just the threats that are more visible. AI models have often been described as “opaque” or “black boxes,” but that does not mean we can ignore risks that we cannot see.

Policymakers must build stronger governance and enforcement systems that can identify and respond to these quieter risks as effectively as they do the more graphic ones. These measures should give victims of both types of harm the agency the TAKE IT DOWN Act provides. Requiring companies to conduct ongoing impact assessments of automated decision systems used in critical areas would more explicitly test for material negative impacts and document how such harms are mitigated. Just as consent is a problem with deepfakes, people are often subjected to the whims of tech companies and algorithms without a choice and without recourse to address how they’ve been harmed.

Where to go from here 

Protecting Americans requires more than safeguards against the most visible problems: It means bolstering data access, auditing regimes, enforcement capacity, and transparency, and recognizing a broader range of harms in legal frameworks.

In addition, legislative attention is needed for stronger whistleblower protections. Employees and contractors are often the first to identify algorithmic harms, given their access to proprietary data, but may lack the legal protections to report them or face company retaliation. Strengthening whistleblower channels, especially around trade secret and retaliation protections, could increase transparency on harms that otherwise would be more difficult to spot. 

At the state level, attorneys general in several jurisdictions have issued guidance on how emerging AI technologies can still perpetuate algorithmic discrimination and violate existing antidiscrimination laws. State legislators have also been active: Colorado and Illinois both have laws with protections against AI-facilitated discrimination, and legislative proposals have been floated in California, Connecticut, Virginia, and Texas, among other states. Many of these state efforts are heavily influenced by the EU AI Act, which requires impact assessments and auditing for AI systems to evaluate potential discriminatory harms. But with recent congressional and executive threats to preempt state efforts to regulate AI, these actions may only extend so far. More robust protections can provide additional transparency and agency in the face of such discrimination.
