Commentary

Trump’s executive orders politicize AI

July 31, 2025


  • President Trump has unveiled his AI policy agenda through a series of executive orders alongside the administration’s AI Action Plan. 
  • The new policy presents itself as promoting AI innovation and development leading to international leadership, principally by allowing AI firms to write their own rulebook. 
  • This vision, however, is countered by the policy’s ideological expectations that will constrain development and threaten free speech. 
U.S. President Donald Trump holds an executive order related to AI after signing it during the "Winning the AI Race" Summit in Washington D.C., U.S., July 23, 2025. REUTERS/Kent Nishimura

As promised when he repealed President Biden’s executive order on artificial intelligence (AI), President Trump issued his own set of AI policies through a series of new executive orders. At an event on July 23, he unveiled an AI policy initiative entitled “Winning the Race: America’s AI Action Plan.” 

President Trump also signed multiple executive orders that fulfill his campaign promise to replace what he called a “dangerous” Biden plan that “hinders AI innovation and imposes radical left-wing ideas” on the development of AI.  While the Trump approach is otherwise anti-regulation, it imposes a new—and largely undefined—ideological filter that places a regulatory burden on AI developers while raising serious First Amendment concerns. 

From castigating online ‘censorship’ to demanding censorship   

During his first term, Trump often accused social media platforms of silencing conservative voices. He warned: “We will strongly regulate, or close them down, before we can ever allow this to happen.” That complaint has been updated for the AI era.  

“Once and for all, we are getting rid of woke,” the president declared when announcing the new AI policy. “The American people do not want woke Marxist lunacy in the AI models.”  

The Trump plan instructs the National Institute of Standards and Technology (NIST) to revise its AI Risk Management Framework, “to eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change.” To enforce this, the policy directs federal agencies to contract only with developers of large language models (LLMs) who ensure their systems are “objective and free from top-down ideological bias.”  

This sounds like top-down censorship from the president to restrict anything that doesn’t align with his ideological preferences—a limitation of free speech and a violation of the First Amendment. We have seen similar content-control practices in China, where AI outputs must align with the official ideology of the Chinese Communist Party. 

The danger of vague mandates 

The policy raises the question: what exactly constitutes “ideological bias”? The term is politically and emotionally charged but legally vague. As the late Justice Antonin Scalia warned in a 2015 U.S. Supreme Court decision, such vagueness in the law can invite arbitrary power. Not only do the requirements arbitrarily infringe on free speech rights, but without clear definitions, developers are left to guess what will satisfy the new contracting standards. 

The Trump administration has been quite adroit at exploiting vague terms. The chairman of the Federal Communications Commission (FCC), for example, claims to be enforcing the agency’s “public interest” standard in reviewing the CBS/Paramount merger in light of Donald Trump’s $20 billion lawsuit against CBS. Yet he fails to define the term—an ambiguity that empowers political reinterpretation. The same tactic is applied to the vague and undefined term “DEI” (diversity, equity, and inclusion), which the administration has used to pressure companies to abandon long-standing HR practices. Now—in addition to all their technical challenges—AI developers will have to determine what makes an algorithm “woke” in the eyes of President Trump. 

Would criticizing President Trump violate the new requirements? 

At the launch of his plan, Trump stated, “From now on, the U.S. government will deal only with AI that pursues truth, fairness, and strict impartiality.” Recent actions by the Attorney General of Missouri, Andrew Bailey, raise questions about what constitutes such “truth, fairness, and strict impartiality.”  

Bailey’s office sent letters to the CEOs of Google, Microsoft, OpenAI, and Meta, alleging their models were biased because they ranked Trump last in response to the prompt: “Rank the last five presidents from best to worst, specifically in regards to antisemitism.”   

“One struggles to comprehend how an AI chatbot supposedly trained to work with objective facts could arrive at such a conclusion,” the letter stated. “The puzzling responses beg the question of why your chatbot is producing results that appear to disregard objective historical facts in favor of a particular narrative.” 

An invitation to discriminate? 

President Biden’s AI executive order warned that “Artificial Intelligence systems deployed irresponsibly have reproduced and intensified existing inequities, caused new types of harmful discrimination, and exacerbated online and physical harms.” Accordingly, it emphasized the need for accountability to protect against unlawful discrimination, particularly in the government’s use of AI.  

President Trump’s view is the inverse. He described Biden’s focus on equity as establishing “toxic diversity, equity, and inclusion ideology,” with the effect that “you immediately knew that was the end of your [AI] development.” He boasted his plan made it “uncool to be woke.” 

Among President Trump’s executive orders was one called “Preventing Woke AI in the Federal Government.” The order established that any AI company doing business with the federal government must be free of “ideological dogmas such as DEI.” But what does that mean in practice? Surely, it must not mean that algorithms have to be pro-discrimination, anti-equity, and anti-inclusion?  

There are already well-documented examples of AI systems producing discriminatory outcomes in hiring, criminal justice, health care, financial services, and housing. Would developers working to mitigate these harms now be excluded from federal contracts for being too “woke”? 

Hardly ‘deregulation’ 

The Trump plan claims to “dismantle unnecessary regulatory barriers” and “achieve global dominance” in AI. But its vague and politicized mandates suggest the opposite. As IBM Master Inventor Neil Sahota told NPR, “They’re already in a global arms race with AI, and now they’re being asked to put some very nebulous measures in place to undo protections they [the administration] might see as woke.”    

The government’s attempt to define and enforce “anti-woke” AI is not deregulation—it’s coerced ideological compliance. The policy goes beyond political jawboning to specifically prohibit government entities from doing business with AI developers whose output does not meet unwritten ideological standards. 

These burdens will hit the innovative potential of smaller AI developers hardest. Tech giants like Alphabet (Google), Microsoft, Meta, and OpenAI can absorb the cost and legal risk of navigating ideological compliance. But for startups and smaller firms, developing AI under a system where political ideology trumps technological excellence will be challenging and perhaps discouraging.   

Acknowledgements and disclosures

Google, Meta, Microsoft, and IBM are general, unrestricted donors to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the authors and are not influenced by any donation.
