

What to make of the Trump administration’s AI Action Plan

U.S. President Donald Trump delivers remarks on artificial intelligence at the "Winning the AI Race" Summit in Washington D.C., U.S., July 23, 2025. REUTERS/Kent Nishimura

Last week, the Trump administration revealed its AI Action Plan. President Donald Trump first directed high-level officials to develop the plan in one of his early executive orders, issued January 23, 2025, seeking a roadmap to “sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.” During the plan’s development, over 10,000 public comments were submitted, and the final document is organized around three pillars for advancing American AI leadership: innovation, infrastructure, and global influence. According to the plan, achieving the goals under these pillars will require building AI infrastructure, leading in international AI diplomacy, creating quality jobs through AI for American workers, designing systems free from ideological bias, and preventing AI misuse by malicious actors. In this piece, scholars from across Brookings unpack the contents of the plan and discuss its implications in the U.S. and abroad.

Sorelle Friedler

Plans to interfere with AI-generated speech are not neutral

The AI Action Plan states a goal of encouraging “freedom of speech” as AI is increasingly used to generate and shape how we communicate. This is an important goal, yet what follows in the document is instead a plan to interfere with private companies’ design and development decisions about speech by ensuring “…AI procured by the Federal government objectively reflects truth rather than social engineering agendas.”

All AI companies make decisions about what can be generated using their systems. These decisions are built into AI systems both via choices in the training data and procedures and via automated content moderation systems used to filter outputs before they are shown to users. What content is disallowed is determined by laws and societal norms, as well as company goals. Companies often aim to keep their systems from generating child sexual abuse material, overly violent content, hateful speech, or other content they deem undesirable based on legal and policy goals, including, in some cases, religious, financial, or political speech.

These decisions can certainly seem flawed. For example, in research with collaborators, we found that OpenAI’s content moderation system is likely to disallow television violence, even PG-rated animated superhero violence. 

But these decisions are not for the government to make, whether by prohibiting content or requiring it. In a document largely aimed at stripping away regulations, this provision stands out as remarkably heavy-handed. 

Cameron F. Kerry

The action plan adds fuel to the sovereign AI fire

The AI Action Plan is accompanied by an executive order, “Promoting the Export of the American AI Technology Stack,” aimed at promoting the export of full-stack American AI technology packages. By coincidence, the release of the plan came as I was moderating a Brookings and Centre for European Policy Studies roundtable focused on the burgeoning concept of “sovereign AI,” featuring presentations on two prominent examples: the Eurostack and India Stack. This trend around the globe reflects a desire to benefit from AI “while retaining control, agency, and self-determination over such systems.”

The “American AI technology stack” frame plants an American flag smack in the middle of the AI sovereignty space. Promoting American technology products advances U.S. AI leadership, jobs, and economic growth, and counters China by aiming to stem our allies’ increasing dependency on Chinese technology through the “distribution and diffusion of American technology.” Like-minded international partners should question whether to rely on AI products that are subject to “Xi Jinping Thought” and the “comprehensive leadership” of the Communist Party of China.

As the leader in AI research, development, and investment, America has a lot to sell. But the AI stack order may make this sell harder. It envisions picking industry consortia that put together “a full-stack technology package”—including compute, data systems, models, and applications—that will be eligible for U.S. government promotion abroad. Leading tech sovereignty advocates recognize they cannot wean themselves entirely from the U.S. and need to focus on strategic areas and comparative advantages. As one puts it, “[t]he challenge … is finding a balance between AI sovereignty and global collaboration.” An all-or-nothing sales pitch from the U.S. will make it hard to find this balance and heighten concerns about dependence on U.S. technology.

Aaron Klein and Jude Poirier

AI’s role in financial markets

The AI Action Plan does little to address concerns about implementing AI in financial services, particularly credit underwriting. Consumer credit decisions are dominated by the Fair Isaac Corporation (FICO), which integrates AI into its credit score algorithm. In line with its deregulatory stance toward AI, the Trump administration has dismantled the Consumer Financial Protection Bureau (CFPB), the federal regulator of FICO. This raises concerns about the administration’s ability to oversee the use of AI in deciding who gets access to credit.

The AI Action Plan has a laudable goal that “AI systems must be free from ideological bias and be designed to pursue objective truth.” However, AI operates on existing data, collected and processed over decades of discrimination. For example, LLMs have demonstrated racial biases in mortgage underwriting, reflecting the discrimination that has likely contributed to the stagnancy of Black homeownership rates over the past 50 years. To give credit to another part of the Trump administration, the Federal Housing Finance Agency (FHFA) has taken a positive step to address these problems by expanding credit allocation beyond FICO scores in mortgage finance, in line with the executive order’s goal of embracing new technology.

In its push to spur AI innovation, the administration’s AI Action Plan neglects problems with AI’s incorporation of existing biased data. To achieve the plan’s goal of unbiased AI, we must start by acknowledging that the data used in AI training has decades of discrimination baked into it. We hope that implementation of the plan will promote reforms like the FHFA’s move away from FICO toward alternative systems.

Raj Korpan

Gutting the NSF undermines America’s AI strategy before it starts

The Trump administration’s AI Action Plan signals a troubling shift away from safe, accountable AI toward rapid, private-sector-driven deployment. It touts openness, innovation, and American leadership, while simultaneously gutting the very agency it relies on to deliver those goals: the National Science Foundation (NSF). 

The plan assigns NSF sweeping mandates: Lead new AI research labs, expand the National AI Research Resource, develop testbeds for real-world evaluation, and invest in trustworthy, interpretable AI. It also directs NSF to support AI-enabled science, workforce development, and equitable access to advanced computing resources. 

While the NSF is expected to carry out this agenda, the Trump administration has simultaneously defunded, politicized, and destabilized it. More than 1,600 active grants, supporting everything from STEM education to foundational AI, have been abruptly canceled using political criteria. Staff from the Division of Equity for Excellence in STEM have been terminated. Billions in congressionally authorized funds remain illegally impounded. Meanwhile, a covert political review process is replacing peer review with ideological loyalty tests. 

Investing in AI should not mean sidelining ethical obligations. The public sector, particularly NSF, provides a vital counterbalance to concentrated private tech power. It enables long-term, interdisciplinary work to make AI safe, robust, and responsive to the real-world needs of all communities. Defunding that capacity while accelerating deregulation does not just threaten scientific leadership, it undermines public trust and institutional legitimacy. 

We don’t need to choose between innovation and integrity. A serious national strategy would fund both. If the United States wants to lead the world in trustworthy AI, it must start by protecting the institutions that make trust possible. 

Ivan Lopez

We need rigorous evaluation, not a “try-first” culture, to meaningfully adopt AI in health care

Clinicians and researchers are urging a more cautious pace for AI integration in medicine, emphasizing rigorous evaluation and governance. Despite this, the Trump administration’s AI Action Plan pushes for rapid deployment across health care institutions, calling for a “try-first” culture for AI across American industry. At the same time, research shows AI studies continue to fall short of the standards needed for safe and reliable clinical use: one systematic review found just 5% of large language model (LLM) studies incorporated real-world patient care data, and AI evaluations still rely heavily on metrics that often do not translate to clinical utility or reproducibility.

The AI Action Plan’s proposed “AI Evaluations Ecosystem” points in the right direction by supporting the development of the science of measuring and evaluating AI models. Its success, however, hinges on real-world adherence to the evidence it generates. For example, new frameworks and guidelines have already been proposed to improve the quality of AI studies in health care, though due to inconsistent adoption, many AI applications remain inadequately assessed. The plan’s call for regular gatherings of federal agencies and the research community could help bridge that gap by circulating best practices and fostering rigorous evaluation. 

The AI Action Plan also calls for a network of centralized “AI testbeds” within its proposed evaluation ecosystem. Yet, shared datasets alone are insufficient as they seldom capture an individual institution’s unique patient population, and models that excel on one dataset can fail when deployed elsewhere. Therefore, we must prioritize prospective studies, such as translational trials, and ensure that even under-resourced health systems have the support to run them on their own data to anticipate real-world impacts. These analyses probe generalizability, expose data and algorithmic bias, and confirm reliability before a patient is harmed. 

We should not treat health care as a mere productivity frontier and ignore the stakes: mis-triaged emergencies, biased predictions, and the erosion of patient trust. Until rigorous evaluation is the default, fast-tracking AI into clinical care will cause more harm than good. 

Mark Muro

How well does Trump’s patchwork AI plan support regional “readiness”?

Brookings’ recent work in evaluating AI “readiness” in U.S. cities provides a helpful framework to evaluate President Trump’s new action plan for AI.  

Central to the “readiness” framework are its pillars focused on “talent,” “innovation infrastructure,” and “business adoption,” as well as the claim that national AI ascendancy depends heavily on local economic dynamism and worker support. Trump’s new agenda speaks fairly well to the three pillars while ignoring the latter two considerations. To its credit, the action plan nods to the importance of boosting basic and applied research, consistent with the readiness agenda. However, it refrains from mentioning “universities” under either the “Advance the Science of AI” or the “Invest in AI-Enabled Science” sections. And though the plan recommends the National AI Research Resource (NAIRR) to democratize access to “compute,” it makes no mention of investing in the AI institutes program beyond elite academic institutions.

Similarly, the plan’s section on building up a skilled AI workforce answers the readiness framework’s call to expand AI-ready talent pipelines. Yet the plan refrains from suggesting mechanisms to increase inflows of foreign students and scholars, who have been shown to strengthen American technological innovation. That’s a huge gap.

And while the plan aggressively addresses multiple energy, permitting, and data center policy issues elevated in the Brookings report, it frames environmental regulations as mere impediments to U.S. domination rather than as frameworks for managing complicated tradeoffs, ignoring the nuance necessary to devise sound, sustainable policy.

Missing entirely is a discussion of federal AI policy’s role in scaling up regional cluster development. Federal policy can also provide meaningful worker-adjustment and active labor-market programs to offset the labor disruptions of AI, as the Brookings “readiness” agenda advises.

Still, the new agenda, if thoughtfully implemented and funded, could be moderately helpful to advancing AI readiness in U.S. regions.

Chinasa T. Okolo

AI competitiveness starts with maintaining sufficient research talent

Fundamental research conducted at academic institutions is the foundation of AI development. Thus, the federal government must prioritize efforts to fund and train AI researchers at the graduate level to help increase national competitiveness. However, recent efforts by the Trump administration to drastically reduce the budget of the National Science Foundation (NSF), a significant funder of computational research, run counter to many of the objectives within the AI Action Plan.

As a recent computer science Ph.D. graduate from one of the top computing programs globally, I personally understand the investment needed to sustain top-tier AI research at academic institutions. While the plan mentions the need to develop the federal government’s AI talent pipeline, increased investments in AI research should focus on developing new AI-centered fellowship and scholarship programs; partnering with tech companies to offer high-quality internship and apprenticeship programs; and expanding current efforts like the NSF’s National AI Research Institutes and its Computer and Information Science and Engineering Graduate Fellowships (CSGrad4US) program, which encourages domestic bachelor’s degree holders working in industry to pursue computing-related doctoral programs. Without such investments, there will be a shortage of skilled experts for current and future iterations of AI applications.

Stephanie K. Pell

Securing an American AI advantage requires federal agencies to meet the moment

The fact that the Trump administration’s new AI Action Plan embraces deregulation and promotes American AI dominance in the global arena should be no surprise. At a February AI summit in Paris, Vice President J.D. Vance gave a speech proclaiming “that excessive regulation of the AI sector could kill a transformative industry just as it’s taking off, and we’ll make every effort to encourage pro-growth AI policies.”  

As reflected in the introduction to the plan, the development and proliferation of American AI to “wi[n] the AI race” is critical to U.S. economic and national security. Winning this race will also require the U.S. to protect its “advanced technologies from being misused or stolen by malicious actors.” To address such threats, the action plan outlines a number of information-sharing efforts, calls for guidance directed at remediating AI-specific vulnerabilities and threats, and urges action to evaluate national security risks in U.S. frontier AI models.

The action plan anticipates that the Department of Homeland Security (DHS), National Institute of Standards and Technology (NIST), Department of Commerce (DOC), Office of the Director of National Intelligence (DNI), and other federal agencies will lead these efforts, which include recruiting “leading AI researchers at Federal agencies…to offer cutting-edge evaluations and analysis of AI systems.” But in the wake of the administration’s efforts to purge federal workers at scale and otherwise demoralize those who stayed, it’s unclear whether federal agencies can rise to the occasion.  

It remains to be seen whether federal agencies can both retain sufficient key personnel and present an attractive enough employment prospect to motivate other qualified candidates to join them in running a race of any kind.

Landry Signé

Trump’s AI Action Plan needs agile governance

The new AI Action Plan rightly emphasizes and elevates the role of the private sector in unlocking the potential of AI while reducing bureaucratic red tape. Its focus on developing an American AI workforce is also commendable, positioning the country to outcompete rivals and accelerate breakthrough innovations.

The plan does not, however, give sufficient attention to the crucial dimension of AI governance. Its lack of focus on accountability, ethics, and transparency creates real risks when it comes to protecting the public from unregulated AI systems: erosion of privacy, algorithmic bias, polarization, misinformation, exploitative surveillance, unchecked corporate control over critical technologies, unintended consequences for democratic governance, national security threats, and more. The plan underutilizes the role of regulators and public institutions in balancing two core challenges: first, the pacing problem, where law lags far behind fast-moving technology, and second, the coordination problem, where fragmented agency responses slow adoption and erode trust.

The administration could better bridge the gap between ambition and responsibility by incorporating “agile governance” into the action plan. A forward-looking approach to policymaking and regulation, agile governance allows governments to successfully address pacing and coordination challenges, seizing phenomenal opportunities while managing risks and ensuring the provision of public goods. Rather than relying on traditional top-down mechanisms, agile governance treats policymaking as a multi-stakeholder process that continuously learns and adjusts in response to change, whether anticipated or sudden, and advocates for governance systems that are flexible, data-driven, and focused on achieving measurable outcomes.  

Given the scale and complexity of AI innovation and geostrategic competition, successful approaches are those that promote innovative capacity across the public and private sectors through mechanisms such as co-creation, participatory design, and regulatory experimentation (e.g., sandboxes). Such tools and mindsets will not only enhance competitiveness but also ensure end products that better serve citizens and humanity in fair, transparent, and accountable ways. 

Nicol Turner Lee

The AI Action Plan’s goal of ideology-free AI will be virtually impossible to achieve

One of the first goals in the Trump administration’s AI Action Plan is to eliminate any references to misinformation; diversity, equity, and inclusion (DEI); and climate change in current frameworks. The administration reasons that AI systems should be devoid of any ideological bias and instead pursue objective truth, but that outcome holds different meanings for different individuals and communities. The plan goes further to suggest that AI systems not pursue “social engineering agendas” for users seeking factual information. This is in stark contrast to the Biden administration’s executive order, which aimed to develop policies and programs to weed out bias and discrimination to ensure a fair information ecosystem.

Under the new regime, such risk mitigation efforts conducted by the National Institute of Standards and Technology (NIST) will be eliminated, and the Federal Trade Commission’s (FTC) jurisdiction to investigate company liability will be scaled back.

It will take a herculean effort for the administration to apply “truth” to AI systems, especially those whose models learn from online information that is inherently biased by the historical and societal norms it reflects. AI models are also influenced by developers whose values, norms, and worldviews factor into the reasoning behind a model’s design. Finally, AI models deployed in particular sectors, including health, education, finance, and employment, assume and rely on distinct differences between consumers in their respective markets to determine eligibility. What these and other instances suggest is that it is virtually impossible to train AI on data that has not been shaped by the lived experiences of people and their communities.

For these reasons, the AI Action Plan has an opportunity to fund the development of more robust and inclusive datasets to cultivate tools that help target some of the most existential and unique threats in our society, whether by countering misinformation or by accelerating the discovery of new health cures. The U.S. should be investing in research to better understand bias identification and mitigation in algorithmic decisionmaking and unrepresentative large language models (LLMs), rather than supporting the sanitized training datasets referenced later in the plan, which run the risk of homogenizing or artificially constructing humans and their conditions. Further, safeguards that elevate consumer protection when faulty AI models deliver wrong decisions or contribute to increased liabilities will be required for America to lead the global AI race and gain the trust of consumers.

The objective truth is that the data feeding AI systems tend to be highly personal to users or reflective of the diverse and expansive behaviors of individuals, communities, and organizations online. If we want the rest of the world to purchase our products, individuals must trust the applications and feel that they are reliable, responsible, and representative. Failure to achieve this will make it difficult for the AI Action Plan to achieve its goals around the world.

Judy Wang and Nicol Turner Lee

Trump’s AI plan quietly guts copyright protections

In its new 28-page AI Action Plan, the Trump administration covers significant ground from infrastructure to research to international diplomacy. But despite persistent calls from major tech companies for clarity on the use of copyrighted materials in AI training, the plan makes no mention of the issue. That omission, however, should not be mistaken for neutrality.  

In a revealing unscripted moment during his speech, President Trump made his stance clear, stating that paying for every data point used to train AI models is simply “not doable” and that China is “not doing it.” Departing from the plan’s studied silence on the topic, likely intended to avoid political fallout, President Trump signaled support for allowing AI models to train on copyrighted materials without fair compensation to creators. It is a major win for tech companies, but a position widely opposed by artists, authors, and publishers.

By framing copyright enforcement as an obstacle to American competitiveness and looking to China, long described as “the world’s leading infringer of intellectual property rights,” President Trump effectively endorses a regime in which the Fair Use Doctrine is stretched to accommodate mass data scraping, without reckoning with its legal or ethical consequences. The action plan’s silence, coupled with President Trump’s remarks, reassures the industry that its practices will not trigger political backlash or litigation risk.

Yet this stance has real consequences. It marginalizes small creators who lack the resources to litigate against tech giants and deepens the imbalance between those giants and the cultural labor on which they rely. It undermines the incentive structure that fuels creativity, the very engine that drives innovation. In the race to beat China, the administration appears willing to sideline copyright law and the creators it was meant to protect, a stance that will only bleed into similar intellectual property debates.

Darrell M. West

Higher education will be critical to the plan’s implementation

President Trump’s plan calls for greater AI investment, deployment, and innovation so the United States can compete effectively with China, boost national defense, and propel economic development. He wants to expand data centers, invest in AI, encourage exports of American AI, and deploy algorithms throughout the federal government, all of which are noble objectives. 

But America can’t achieve those goals without vibrant higher education support. Universities have long played a crucial role in research and development (R&D). The National Science Foundation (NSF) and many other federal agencies have supported academic research that accelerated the internet, algorithms, wireless technology, and quantum computing, among other advances. It is fair to say the United States would not have the competitive advantage it has today without universities. 

The biggest issue with the president’s AI Action Plan is that it ignores other current actions that are harming the research and innovation ecosystem. Universities have lost important federal grants, foreign students face obstacles to studying in the United States, immigrants are being discouraged from entering the country, and academics are being punished for their political views. It will be challenging for America to remain competitive in scientific innovation amid inadequate support for higher education, weakening support for R&D, and restrictions on a talent pool that requires smart people from around the world.

Innovation is closely tied to critical thinking and independent assessment. Recent attacks on higher education create an environment that is not conducive to risk-taking, challenging the status quo, or making new products and services. It is time for the Trump administration to reconcile what its right and left hands are doing. Undertaking such contradictory moves weakens the ability of the plan to fulfill its aspirations.  

Tom Wheeler

AI innovation requires competition

The AI Action Plan begins—correctly—with the observation, “Whoever has the largest AI ecosystem will set global AI standards and reap broad economic and military benefits.”

Unfortunately, the plan as unveiled is unlikely to produce “the largest AI ecosystem” necessary for the United States to lead in AI. Its fundamental flaw lies in the absence of a domestic competition policy essential to delivering the kind of broad-based innovations necessary for global leadership. Instead, the plan reassures the dominant AI companies that, in the absence of oversight, they will have free rein to derail the market forces that have historically fueled innovation.

To be clear, no one is calling for what the plan describes as an “onerous regulatory regime.” Crushing regulation serves no one. But companies should not be allowed to write their own rulebook. The AI era demands risk-based, agile, and adaptive oversight different from the static one-size-fits-all regulation of the industrial era. If American AI models and applications are to match the president’s ambitious vision, then policymakers must be as innovative as the AI engineers themselves. 

President Trump’s AI policies face a critical choice between enriching a few big tech firms through loosened regulation and placing guardrails that foster competition and innovation. If the administration prioritizes deregulation, it will inevitably enrich a few already dominant companies, discourage emerging competitors, slow the development of new AI applications, and open the door for foreign AI innovators to seize market share.

American AI leadership depends on American AI innovation—and that innovation depends on American AI competition.

Niam Yaraghi

The AI Action Plan could strengthen health care

America’s AI Action Plan is a forward-looking strategy that embraces deregulation, infrastructure expansion, and workforce development to secure U.S. technological leadership. It gets many things right. By promoting open-source models, regulatory sandboxes for safe experimentation, and secure access to government data, the plan lays a strong foundation for innovation. In health care, these elements could accelerate breakthroughs in drug discovery, diagnostics, and personalized medicine. Its emphasis on using AI to augment, not replace, human work is a wise recognition of the need to enhance clinical productivity without displacing providers.  

The plan would benefit from stronger signals around the need for interoperability with existing health IT infrastructure, such as building on Fast Healthcare Interoperability Resources (FHIR), and around aligning AI data use with robust patient privacy protections like those in the Health Insurance Portability and Accountability Act (HIPAA). It also misses an opportunity to promote human-in-the-loop principles, which are essential for ensuring that clinicians remain central to care delivery in an AI-enabled system. These additions would help guide future policy, funding, and development efforts in a direction that balances innovation with trust and accountability.

If these gaps are addressed, the future of U.S. health care could be transformative. By 2030, we could see interoperable, privacy-preserving AI systems that enable seamless data sharing, real-time decision support, and improved outcomes. Clinicians would be supported by interpretable tools, allowing them to focus on complex care decisions while reducing errors and administrative burden. This future would deliver more efficient and trustworthy care, setting the global standard for AI-powered health innovation. 

