The global AI race: Will US innovation lead or lag?

Is Congress about to hand China the AI future?

Equipment for testing recently launched Amazon artificial intelligence processors that aim to tackle Nvidia and the chips made by the other hyperscalers such as Microsoft and Google are seen at an Amazon lab in Austin, Texas, U.S., July 19, 2024. (REUTERS/Sergio Flores)

As the Trump administration prepares to enter office, Congress is quietly advancing proposals during the lame-duck session that could have far-reaching implications for artificial intelligence (AI).

Throughout 2024, Congress has grappled with whether and how to regulate AI, spurred by concerns about potential risks ranging from bias in hiring systems to fears of runaway “superintelligence.” Indeed, Geoffrey Hinton, a recent Nobel Prize winner in physics also known as the “godfather of AI,” has warned about the risks of AI systems surpassing human intelligence and escaping human control as these models are integrated into decisionmaking systems. Elon Musk has said AI could be “more dangerous than nukes.”

Although these risks are speculative, AI’s exponential growth has breathed oxygen into proposals for new AI laws. Within five years, generative AI models grew from 1.5 billion parameters to over 100 billion—an increase that reflects the leap from mere word prediction and summarization to genuine reasoning tasks. The rate of increase has surprised developers themselves. What might these exponential increases say about the balance of power between humanity and technology in two more years?

The current Congress may believe it faces a closing window of opportunity to pass new AI laws, anticipating that the Trump administration’s positions will be less predictable. President-elect Donald Trump has highlighted AI tech as a “superpower” with “alarming” capabilities and could be receptive to AI regulation. However, he has also promised a lighter touch on tech regulation, pledged to repeal President Joe Biden’s executive order that issued the most comprehensive federal framework for AI oversight to date, and emphasized AI as a tool to “take the lead over China.”

While AI development presents risks, rushing to legislate is ill-advised. Even if the Trump administration scraps Biden’s executive order, existing legal statutes already address many immediate concerns, and the potential for overregulation threatens both U.S. economic innovation and national security leadership—particularly in the context of global competition with China.

Strong foundations, risky overreach

While some legislators and AI experts have called for new AI laws, recent evidence undercuts the notion of an imminent AI-driven apocalypse. “AI scaling laws”—the methods and expectations that labs have relied on to dramatically improve model capabilities over the last five years—are beginning to show signs of diminishing returns, meaning capabilities are improving more slowly than before. Not only are we far from the edge of sentience, but we are likely to remain in the era of narrow, task-specific AI—systems that play chess, recommend products, or assist with day-to-day decisions—which cannot independently form the complex, abstract goals necessary for large-scale autonomous (and nefarious) action.

Further, the United States already has a robust legal and regulatory framework that addresses many immediate AI concerns. The Equal Employment Opportunity Commission (EEOC), for example, can investigate discriminatory hiring algorithms under Title VII of the Civil Rights Act. Intellectual property disputes involving AI outputs can be resolved through the 1976 Copyright Act, which has already adapted to evolving technologies. Similarly, consumer protections, such as the Fair Credit Reporting Act, provide recourse against biased decisionmaking in financial systems. Recent city-level innovations, like New York City’s mandate for bias audits in automated hiring systems, show how established principles can be applied to AI without federal action.

Despite these strong foundations, recent U.S. regulatory proposals highlight the dangers of overreach. The Biden administration’s 2023 executive order on AI required “red teaming” to identify vulnerabilities in AI models and mandated extensive reporting on cybersecurity and development practices. A 2024 Commerce Department rule mandated detailed disclosures that could expose proprietary technologies and undermine competitive advantages.

In August 2024, the Department of Justice (DOJ) sued RealPage, which uses software to help landlords make data-driven pricing decisions by analyzing real-time market data. Critics argued that the DOJ’s lawsuit rested on a misunderstanding of how algorithmic pricing tools function—a recurring challenge when regulating complex and rapidly evolving technologies.

Indeed, the European Union (EU) provides another cautionary tale. The General Data Protection Regulation (GDPR) increased compliance costs and disproportionately hurt smaller firms. Research indicates that GDPR compliance reduced profits by 8% and stifled growth. Similarly, the European AI Act (AIA) aims to establish high global standards for AI safety but introduces burdensome risk assessment and transparency mandates. Compliance costs, estimated at €400,000 (about $423,000) per company, are projected to reduce AI investment in Europe by 20% over the next five years. Rather than fostering innovation, the EU’s regulatory-first approach risks deterring investment and driving talent and capital to less restrictive markets like the United States and Asia.

Increased reporting and compliance costs tend not to affect big tech firms, but they do squeeze out smaller start-ups, whether because compliance is unaffordable outright or because it diverts scarce resources away from research and development. These barriers may also deter new entrants, narrowing the diversity of AI developers and limiting breakthroughs.

The results for the tech climate are clear. In 2023, European start-ups raised less than half the venture capital funding of U.S. companies. American tech start-ups often have access to more capital and resources, enabling them to develop and scale innovative technologies faster.

In the area of AI specifically, the United States attracted €62.5 billion (approximately $66 billion) of private AI investment, while the EU and U.K. combined secured just €9 billion (approximately $9.5 billion). The majority of large language models originate in the United States, with negligible contributions from Europe.

The United States continues to lure European tech talent with bigger salaries—one study in June 2023 showed that German and French tech salaries are, respectively, 48% and 37% of the salary for the same job in the United States. These regulations and investment patterns have clear consequences. The top seven tech companies in the United States are 20 times bigger than Europe’s seven largest and generate 10 times more revenue.

AI leadership as a geopolitical imperative

Nowhere is the need for research and development flexibility more important than in AI. AI lies at the heart of U.S.-China geopolitical competition, with both countries recognizing its transformative potential for economic growth and military dominance.

The United States retains a significant lead in AI development. An EU report in April 2023 found that 73% of large language models are being developed in the United States, compared to China’s 15%. Stanford’s Global Vibrancy Tool, which measures AI patents, investment, and papers by country, corroborates American leadership, revealing that in 2023, the United States attracted far more AI-related private investment than China ($67.2 billion to $7.8 billion). In terms of AI research productivity, China had caught up with the United States by 2010 but has since fallen slightly behind, according to an article in Nature.

However, this lead may not last. China’s 2017 New Generation Artificial Intelligence Development Plan outlines a bold vision for AI supremacy by 2030. Through massive state-led investments and its strategy of military-civil fusion, China is rapidly integrating advancements from its commercial sector into military operations. Technologies like autonomous drones, surveillance systems, and AI-driven decisionmaking tools demonstrate how civilian AI developments are directly fueling military innovation.

The United States, by contrast, has historically relied on the strength of its private sector to drive technological innovation, with breakthroughs in fields like aerospace, semiconductors, and computing often originating in commercial industries before being adapted for national security purposes. AI is no different. Companies like OpenAI, Anthropic, Google, and Microsoft are leading in cutting-edge research, with innovations that not only shape the civilian economy but also hold transformative potential for military applications, such as predictive analytics, autonomous systems, and advanced cybersecurity defenses.

Toward a globally competitive AI policy

To compete with China’s strategic advances, the United States should enhance its strengths by maintaining a light-touch regulatory approach that fosters innovation while addressing risks through targeted, flexible measures. For example, collaborative frameworks like the Frontier Model Forum, launched by leading private-sector AI developers, demonstrate how industry-led efforts can address shared risks without stifling progress. These initiatives show that innovation and accountability are not mutually exclusive and that the private sector can lead in crafting responsible AI practices. Critics may argue that self-regulation lacks enforceability, but it offers a pragmatic path forward, particularly in an industry as dynamic as AI.

Retaining and cultivating talent is also a decisive factor in this competition. China has aggressively recruited top AI researchers through its Thousand Talents Plan and other initiatives, while simultaneously nurturing domestic talent pipelines. The United States can counter this by expanding visa programs to attract and retain international researchers and by investing heavily in domestic STEM education to build a robust AI workforce. Ensuring that the best minds in AI choose to innovate within the United States will be essential for maintaining leadership in the field.

At the same time, Congress should not dismiss the potential for narrowly targeted legislation where the risks are clear and urgent. For instance, the proposed Defiance Act, which aims to regulate harmful uses of nonconsensual, explicit deepfake technology, illustrates a focused approach that addresses tangible threats without overburdening the broader AI ecosystem. Yet it also highlights the challenges of legislating in this area: such laws could impact protected forms of expression, the nonconsensual nature of content can be difficult to prove, and statutes must keep pace as AI technology rapidly evolves.

Striking this balance is critical. Poorly crafted laws could discourage investment, deter start-ups, and compromise American leadership in AI. By fostering innovation and retaining top talent through flexible policies, the United States can remain at the forefront of the global AI race while ensuring accountability and ethical development.

The Brookings Institution is committed to quality, independence, and impact.
We are supported by a diverse array of funders. In line with our values and policies, each Brookings publication represents the sole views of its author(s).