The following is a summary of the 42nd session of the Congressional Study Group on Foreign Relations and National Security, a program for congressional staff focused on critically engaging the legal and policy factors that define the role that Congress plays in various aspects of U.S. foreign relations and national security policy.
On June 24, 2025, the Congressional Study Group on Foreign Relations and National Security held a virtual discussion on the global diffusion of artificial intelligence (AI) technology, and how (and whether) the United States should regulate it. The session followed the Trump administration’s recent decision to rescind a controversial rule issued by the Biden administration that would have used U.S. export control laws to limit diffusion and shape how AI technology develops and spreads overseas. But how does this phenomenon of AI diffusion intersect with U.S. strategic interests? And to what extent can U.S. export control laws and other measures truly be used to control it—both now, and into the future?
The study group was joined by four experts in AI law and policy:
- Aaron Cooper, a partner at Jenner & Block and until recently a deputy legal advisor on the National Security Council;
- Janet Egan, a senior fellow in the Technology and National Security program at the Center for a New American Security and a former Australian government official;
- Kevin Frazier, the AI innovation and law fellow at the University of Texas at Austin School of Law and a senior editor at Lawfare; and
- Nikita Lalwani, an attorney at Jenner & Block and until recently the director for technology and national security at the National Security Council.
Recommended background readings included:
- “Janet Egan and Lennart Heim on the AI Diffusion Rule,” Lawfare Daily (Jan. 17, 2025) (podcast);
- Kevin Frazier, “The Dangers of AI Sovereignty,” Lawfare (Apr. 7, 2025);
- Janet Egan and Spencer Michaels, “Five Objectives to Guide AI Diffusion,” Center for a New American Security (Apr. 29, 2025); and
- Aaron Cooper and Nikita Lalwani, “Client Alert: What’s Next for the AI Diffusion Rule,” Jenner & Block (May 21, 2025).
The discussion began with an exploration of the current landscape of AI diffusion and its implications for U.S. national security. AI diffusion refers to the global spread of AI technologies, particularly AI chips, data, models, and training infrastructure. The panel highlighted the uneven distribution of AI infrastructure, noting that the U.S., China, and the EU host the majority of the world’s advanced data centers. This disparity raises significant concerns about global power dynamics, especially as nations like China position themselves as competitors to U.S. leadership in AI.
The focus then shifted to the strategic role of export controls in regulating AI diffusion. The panel examined the Biden-era AI diffusion rule, which aimed to control the export of advanced AI chips in order to safeguard U.S. national security and technological leadership. This rule’s rescission by the Trump administration raised important questions about the effectiveness and future of such regulatory efforts. The discussion highlighted the difficulty the U.S. faces in controlling not only AI chips, but also other critical components of the AI ecosystem, such as open-source models, algorithms, and application programming interfaces (APIs). While the U.S. holds a clear advantage in chip production, which can be more easily regulated because chips are tangible, proprietary goods, the broader challenge lies in the more diffuse layers of the AI stack. APIs and data, in particular, are harder to control because they are more accessible and move more fluidly across borders, complicating efforts to maintain a strategic advantage. The panelists underscored the complexity of balancing the need for security with the imperative to allow innovation and global collaboration in this rapidly advancing field.
AI was framed as a dual-use technology, capable of fostering both economic growth and military advancement. Panelists emphasized the transformative potential of AI in areas like intelligence gathering, autonomous weaponry, and cyber warfare. Given these developments, maintaining control over AI’s diffusion is critical for U.S. strategic interests. Concerns were raised about adversarial states, particularly China and Russia, leveraging AI technologies to gain strategic advantages. This prompted a broader discussion of how the U.S. might regulate AI in a manner that ensures national security while also fostering innovation and economic growth.
In highlighting the importance of regulating the AI ecosystem strategically, the discussion focused on three essential components: data, algorithms, and computational resources (such as AI chips and data centers). While the U.S. currently holds a strategic advantage in AI chips, international partnerships and the sharing of technology with trusted allies will be essential for maintaining global leadership. However, the challenge lies in balancing control over AI diffusion with the need for continued technological innovation in both national security and commercial sectors.
The role of Congress in overseeing AI diffusion and regulating its spread was then addressed. Panelists agreed that Congress should take an active role in shaping policy and providing adequate resources to agencies like the Bureau of Industry and Security (BIS). This agency is responsible for enforcing export controls but has been under-resourced in addressing the rapidly evolving field of AI. Strengthening BIS’s capacity to enforce export controls would better position the U.S. to maintain its competitive advantage in AI technology.
The panel discussion closed with a call for enhanced international cooperation in regulating AI. Multilateral partnerships with countries such as Japan, South Korea, and members of the EU could contribute to a more secure global AI ecosystem, with trusted nations working together to ensure that AI technologies are not misused. Panelists also emphasized the need to continuously monitor emerging AI technologies and adapt regulatory frameworks to keep pace with advancements in the field.
The study group session then concluded with an open discussion during which attendees were invited to ask questions and comment on the issues raised during the discussion.
The Brookings Institution is committed to quality, independence, and impact.
We are supported by a diverse array of funders. In line with our values and policies, each Brookings publication represents the sole views of its author(s).