Can the US and China cooperate on AI?

People stand next to a humanoid robot from Unitree Robotics during the Global Developer Conference, organized by the Shanghai AI Industry Association, in Shanghai on February 21, 2025. (Photo by HECTOR RETAMAL/AFP via Getty Images)

President Donald Trump will travel to Beijing for meetings with President Xi Jinping on May 14-15. Secretary of the Treasury Scott Bessent and his team have been given responsibility for preparations. Ahead of the trip, Treasury officials have signaled publicly that Trump and Xi plan to discuss artificial intelligence (AI), and specifically, areas of mutual cooperation on “security and threats from nonstate actors.”

To provide context on the significance of this upcoming meeting and its implications for how the United States and China will relate to each other on AI, Brookings China Center Director Ryan Hass conducted a written interview with China Center Fellow Kyle Chan. Chan is an expert on China’s technology development and industrial policy, as well as U.S.-China relations.

Ryan Hass:
Why is it important now for the United States to engage China directly on addressing risks posed by malicious use of AI by nonstate actors?

Kyle Chan:
AI models are becoming powerful enough to create serious national security risks. In the wrong hands, they could help malicious actors launch cyberattacks, target critical infrastructure, or develop biological weapons. The United States and China are both potential targets for such attacks. Both nations share a common national security interest in preventing AI-enabled attacks from terrorist groups, criminal networks, and other nonstate actors.

As the developers of the world’s most advanced AI systems, the United States and China are uniquely positioned to address these risks. This does not require broad trust, strategic alignment, or compromise on national interests. The United States and China can continue to compete vigorously in AI while taking practical steps to reduce shared risks.

Ryan Hass:
What concrete actions can the United States and China take to reduce AI risks?

Kyle Chan:
First, nonbinding AI guidelines: The United States and China should establish a common set of nonbinding safety guidelines for deploying frontier AI models. These guidelines could include guardrails for cyber, chemical, and biological assistance; shared definitions of high-risk use cases; and baseline restrictions on dangerous model behavior. Common standards would help prevent “safety arbitrage,” where malicious actors simply seek out the least restrictive model available in either country.

Second, limited information sharing: The United States and China should share limited information about the attempted misuse of AI systems. This could include examples of prompts, outputs, behavioral patterns, suspected objectives, and categories of threat actors. Even narrow information-sharing would help government agencies and AI companies identify recurring tactics, improve detection systems, and build countermeasures against real-world threats.

Third, an AI emergency hotline: The United States and China should establish formal and informal emergency communication channels for AI-related incidents. During a live crisis or AI-enabled attack, such channels would help both sides quickly share information, clarify attribution, and reduce the risk of miscalculation. Where interests overlap, such channels could even allow both governments to coordinate responses.

Ryan Hass:
What do you anticipate will be the top issues or concerns on Xi’s mind when he and Trump discuss AI?

Kyle Chan:
Global AI governance: Xi wants to position China as a leader in shaping global AI governance, from technical standards to risk reduction. Xi may see this summit with Trump as an opportunity to signal to the world that the United States and China are working together on AI. For the United States, this meeting offers an opportunity to reassert American leadership in global AI governance. As the technological frontrunner in AI, the United States should play a central role in shaping the rules, standards, and norms that govern this technology, ensuring that they reflect U.S. interests, protect national security, and prevent China from defining the terms of AI governance on its own.

Military and AI: China is closely watching how the U.S. military is using AI in recent conflicts, even as Beijing accelerates its own development of AI-enabled military capabilities. The danger is an AI arms race in which both countries feel pressure to integrate AI into military operations as quickly as possible, potentially at the expense of safety precautions and human oversight. At this stage, neither side is likely to accept binding limits that could constrain its own capabilities or create a perceived strategic disadvantage. But Xi may still seek an official dialogue on military uses of AI to reduce the risk of miscalculation, unintended escalation, or loss of control.

Putting a floor on AI competition: The United States and its allies have imposed a broad set of technology controls on China, including export controls on advanced AI chips and chipmaking equipment. The United States has also sought to discourage third countries from relying on Chinese AI hardware, including Huawei’s Ascend chips, by raising sanctions risks. At the upcoming summit, Xi may seek to establish a floor on what he sees as American efforts to limit China’s AI development and global AI expansion. The goal would not be to end AI competition but to prevent it from escalating into a broader containment effort.

The Brookings Institution is committed to quality, independence, and impact.
We are supported by a diverse array of funders. In line with our values and policies, each Brookings publication represents the sole views of its author(s).