Commentary

Laying the groundwork for US-China AI dialogue

People walk near a sign for Sapeon, an artificial intelligence chip company, at the Mobile World Congress in Barcelona, Spain, February 27, 2024. Reuters/Bruna Casas.

Over the past five years, the Brookings Institution and the Center for International Security and Strategy at Tsinghua University (Tsinghua CISS) have convened teams of national security technology experts for an unofficial Track-II dialogue on artificial intelligence (AI) in national security. With funding from the Berggruen Institute and the Minderoo Foundation, the two sides have met regularly in neutral countries. These efforts have spanned two U.S. presidential administrations and sustained productive engagements despite intensifying rivalry in the U.S.-China relationship and a global pandemic. From the beginning, the two teams have met quietly to explore whether it is possible to gain a greater understanding of how each side reaches decisions on the employment of AI in national security systems and whether both sides might be able to agree on common boundaries around acceptable uses of AI in national security.

The dialogue has demonstrated it is possible for U.S. and Chinese experts to narrow differences and puncture myths about each side’s approach to the employment of AI in national security. Both sides have developed shared understandings, for example, that the use of AI-enabled weapons systems should comport with the principles of customary and international law, particularly the principles of distinction and proportionality. Both sides have agreed that AI-enabled weapons systems should operate under appropriate human oversight or control. They have built a common understanding that humans, and not AI-enabled systems, must maintain control over and be responsible for the final decision to use nuclear weapons. And they have identified the need for the two teams to build a glossary of AI terms to enable a precise understanding of each other’s intended meanings when discussing AI and national security.

Throughout this process, both sides have deliberately worked with discretion, in part to help lay a path for the U.S. and Chinese governments to gain confidence in dealing directly with each other at an official level on these sensitive questions. On November 15, 2023, President Joe Biden and President Xi Jinping endorsed establishing a U.S.-China dialogue on artificial intelligence. The U.S. and Chinese governments will commence this official dialogue in the coming weeks. In this context, we believe there is considerable opportunity for Track I and Track II dialogues to inform one another, broadening and deepening understanding between the world’s two most consequential AI powers.

There is no time to waste. Both countries are leading the world in developing AI-enabled national security capabilities, even as the United States maintains a notable lead over China. The pace of progress should instill a sense of urgency in both governments to narrow gaps in understanding and advance risk-reduction efforts related to AI-enabled military systems. Even as the U.S. government engages directly with Chinese counterparts, it also will need to pursue parallel efforts with allies and partners to broaden support for its vision and establish a set of shared principles for managing risks.

As governments take up this cause, the Brookings-Tsinghua CISS Track-II dialogue will continue apace. This unofficial dialogue will help identify areas ripe for U.S. and Chinese interaction, while also developing a body of public knowledge around opportunities and risks related to the employment of AI-enabled national security capabilities.

One contribution to this development of public knowledge will be the rolling publication of a glossary of AI terms. At the ninth dialogue meeting in Munich on February 18, the Brookings and Tsinghua CISS teams discussed the first six terms: weapons systems, unmanned systems, autonomy and automation, autonomous weapons systems, lethality, and human-machine interaction. The two teams did not reach for the lowest common denominator of a shared definition for these terms but rather sought to directly elucidate and compare the definitions for these terms published by their respective governments’ relevant agencies.

As U.S. and Chinese government officials pick up the baton for these talks in the coming weeks, there are several areas where their efforts could have an impact in lowering risk. Direct government-to-government conversations could usefully advance discussions around military applications of artificial intelligence. This should include a discussion of how countries can avoid unintended military escalation by ensuring AI-enabled systems maintain appropriate levels of human oversight and judgment and adhere to principles of customary and international law. It would also be productive for talks to examine areas of mutual vulnerability, such as AI-enabled systems that may threaten critical infrastructure (e.g., by facilitating cyberattacks on hospitals, dams, water treatment facilities, electrical grids, or air traffic control systems) or threaten public safety (e.g., by aiding the development and release of dangerous pathogens). Exploring these issues would enhance the prospects of identifying new confidence-building measures for military applications of AI and developing safety measures to mitigate other shared risks emanating from advanced AI systems.

As the creation and distribution of video content increasingly shifts to machines, the U.S. and Chinese governments also will need to contend with how to manage the potential proliferation of AI-enabled misinformation and disinformation, including deepfake videos and other synthetic media that implicate each other's domestic political systems. How can they ascertain the origin and intention of deepfake videos that interfere in elections or undermine public confidence in leaders? How should each side require that outputs of generative AI models be watermarked or otherwise labeled? Where and how can the boundaries of acceptable state behavior in AI-enabled information operations be drawn? What channels should be used to clarify intentions in instances when deepfake videos or other synthetic media originating in one country begin to circulate in the other? Both the United States and China would do well to capitalize on the lessons learned from the cybersecurity domain and seek to establish clear guidelines early on for the use of AI-generated media in political processes.

The rapid advance of AI capabilities in both countries promises considerable opportunities. However, AI will also produce new risks and disruptions that, if not managed well, could be destabilizing to relations between nuclear-armed powers such as the United States and China.

Biden and Xi have opened a window to make progress in advancing practical ideas for AI governance and risk reduction. Few efforts will have a greater long-term impact on the safety and well-being of people in both countries and the world. Through this ongoing Track-II dialogue, the Brookings and Tsinghua CISS teams will support both governments and contribute to building public knowledge during this critical period. We will provide further updates on progress as warranted.