Commentary

A roadmap for a US-China AI dialogue

Blue illuminated AI processor on a motherboard. (IMAGO/Westlight via Reuters Connect)

When President Joe Biden and General Secretary Xi Jinping met in California in November 2023, their governments announced a new bilateral channel for consultation on artificial intelligence (AI). If both governments scope this effort wisely and focus on a few concrete, tractable issues, they have an opportunity to make lasting progress in reducing risks and building consensus around the governance of emerging technologies. If they fail to coalesce around common objectives, though, they risk creating another forum for the ritual airing of grievances. This window of opportunity may be fleeting, so they must use it purposefully.

What makes it so challenging for the two governments to, as Biden put it, “get our experts together to discuss risk and safety issues associated with artificial intelligence” is that there is no settled agreement, between or even within the two countries, on what the specific problem at hand is. Indeed, while the White House readout said Biden and Xi “affirmed the need to address the risks of advanced AI systems and improve AI safety through U.S.-China government talks,” China’s official Xinhua News Agency was more circumspect, writing that “establishing a dialogue between governments on artificial intelligence” was one area in which the two leaders agreed to enhance cooperation.

Scoping an AI dialogue is difficult because, in many U.S.-China engagements on the topic, “AI” does not mean anything specific. It means everything from self-driving cars and autonomous weapons to facial recognition, face-swapping apps, ChatGPT, and a potential robot apocalypse. What’s at issue in this dialogue, then, depends on which present and future technologies and applications are on the agenda. AI might remain an umbrella term, but to make progress, officials will need to select specific topics and problems where the United States and China could reduce risk and capture benefits while setting aside intractable issues and nebulous concerns.

This isn’t the first policy moment for AI

The United States and China are not actually new at this. Over the past year, large language models (LLMs) such as OpenAI’s ChatGPT triggered a huge surge in public and policy attention to, and research on, AI. Even so, the U.S. and Chinese policy communities have long been mulling automation’s possibilities and perils. In 2016, the Obama White House released a report on “Preparing for the Future of Artificial Intelligence,” which focused on how to regulate and capture the benefits of the era’s machine-learning techniques. In 2017, China’s State Council issued the “New Generation Artificial Intelligence Development Plan,” which focused on fostering data-driven automation while avoiding risks to social, economic, and political stability. Both documents reflected high-level official recognition that AI could be hugely useful while also posing new risks.

More than half a decade later, amid the hype around LLMs, the United Kingdom hosted the AI Safety Summit in November 2023, a major conference with significant U.S. and Chinese government and industry participation. There too, the “Bletchley Declaration” recognized both the potential and the risks “at the ‘frontier’ of AI.” The declaration followed efforts from Washington and Beijing to manage the development and application of automation, including voluntary commitments from major U.S. companies on transparency and security around generative AI systems, and regulations in China governing recommendation algorithms and companies offering synthetic media or generative AI as a service.

Let’s get specific

Neither government is starting from zero on the implications of machine-learning models that rely on large pools of data, deftly deployed math, and thousands or millions of processor hours. Even though official AI talks are new, the two countries are years into interaction around data-driven automation. So what specifically is there for these two governments to discuss in their new channel? What’s different this time is that individual machine-learning models have moved from being relatively built-to-purpose (i.e., useful for a narrow range of tasks) to being increasingly general-purpose, such that some of them provide useful (or seemingly useful) outputs for a wide and sometimes unknowable array of queries. For thinkers who dream of “artificial general intelligence” radically improving the human condition, and for those who fear it will be our doom, this is a milestone.

U.S. and Chinese citizens, and people around the world, have a lot at stake as these techniques are applied. U.S. and Chinese companies and researchers are among the leading innovators in the field, with access to some of the deepest stores of data and most extensive computational resources. Along with major early movers in the European Union, their governance efforts are likely to serve as models for other nations. Yet U.S.-China competition and distrust are intensifying, and with them the risks that common perils will not be met with effective responses, that uses of advanced models will produce unpredictable and destabilizing results, and that opportunities to meaningfully improve the human condition will be squandered.

To hit the sweet spot of ambitious but attainable progress, U.S. and Chinese officials should prioritize three baskets of issues: military uses of AI, enabling positive cooperation (on data sharing and trusted standards), and staying focused on the realm of the possible.

Military uses of AI. Arguably the most difficult automation-driven challenge for these governments to discuss is how the technology will affect U.S.-China strategic stability, which we refer to here as a state of mutual vulnerability. Capabilities that can be understood as forms of AI are already embedded in both countries’ military systems: autopilot in aircraft, computer vision in targeting systems, and pattern recognition in intelligence analysis are just a few examples. The challenge, therefore, is not to swear off military applications of AI, which is unachievable, but to begin building boundaries and common expectations around acceptable military uses of automation. For example, both countries should work toward common expectations of rigor in testing and evaluation before AI-enabled systems are fielded. They should also reach for tangible early wins, such as a common understanding that only humans can make nuclear launch decisions and that such decisions should never be automated.

Enabling necessary data sharing. Cutting-edge AI systems today require large amounts of data, which can be both an economic asset (a factor of production, in Chinese terms) and a potential security vulnerability if accessed by rival governments. China’s Data Security Law is heavily focused on such risks, and U.S. legislators are increasingly focused on the national and economic security implications of Chinese access to U.S. data. It should be possible, for example, to address climate risks or disease by sharing data between the United States and China, but security concerns about remote sensing data or human genomes could stymie progress. There are potential solutions, however. In industry, where personal information is regulated, privacy-enhancing technologies can enable the use of data for model training without revealing the underlying, potentially sensitive information to the developer. Yet governments would need to trust such techniques before they could be confident in allowing sensitive data to be shared. Assembling experts and devoting resources to develop data-sharing standards vetted by both governments could be immensely powerful, especially if the effort focuses on a few specific sectors where data protection concerns are the blocking factor.
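To make the idea of privacy-enhancing technologies concrete, here is a minimal sketch of one such technique, differential privacy, which adds calibrated noise to an aggregate statistic so that a shared result reveals little about any single underlying record. The function, parameters, and climate-sensor data below are hypothetical illustrations, not a system either government has endorsed; a real deployment would use audited open-source libraries and jointly vetted parameters.

```python
import numpy as np

def dp_mean(values: np.ndarray, lower: float, upper: float, epsilon: float) -> float:
    """Release the mean of a sensitive dataset with epsilon-differential privacy.

    Each value is clipped to [lower, upper] so no single record can shift the
    mean by more than (upper - lower) / n; Laplace noise calibrated to that
    sensitivity is then added before the statistic is released.
    """
    clipped = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(clipped)  # max influence of one record
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return float(clipped.mean() + noise)

# Hypothetical example: share an aggregate climate statistic without exposing
# raw station-level readings to the other side.
rng = np.random.default_rng(seed=0)
station_temps = rng.normal(loc=15.0, scale=3.0, size=10_000)
print(dp_mean(station_temps, lower=-20.0, upper=50.0, epsilon=1.0))
```

The smaller the epsilon, the stronger the privacy guarantee and the noisier the shared statistic; agreeing on such parameters, and on who audits the implementations, is exactly the kind of standards work the two governments could vet together.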

Common challenges, trusted standards. The U.S. and Chinese governments are both concerned with how to label or watermark the outputs of generative AI models. The technical and policy solutions for information integrity are nascent, and they will have limited impact if adopted one country at a time. U.S. and Chinese expert teams could explore ways to make such systems maximally compatible, likely in collaboration with industry and global standards organizations.
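Interoperability here is partly a mundane question of agreeing on formats and verification rules. As a loose, hypothetical illustration (not the C2PA standard or any scheme either government has endorsed), the sketch below attaches a tamper-evident provenance label to a generated artifact; a real cross-border scheme would use public-key signatures and manifest formats defined by standards bodies rather than a shared secret key.

```python
import hashlib
import hmac
import json

# Hypothetical shared key for demonstration only; an interoperable scheme
# would rely on public-key signatures, not a pre-shared secret.
PROVENANCE_KEY = b"demo-key-not-for-production"

def label_output(content: bytes, generator_id: str) -> dict:
    """Attach a verifiable AI-generated provenance label to content."""
    manifest = {
        "generator": generator_id,
        "sha256": hashlib.sha256(content).hexdigest(),
        "ai_generated": True,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["mac"] = hmac.new(PROVENANCE_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_label(content: bytes, manifest: dict) -> bool:
    """Check that the label matches the content and was not altered."""
    claimed = dict(manifest)
    mac = claimed.pop("mac", "")
    if hashlib.sha256(content).hexdigest() != claimed.get("sha256"):
        return False
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(PROVENANCE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)

# Usage: label a (hypothetical) generated image, then verify it downstream.
image = b"...generated pixels..."
label = label_output(image, generator_id="example-model-v1")
assert verify_label(image, label)
```

If U.S. and Chinese teams, working with industry and standards bodies, converged on compatible manifest fields and verification rules, a label applied in one country could be checked in the other, which is what would give such systems impact beyond a single jurisdiction.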

Staying focused. Perhaps the biggest pitfall for a new bilateral AI channel is the temptation to swerve into intractable, AI-adjacent disputes that are not what makes this moment such a crucial opportunity. For example, the Chinese government is unhappy with the U.S. government’s use of export controls to block Chinese access to the most advanced graphics processing units (GPUs), which for now makes computing power for large-model training scarcer for Chinese actors. Yet, given the Biden administration’s firm national security focus on limiting China’s access to GPUs, any effort to use this dialogue to adjudicate export controls would be a nonstarter. Similarly, U.S. officials have concerns about China’s use of machine learning for domestic surveillance. Any U.S. effort to use this dialogue to condemn China’s domestic governance practices, however justified, would push the talks into a cul-de-sac.

New general-purpose and specialized machine-learning models will have a deep and uncharted impact on the United States and China in the coming years. Both countries are at the forefront of developing these tools and will be among those most affected by them. The present opportunity does not erase zero-sum, or even lose-lose, possibilities as the two countries navigate competition and uncertainty. But focusing on practical ideas for governance and risk reduction will maximize the odds of seizing the opening that both leaders have created. The window for progress will not stay open forever, so early wins will be essential to underscore the value of investing in this dialogue for both sides.
