AI risks from non-state actors

U.S. President Donald Trump speaks during the "Winning the AI Race" summit hosted by All‑In Podcast and Hill & Valley Forum at the Andrew W. Mellon Auditorium on July 23, 2025 in Washington, DC. (Chip Somodevilla/Getty Images)

Since October 2019, the Brookings Institution’s Foreign Policy program and Tsinghua University’s Center for International Security and Strategy (CISS) have convened the U.S.-China Track II Dialogue on Artificial Intelligence and National Security. This piece was authored by members of the U.S. and Chinese delegations who participate in this dialogue. This collection of essays explores whether it would be feasible for the United States and China to coordinate to address risks of artificial intelligence (AI) misuse by non-state actors, what conditions might be needed to enable such coordination, and whether bilateral coordination would be sufficient to address the threat or if broader capacity-building and cooperation would be necessary.

Kyle Chan

How can the United States and China address emerging AI risks?

One of the most pressing challenges posed by the rise of advanced AI tools is their potential misuse by non-state actors for malicious purposes. Frontier AI models with high-performance reasoning and agentic capabilities are widely available to the public at very low cost. For several hundred dollars, users can access models that achieve gold-level performance on the International Math Olympiad and outperform all but the world’s best human programmers.

These powerful, rapidly improving AI tools have the potential to provide significant malicious uplift, that is, the amplification of harmful capabilities available to non-state actors, whether individuals, terrorist networks, criminal organizations, or other malicious groups. Building on remarks by Colin Kahl at the most recent U.S.-China Track II Dialogue on Artificial Intelligence and National Security, this piece outlines potential risks from AI use by non-state actors and how the United States and China might work to address these challenges.

AI tools can empower malicious actors in a wide variety of ways. They can increase the speed and scale of existing capabilities, and they can enable entirely new capabilities for individuals or groups that previously lacked the knowledge or resources to cause large-scale harm. Capabilities once limited to sophisticated military organizations and large-scale state actors may now be accessible to small, nonexpert groups or even individuals. The following are some examples of how AI systems can enhance non-state actors’ ability to cause societal harm and disruption.

  • Augmented cyberattack capabilities: AI tools can be used to rapidly scan a potential target’s attack surface, identify security vulnerabilities, and write and execute programs to exploit those vulnerabilities. Hundreds or thousands of AI agents can operate in parallel to find and target security loopholes, generating custom code on the fly to circumvent IT defenses and overwhelm system networks. An example would be an AI-enhanced cyberattack on a major utility company or bank that is difficult to shut down and destabilizes a nation’s energy or financial system.
  • Deepfakes and misinformation: AI tools can be used to generate highly realistic images, videos, or audio clips designed to mislead the public. Examples include fabricated statements by political leaders, AI-generated news articles about a fabricated crisis, and AI-driven chatbots spreading malicious rumors on social media platforms.
  • Phishing attacks: AI tools can allow malicious actors to realistically impersonate coworkers, organization members, or other personal contacts through email and other messaging platforms. AI systems can also automate large-scale personalization, generating tailored messages with a higher likelihood of success.
  • Biological and chemical weapons: AI tools can help unsophisticated non-state actors develop biological or chemical weapons at an expert level, including sourcing materials and outlining procedures for manufacturing them in a laboratory.
  • Data collection and targeting: AI tools can be used by non-state actors to collect and analyze publicly available data to identify targets, map behavioral patterns, and support planning for a cyber or physical attack. Targets may include individuals, corporations, or government agencies that play a central role in critical systems, such as infrastructure.
  • Red teaming and attack planning: AI tools can be used by malicious non-state actors to conduct scenario planning, identify potential countermeasures, and generate counterstrategies. These systems can iteratively refine attack plans based on simulated responses, enabling low-skill actors to orchestrate highly coordinated and adaptive operations. For example, AI tools may be used to simulate a corporation’s cybersecurity response to a cyberattack and develop alternative strategies for circumventing these measures.
  • Autonomous weapons: Rapid improvements in autonomous perception and navigation systems may allow non-state actors to deploy small, inexpensive drones or robotics platforms capable of independently locating, tracking, and striking targets while evading detection.

AI biosecurity risks and non-state actors

Of all the potential uses of AI tools by malicious non-state actors, the development and deployment of biological weapons present a particularly concerning challenge. On the one hand, scientific advances in AI and synthetic biology have significantly expanded the range of possibilities for creating novel DNA sequences and biological organisms. New AI-enabled biological design tools allow researchers to rapidly develop and test biological constructs, pathways, and experimental designs. These advances in AI and biology can potentially accelerate the discovery of new therapeutics and treatments for long-standing medical conditions.

On the other hand, these same AI-enabled tools can empower non-state actors to deliberately engineer novel pathogens or toxins with the potential for large-scale or even global effects. AI tools without sufficient guardrails can provide step-by-step planning and instructions that enable nonexpert users to acquire the requisite materials and manufacture potentially harmful biological agents. In particular, AI-enabled viral vector design platforms and protein engineering tools have been identified as especially high risk because they may empower a malicious non-state actor to design or modify pathogens in a way that enhances transmissibility or lethality. AI tools may also be used to support the dispersion of pathogens, which would otherwise be a key bottleneck for potential bioweapons.

U.S. and China: Mitigating AI risks from non-state actors

As the world’s two AI superpowers, the United States and China have a unique capacity and responsibility for mitigating the potential risks from AI use by non-state actors. The vast majority of the world’s top AI models are developed and operated by American or Chinese AI labs, and the world’s most powerful AI compute clusters are concentrated in these two countries. Moreover, both the United States and China have a shared national interest in preventing malicious AI-enabled attacks from non-state actors that might threaten their own critical infrastructure, public safety, or broader geopolitical stability. Despite intense competition between these two countries in AI and other emerging technologies, there are concrete areas where the United States and China could coordinate or even cooperate to address AI-enabled threats from non-state actors.

First, the United States and China could coordinate on a common set of safety guidelines for AI model deployment, including output guardrails for cyber, chemical, and biological uses. Such guidelines could establish shared restrictions on model behavior, identify potential high-risk use cases, and increase the likelihood that certain malicious capabilities are consistently limited across all AI systems. A set of joint guidelines would prevent “safety arbitrage,” in which non-state actors seek out the least restrictive AI systems in either country to engage in malicious activity.

Second, the United States and China could share information on threat actors that have attempted to use AI models for malicious purposes. This information could include the types of activity conducted, examples of prompts and outputs, patterns of behavior, likely intended outcomes, and the possible individuals or groups involved. By sharing even limited information about specific attempts at malicious use, AI-related government agencies and industry groups in both countries can learn from real-world cases and develop countermeasures for similar threats. In addition, such information-sharing could help identify cross-border patterns of malicious activity that might otherwise be difficult to detect within a single country. Previously, the United States and China have cooperated on law enforcement and information sharing over counterterrorism, cybercrime, and counternarcotics, albeit to a limited degree.

Third, the United States and China could establish formal and informal emergency communication channels (i.e., AI “hotlines”) to facilitate information exchange during a crisis that potentially involves non-state actors. For example, a non-state actor may attempt an attack on one country while disguising the origin of the attack to make it appear as if it came from another country (i.e., spoofing). A dedicated U.S.-China communication channel on AI would allow both sides to clarify the source of the attack and deconflict a potential crisis before escalation occurs. During non-emergency periods, this communication channel could be strengthened through regular exchanges on emerging AI risks from non-state actors.

These areas of potential U.S.-China coordination would require not only new mechanisms within the bilateral relationship but also internal institutional scaffolding within each country to provide technical and political support for these tasks. As AI systems continue to evolve, new and unexpected risks from non-state actors are likely to emerge. The United States and China have a central role to play in mitigating these risks for the sake of their own national security and for the preservation of global peace and stability in a time of rapid technological change.

Michael E. O'Hanlon

AI risks and the difficulty of arms control

Kyle Chan has written a very fine paper on how the United States and China can collaborate to address the potential dangers of malicious non-state actors misusing artificial intelligence. I will reiterate some of the concerns he raises and conclude on a broader note about AI and U.S.-China relations.

First, I will offer a broad observation on the nature of scientific and human progress, as I wrote for Brookings last December. It is one thing to predict that AI will do great net good for the human race, but quite another matter simply to assume it. The same goes for advances in microbiology and genetic engineering. Humanity should have learned by now that techno-optimism is a shaky reed on which to base hope for the future. After all, the optimism surrounding the internet and globalization at the end of the 20th century gave way to a 21st century afflicted by terrorism and great-power rivalry—just as the early 20th century’s technological advances were soon followed by the world wars. Arguments about the near inevitability of human progress seem far too optimistic.

In the wrong hands, or in the wrong context, today’s new technologies could be very dangerous and hard to control, as Chan underscores. AI already introduces risks from disinformation, deepfakes, and advanced cyberattacks; the future dangers it poses could be much worse. AI could, and will, be used by criminals and other nefarious actors in ways that threaten many individuals in their daily lives. Again, as Chan notes, AI could also be used to devise war strategies, create swarms of autonomous offensive robotic devices acting as an attacking network, or destroy critical national infrastructure with cyberattacks. Modern microbiology could be used to engineer diseases like COVID-19, or worse, pathogens that combine high contagiousness with high lethality. AI might assist in the development of such diseases, bringing two signature 21st-century technologies into a dangerous alliance. Yoshua Bengio, one of the great leaders of AI, worries that poorly regulated superintelligent AI could manipulate a rogue human into creating and unleashing an exceptionally lethal virus.

Regulating AI and microbiology will be more difficult than regulating nuclear technologies or other traditional weaponry. Biological laboratories and computers lack the large, distinctive physical footprints that make arms control more straightforward. Developing a verification regime for the Biological Weapons Convention has proven elusive, for example.

Yet the venture is not hopeless. There are some promising ideas, which I have written about elsewhere, such as screening those who seek to access and employ advanced nucleic acid synthesis tools, as well as “societal verification” of the activities of top-tier scientists. Chan’s recommendations that China and the United States share information on known dangerous actors, ensure basic safety standards for AI developed within their own countries, and otherwise promote best practices make sense.

But will all that really be enough? Such efforts are likely to succeed in deterring or detecting illicit activity only if the world’s major powers and institutions act with shared purpose, rather than deploying advanced technologies against one another. I will therefore conclude with a provocation: within the lifetimes of most of those reading this, China and Taiwan should find a workable formula for coexistence, confederation, or commonwealth that both sides can accept.

Qi Haotian

Mitigating AI risks from non-state actors: Context, feasibility, and shared responsibility

I broadly agree with the core assessment of Chan’s article that the misuse of advanced AI by non-state actors represents a growing and transnational security challenge, and that the United States and China, as the world’s leading AI powers, bear a special responsibility to cooperatively address these risks. As Chan articulates, the potential for “malicious uplift” enabled by frontier AI systems—particularly in cyber operations, disinformation, biosecurity, and autonomous systems—underscores the urgency of moving beyond abstract risk recognition toward practical, coordinated governance responses.

For both China and the United States, preventing AI-enabled harm by malicious non-state actors is not a concession to geopolitical rivals but a matter of shared national security, societal stability, and global public interest. At the same time, effective cooperation requires a precise understanding of where real dangers emerge and how joint actions can be operationalized under conditions of strategic competition.

Clarifying the possibility of harm

The risk posed by AI falling into the hands of malicious non-state actors is best understood not as a single-dimensional threat, but as the interaction of multiple dimensions. This response suggests several: (1) the capability uplift AI provides relative to an actor’s baseline skills; (2) the scalability and speed of harm enabled by automation and parallelization; (3) the difficulty of attribution, especially in cyber and information operations; (4) the coupling of AI with other sensitive domains, such as biology, chemistry, or autonomous weapons; and (5) the coupling of AI-associated risks with governance deficits in traditional regulatory domains such as cross-jurisdictional law enforcement, judicial coordination, and platform accountability.

Importantly, it is the combination of these dimensions—rather than any single factor—that generates truly systemic risks. For example, AI-assisted biological design tools become most dangerous when paired with weak laboratory oversight, cross-border material access, and online dissemination of tacit knowledge. Similarly, deepfakes pose limited harm in isolation, but become destabilizing when integrated into coordinated influence operations targeting fragile political or social environments. Recognizing these combinations helps avoid alarmism while sharpening governance priorities. It is not solely “capability risk” but “contextual risk” that needs our attention. The most serious AI-enabled risks tend to emerge not at the edge of technological capability, but at the intersection of networked actors and fragmented governance regimes.

From desirability to feasibility: How cooperation might work

Chan’s article rightly points to concrete areas for U.S.-China cooperation, including safety guidelines, information-sharing, and crisis communication channels. This response suggests that feasibility depends on three conditions.

First, cooperation should be problem-oriented rather than ideology- or politics-driven. It should focus on specific high-risk use cases that create immediate and scalable risks from malicious non-state actors, such as AI-enabled cyber intrusion or biosecurity misuse, particularly when directed at critical infrastructure, core financial systems, government information systems, or health care networks, or when used for pathogen design, laboratory process optimization, or circumvention of safety protocols. We can expect heightened risks and systemic societal harm especially when these scenarios are combined with weak oversight and limited regulatory regimes or capacity. To mitigate these risks, we need to create policy space for pragmatic alignment without requiring broader consensus on political values or domestic development models.

Second, cooperation should be incremental and modular. Such measures build trust through practice rather than declaration. Stepwise progression, in a context of strategic competition, allows cooperation to be anchored in practical outcomes. Limited, well-scoped initiatives, such as parallel safety practices or targeted technical exchanges, enable China and the United States, along with other stakeholders, to build confidence through repeated interactions, test assumptions, and adjust mechanisms without excessive political or security exposure. Over time, such accumulated practice can generate a foundation of working-level trust, procedure, and even culture that is difficult to achieve through comprehensive or highly ambitious agreements at the outset. Initial steps could include parallel adoption of baseline safety practices, such as model guardrails for chemical and biological outputs, limited technical exchanges among regulators and research institutions, and joint scenario-based discussions within Track 2 frameworks.

Third, cooperation must be compatible with development needs. For the United States, China, and the rest of the world, AI remains a critical driver of economic growth, public service delivery, and social inclusion. Risk mitigation frameworks that focus narrowly on restriction or containment may unintentionally constrain legitimate innovation, exacerbate global technology asymmetries, or raise entry barriers for late-developing economies. Effective cooperation and governance mechanisms should therefore emphasize risk proportionality—preventing misuse without imposing barriers that disproportionately constrain legitimate development or widen global technology gaps. Such efforts should target clearly defined high-risk applications and misuse pathways while preserving space for lawful, beneficial, and development-oriented uses of AI. From this perspective, addressing AI risks from non-state actors should prioritize strengthening institutional capacity, regulatory coherence, and technical safeguards, rather than imposing broad limitations on capability diffusion. 

Actors and networks: Beyond a simple state-non-state divide

It is important to recognize that state and non-state domains are not insulated from one another. Technology ecosystems are networked: AI models, datasets, talent flows, cloud infrastructure, and open-source tools traverse borders and institutional boundaries. Private companies, research communities, platform providers, and even individual developers often sit at the intersection of state regulation and non-state use.

This complexity suggests that effective governance cannot rely solely on state-to-state commitments; it must engage both state and non-state actors. It requires network-aware approaches that bring in industry, research institutions, international organizations, and other stakeholders. A useful parallel can be found in global public health governance, where early-warning systems, information-sharing mechanisms, and professional networks linking national authorities, research laboratories, international organizations, and private actors have helped detect and respond to transnational health threats. While not directly transferable, such models demonstrate how distributed governance networks can enhance resilience, reduce response time, and mitigate escalation risks.

Development, security, and the Global South

Finally, any discussion of AI risk governance must account for the Global South’s concerns. Many developing countries face heightened exposure to AI-enabled misinformation, cybercrime, and infrastructure vulnerabilities, while lacking the regulatory and enforcement capacity to effectively respond. In some places, even relatively small malicious groups may leverage AI-enabled capabilities to generate disproportionate social disruption, with potential spillover effects on political stability both domestically and across borders. Strengthening governance capacity and regional coordination is therefore essential to ensuring that AI contributes to development rather than exacerbating fragility.

U.S.-China cooperation should contribute to global capacity-building, not technological exclusion, by supporting inclusive governance frameworks, technical assistance, and shared best practices that help all countries manage AI risks. Such an approach better aligns security objectives with inclusive development and helps ensure that AI regulation and governance contribute to global stability without undermining the developmental aspirations of the Global South.

In this sense, cooperation on mitigating AI risks from non-state actors can serve as a bridge between security and development, reinforcing the principle that AI governance should advance human well-being while safeguarding against harm. 

Conclusion

In sum, Chan’s article provides a valuable and timely foundation for dialogue. This response affirms its central insight: cooperation between the United States and China on AI risks from non-state actors is not only necessary but achievable. That cooperation, however, needs to be grounded in realistic and contextual threat assessment, practical mechanisms, respect for development priorities, and an appreciation of the complex actor networks shaping AI’s global impact.

Zheng Lefeng

Can China-U.S. cooperation counter the proliferation of AI misuse by non-state actors?

Chan’s article offers a meaningful assessment of the security risks arising from the misuse of AI by non-state actors. He demonstrates how advanced AI dramatically lowers costs, accelerates operational speed, and reduces technical barriers to entry. He not only highlights the urgency of addressing these risks but also underscores the necessity of China-U.S. cooperation and proposes several practical avenues for mitigating AI-enabled threats from non-state actors.

Yet when examined from a broader perspective, reliance on bilateral China-U.S. cooperation alone is unlikely to be sufficient. Unlike traditional weapons of mass destruction, AI is neither a high-threshold nor a centralized technology. Its accessibility, replicability, and deep commercialization mean that the pace and scale of diffusion far exceed those of any previous strategic technology. AI security risks are therefore inherently transnational and global in nature. Effective governance may depend less on whether China and the United States can cooperate per se, and more on whether such cooperation can serve as a catalyst for a broader, multilateral non-proliferation framework to counter the AI threats from non-state actors.

Why does countering AI misuse matter? 

During the Cold War, one of the nuclear non-proliferation regime’s core objectives was to prevent nuclear weapons and related technologies from spreading to nonnuclear weapon states and non-state actors, thereby preserving global strategic stability. Although the Treaty on the Non-Proliferation of Nuclear Weapons primarily targeted state behavior, decades of practice produced an indirect yet effective governance system that constrained non-state actors’ access to nuclear capabilities.

This system rested on three pillars. First, stringent controls over critical materials and technologies significantly raised barriers to acquisition. Second, export controls, nuclear security conventions, and national obligations shifted non-proliferation responsibilities forward to the state level, requiring governments to regulate institutions, enterprises, and individuals within their jurisdictions. Third, major powers converged on a strong normative consensus that “nuclear terrorism is unacceptable,” sharply limiting the operational space available to non-state actors.

These experiences, however, cannot be directly transplanted into the AI domain. Compared with nuclear weapons, AI technologies are characterized by low entry thresholds, dual-use applications, and pervasive commercialization. Advanced models, computing power, and data resources are widely distributed across civilian sectors, often beyond effective state oversight. As Chan notes, frontier AI systems with advanced reasoning and agentic capabilities are already accessible at minimal cost, enabling non-state actors to exploit them for cyber operations, biological design, information manipulation, or autonomous targeting. In this context, AI can act as a force multiplier for hackers, terrorists, and criminal organizations, significantly increasing the scale, precision, and impact of their activities.

Of particular concern is the potential use of AI-enabled weapons and planning tools in terrorist operations. Such applications could dramatically increase the likelihood of indiscriminate violence, mass casualties, and systemic disruption, posing severe risks to global security and stability. Under these conditions, non-state actors are no longer peripheral disruptors of the international security order. Nor can the international community rely on quantitative limits or material controls to achieve meaningful non-proliferation. Instead, new institutional tools are required to address the distinctive AI security risks posed by non-state actors. AI systems can be deployed with minimal resources, rapidly adapted across contexts, and scaled without proportional increases in organizational capacity, while also exacerbating the challenges of attribution—thereby undermining traditional counterterrorism strategies that rely on monitoring physical materials, financial flows, and hierarchical networks.

Foundations for China-U.S. cooperation on AI security risks 

As Chan emphasizes, China and the United States—both global AI superpowers—possess unique capabilities and bear special responsibilities in mitigating AI-related risks from non-state actors. The majority of frontier AI models and computing infrastructure are concentrated in these two countries, and both face direct threats from AI-enabled cyberattacks, biosecurity risks, and information manipulation.

At the same time, intensifying strategic competition has strained bilateral relations. Political trust has eroded, technological rivalry has deepened, and broader geopolitical tensions have complicated cooperation. Nevertheless, compared with traditional high-politics domains such as military or economic affairs, cooperation on AI security risks from non-state actors enjoys several relative advantages. These advantages derive less from mutual restraint on national AI development than from shared incentives to prevent third-party misuse by limiting non-state actors’ access to high-risk AI capabilities and enabling inputs.

China and the United States face highly overlapping threat perceptions and share a common interest in preventing malicious AI use. Accordingly, this issue primarily concerns technical governance and risk mitigation rather than ideological confrontation or relative power balances. Therefore, the objective is not to constrain each other’s AI development, but to restrict non-state actors’ access to the most dangerous, destabilizing, and uncontrollable AI capabilities and enabling inputs. If the two sides can reach a minimal consensus on model safety standards, red lines for high-risk use cases, and safeguards against cyber and biological misuse, such joint leadership could introduce a degree of stability and predictability into the broader bilateral relationship, while also serving as a foundation for multilateral frameworks necessary to address global AI security risks.

How can China and the United States counter AI proliferation? 

Given the transnational nature of non-state actors and the decentralized distribution of AI resources, unilateral or purely bilateral approaches are vulnerable to “security arbitrage,” whereby malicious actors exploit the least regulated systems. AI-era non-proliferation should therefore avoid comprehensive technology denial regimes and instead prioritize preventing the most catastrophic uses through layered, multilateral cooperation.

First, China and the United States can revive intergovernmental dialogue on AI. In May 2024, the two countries held their first intergovernmental AI dialogue in Geneva. However, differing threat perceptions, policy approaches, and domestic political transitions in the United States prevented substantive outcomes. The 2025 meeting between the Chinese and U.S. presidents in Busan, South Korea, explicitly called for enhanced AI cooperation, signaling renewed political support. Both sides should seize the window of relative bilateral stabilization anticipated in 2026 to reestablish institutionalized dialogue focused specifically on risks from non-state actors and coordination on high-risk AI applications.

Second, bilateral understandings can be gradually extended into broader multilateral frameworks. At the 2024 meeting between the Chinese and U.S. presidents in Lima, Peru, the two countries reached a consensus on maintaining human control over decisions to use nuclear weapons. Although largely symbolic, this agreement demonstrated that even amid strategic competition, convergence is possible on issues involving unacceptable risks to humanity. Building on this logic, China and the United States should develop bilateral “red-line” understandings in the AI domain and promote them through regional and global forums to foster wider international consensus.

Third, both countries can work within the United Nations framework to advance global governance of AI misuse by non-state actors. Existing mechanisms—including the Independent International Scientific Panel on AI and the Global Dialogue on AI Governance—provide an institutional foundation. China and the United States could push to explicitly integrate non-state actor risks into these discussions, clarify states’ responsibilities for preventing AI misuse and diffusion, and align AI governance with U.N. counterterrorism and international security agendas. Over time, such efforts could contribute to a broadly representative and normatively grounded AI non-proliferation regime.

Non-state actors have become the most unpredictable variable in AI security risks, and one that can no longer be ignored. China-U.S. cooperation is a necessary—but far from sufficient—condition for addressing this challenge. In an era of rapid AI advancement and diffusion, effective governance must evolve from bilateral coordination to multilateral institutionalization, and from abstract risk awareness to concrete non-proliferation mechanisms. Whether China and the United States can seize a window to take meaningful steps in this direction will not only shape their own security but also profoundly influence the future trajectory of global AI security governance.
