

The implications of the AI boom for nonstate armed actors

An Exxon station is seen out of gas after a cyberattack crippled the biggest fuel pipeline in the country, run by Colonial Pipeline, in Washington, U.S., May 15, 2021. (REUTERS/Yuri Gripas)
Editor's note:

This piece is part of a series titled “Nonstate armed actors and illicit economies in 2024” from Brookings’s Initiative on Nonstate Armed Actors.

Since the viral launch of ChatGPT, artificial intelligence (AI), and in particular generative AI, has dominated headlines and shaped policy debates. How might this boom be exploited by criminal organizations, terrorist groups, and other harmful actors in 2024? And what can policymakers and law enforcement personnel do to stem their malicious uses?

Generative AI not only offers new fuel for disinformation campaigns, recruitment, extortion, and intelligence but also lowers the technical threshold for a range of actions that until recently were not among nonstate armed actors’ core capabilities, such as cyber espionage and cyberattacks. Beyond generative AI tools, AI-enabled technologies could also be leveraged by nonstate armed actors to predict the movements of law enforcement and military personnel and broadly improve the efficiency of their operations, among other possibilities.

As policymakers weigh legislation, it will be important to consider misuse by nonstate actors, who will benefit from the efficiency gains of AI and are unlikely to adhere to the safeguards designed to mitigate potential harms. At the national level, legislation should treat the harms of different systems or models, and not just their size, as an indicator of risk. It should also incorporate auditing processes that assess whether the risks of open sourcing a model outweigh its potential benefits. At the international level, it will be critical to find consensus around common standards with nations whose governance norms may diverge from those of the United States and other democratic partners.

Although AI can amplify and accelerate malicious actions by nonstate armed actors, it can also be leveraged by law enforcement and military personnel to better detect potential harms. Further investment in these capabilities, and collaboration between U.S. personnel and less well-equipped law enforcement agencies to responsibly deploy these tools, could help stem harmful misuse. Success, however, will depend on how effectively security personnel utilize these new technologies vis-à-vis criminal actors.

The information space and generative AI

Generative AI refers to a class of artificial intelligence systems that “learn” patterns from training data in order to generate new outputs. Although research in this area stretches back decades, advances in machine learning, more readily available data, greater computing power, and large financial investments have led to rapid progress in the quality of generative AI outputs.
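To make the basic mechanics concrete, the minimal sketch below generates text with a small, publicly available language model. It assumes the Hugging Face transformers library and the open GPT-2 checkpoint, which are illustrative choices rather than tools discussed in this piece.

```python
# A minimal sketch of generative text output, assuming the Hugging Face
# "transformers" library and the small, openly available GPT-2 checkpoint.
from transformers import pipeline

# The pretrained model has "learned" statistical patterns of language
# from its training data.
generator = pipeline("text-generation", model="gpt2")

# Given a prompt, it produces new text that continues those patterns.
prompt = "Artificial intelligence policy should"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(outputs[0]["generated_text"])
```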

Generative AI tools have the potential to bolster disinformation, recruitment, and intelligence efforts of nonstate armed actors. Generated text, video, and audio outputs devoid of linguistic errors could accelerate the creation of a “fog of war” to confuse or distract targets and inject more misleading content into the information space. This could include efforts to spoof or sow divisions among law enforcement personnel, disguise smuggling routes, or exaggerate battlefield successes, among other possibilities.

These tools could also enable better propaganda for recruitment. This content could create convincing “evidence” of wrongdoings as justification for extremist acts, allow for greater personalization of recruitment content, or fabricate the so-called “pull” factors of group participation.

Given that large language models operate somewhat like search engines on steroids, they could be used to mine vast troves of information for instructions or guidance, for example, on how to more effectively conduct ambushes or develop harmful, homemade weaponry. More broadly, predictive AI could be used to calculate how much and what types of weaponry or personnel are needed for an assault based on inputs gleaned from broader intelligence, thereby eroding the tactical advantages afforded to state actors.

Cybercrimes and generative AI

Thus far, nonstate armed actors have not been viewed as major offensive threats in cyberspace. Instead, the United States and its allies have focused on state actors. However, due to the asymmetric nature of cyber conflict, there is evidence that nonstate armed actors have begun to make inroads into this domain. For example, Hamas has used cyber espionage to glean intelligence from the Israeli government and opposition factions. Boko Haram hacked a Nigerian government database to collect private information about spy agency personnel. And drug cartels in Mexico have utilized cutting-edge spyware to track journalists and nonprofit workers.

Generative AI can accelerate cybercrimes tied to extortion, cyber-espionage, and cyberattacks by lowering the technical and personnel costs associated with these skillsets. For example, deepfake videos may be used by criminal groups to fabricate evidence of illegal or compromising behavior for extortion, including through the production of nonconsensual deepfakes.

Generative AI will also make spearphishing campaigns harder to detect, more customizable, and more scalable. In the past, linguistic errors often made these campaigns easier to spot, but generative AI tools offer new ways to craft messages devoid of mistakes. Another area ripe for abuse is voice cloning, which allows for more convincing fraud designed to steal sensitive information or commit other cybercrimes. Where synthetic audio once sounded mechanical, voice cloning can now create outputs that mimic real speech patterns or even real people.

Finally, generative AI could lower the technical threshold required to commit destabilizing attacks on critical infrastructure, like hospitals or electric grids. There is some evidence, for example, that the ransomware attack on the Colonial Pipeline was carried out by a nonstate group operating out of the former Soviet Union but not in collaboration with Russia. As such attacks become less technically challenging, traditional nonstate actors who rely on armed violence to achieve political, rather than financial, objectives may come to view them as an attractive option.

Open-sourced vs. proprietary generative AI

To address these challenges, some tech companies have implemented safeguards to curb the malicious use of their products. ChatGPT, for example, refuses to generate responses that might violate the law or promote abusive behavior, although users have repeatedly found workarounds to remove these constraints. Permeable as they are, these guardrails are possible because ChatGPT’s underlying model is proprietary: it remains closed to public scrutiny, but also to harmful adaptation. Other foundation models, by contrast, are open sourced, meaning that the underlying model (or some aspects of it) is publicly available for adaptation. This allows not only for transparency, open collaboration, and improvement but also for systems to be fine-tuned for malicious use.

Take, for example, the GPT-J language model, an open-source alternative to OpenAI’s GPT model that underpins ChatGPT. This model may have already been adapted to write code that assists with cyberattacks, identifies and exploits software vulnerabilities, and steals credit card information, among other harms. These tools — known as WormGPT and FraudGPT — are readily available for purchase on the dark web. While they may not be as sophisticated as current leading models, they have the potential to enable criminal activity and cyber conflict.
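To make the distinction concrete, the sketch below shows what “open weights” mean in practice: the full model can simply be downloaded and adapted by anyone. The library and model identifier are assumptions based on EleutherAI’s publicly hosted GPT-J checkpoint, and the example stops well short of any fine-tuning.

```python
# A minimal sketch of what "open weights" mean in practice, assuming the
# Hugging Face transformers library; the model identifier below refers to
# EleutherAI's publicly hosted GPT-J checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Anyone can download the full tokenizer and model weights...
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

# ...and then continue training (fine-tune) the model on data of their
# choosing, for beneficial or harmful purposes alike. No provider-side
# safeguards apply once the weights are in hand.
```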

Toward a workable solution

The solution, however, is not as simple as eliminating foundation models’ open sourcing. In fact, in cyberspace more broadly, open collaboration has played a critical role in identifying and thwarting cyberattacks. In the domain of AI, this openness could propel innovation and, by drawing on community input, make systems safer. Yet making models open source, and as a result highly adaptable, may also pose significant risks.

As regulators in the United States continue to weigh the merits of a more open system, it will be critical to develop a set of standards or best practices prior to the deployment of any model to assess whether the risks of releasing information about it outweigh its potential benefits. It will also be important to continue to audit these systems following their release.

In addition, because it is difficult to ascertain a foundation model’s risk in advance, U.S. and European Union policymakers have used a model’s size, measured by the computing power used to train it, as a proxy for risk. Any overarching legislation, however, should not focus solely on model size as an indicator of harm. Models such as FraudGPT would fall below any such threshold, despite their potential for significant damage.
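A back-of-the-envelope calculation shows why a compute threshold is a coarse filter. The sketch below uses a common rule of thumb from the scaling-law literature (training compute of roughly 6 times parameters times training tokens); the model sizes and the threshold value are illustrative assumptions, not figures cited in this piece.

```python
# A rough sketch of compute-as-a-risk-proxy, using the common heuristic that
# training FLOPs ~ 6 * parameters * training tokens. The model sizes and the
# threshold below are illustrative assumptions, not regulatory figures.

def training_flops(parameters: float, tokens: float) -> float:
    """Approximate total training compute in floating-point operations."""
    return 6 * parameters * tokens

# A mid-sized open model: ~7 billion parameters trained on ~2 trillion tokens.
mid_sized = training_flops(7e9, 2e12)    # ~8.4e22 FLOPs

# A frontier-scale model: ~1 trillion parameters on ~10 trillion tokens.
frontier = training_flops(1e12, 1e13)    # ~6e25 FLOPs

# An illustrative compute threshold in the range regulators have discussed.
THRESHOLD = 1e25

for name, flops in [("mid-sized model", mid_sized), ("frontier model", frontier)]:
    status = "above" if flops >= THRESHOLD else "below"
    print(f"{name}: {flops:.1e} FLOPs -> {status} threshold")

# A smaller adapted model sits orders of magnitude below the threshold,
# even though it may still be capable of significant harm.
```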

Researchers could also leverage AI to develop tools for detecting fraudulent activity, patching software vulnerabilities, exposing online recruitment networks, or even identifying criminal actors, among a myriad of possibilities. Law enforcement and military personnel around the world could incorporate these tools into their efforts to monitor and combat criminal activities such as money laundering.
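As one benign illustration of this defensive potential, the sketch below flags anomalous transactions with a standard machine-learning library. It assumes scikit-learn, and the transaction features and values are synthetic placeholders rather than anything drawn from real casework.

```python
# A minimal sketch of AI-assisted fraud detection, assuming scikit-learn.
# The transaction features and values below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic historical activity: [amount, transfers_per_hour, new_recipient_flag]
rng = np.random.default_rng(0)
normal_activity = np.column_stack([
    rng.normal(50, 15, 500),    # typical transaction amounts
    rng.poisson(2, 500),        # typical hourly frequency
    rng.integers(0, 2, 500),    # occasional new recipients
])

# Train an anomaly detector on (mostly legitimate) historical activity.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_activity)

# Score an unusual pattern, e.g., a very large transfer at high frequency.
suspicious = np.array([[5000.0, 40.0, 1.0]])
print(detector.predict(suspicious))  # -1 flags an anomaly worth human review
```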

As these systems become incorporated into security personnel’s toolkits, it will be critical that they are utilized in a way that respects human rights and maintains human control of AI systems, particularly where decisionmaking is involved. As such, where U.S. government collaboration with local security personnel is possible, proper oversight and binding contracts to ensure the responsible usage of these systems will be critical. This is particularly true in contexts where corruption is high.

These types of agreements, however, may be insufficient because AI-enabled technologies and tools are being actively developed in countries with divergent norms, including those related to data privacy and surveillance. As a result, international cooperation, and some agreement over shared standards for AI governance, will be vital. This is particularly important as non-democratic nations continue to invest in and export AI-enabled tools around the world.