Artificial intelligence, international security, and the risk of war
Commentary | November 19, 2024

Artificial intelligence (AI) will affect human society in countless ways, including in the realm of international security. Here, however, I would like to focus on one specific matter: the possibility that AI may increase the risk of war breaking out. Specifically, I offer a caution to leaders in the United States, the People’s Republic of China, and indeed around the world.
The caution I offer is based on my reading of history: wars often occur when the aggressor develops a theory of rapid and relatively easy victory. Often, such a theory rests in part on a new technology, or set of technologies, that the aggressor has developed and incorporated into its armed forces and war plans. Believing it has acquired such a capability sooner than its adversary, the aggressor attacks in the expectation of an easy win. That could happen in the future with AI. Because AI can accelerate the pace of war, and can enable attacks on command-and-control systems, it may be particularly prone to this danger; it could, for example, create the perception of a possible path to the quick decapitation of adversary leadership.
Take, for example, World War I. Benefiting from a marvelous railroad system and a world-class industrial base that had churned out vast quantities of weaponry, Germany became overconfident. Its infamous Schlieffen Plan, which called for rapidly defeating France and then shifting forces eastward to fight Russia, was the product of that overconfidence. Political and military leaders convinced themselves that they had found a formula for the rapid conquest of the enemy. Four years and 10 million fatalities later, Germany had lost the war, and the country itself nearly collapsed.
Of course, some wars start without any expectation of quick success on the part of the initiator. Some causes are just and worth the sacrifice. And some pathological leaders, like Adolf Hitler of Nazi Germany in the 1930s, are willing to fight hard and long because they lack ethics, compassion, and decency. But after his success with the blitzkrieg against France, even Hitler developed an unrealistic expectation of rapid success before attacking the Soviet Union in the summer of 1941 (which is why he sent German troops deep into Russia without winter coats). By that point in World War II, he had developed such pride in German technology and martial prowess that he thought he could defeat virtually anyone.
Hitler wasn’t the only one who was wrong about the possibility of rapid conquest during World War II. Airpower theorists like the Italian Giulio Douhet and the American Billy Mitchell believed that strategic bombing (with conventional or nuclear weapons) would quickly terrorize a population into submission. That did not happen when Germany bombed Britain in the Blitz, or when the United States and its allies bombed first Germany and then Japan.
Or consider the Korean War, in which first the North Koreans were overconfident, and then the Americans were; maybe the Chinese were too, though I’m not sure. And it happened again in Vietnam, where the United States expected its high-technology edge, including helicopter mobility and strategic bombardment, to defeat the Viet Cong. We were wrong.
The same kind of thing happened again when the United States invaded Iraq in 2003 to overthrow Saddam Hussein: the Bush administration hoped that “shock and awe” from precision-strike technologies foreshadowed a “cakewalk.” That did not occur, of course, even if the first phase of the war, the overthrow of Saddam, went quickly.
Vladimir Putin made a similar mistake when invading Ukraine in 2022, perhaps based on the false expectation that it would go almost as easily as his clever 2014 seizure of Crimea, accomplished with speed, boldness, and “little green men.”
Yes, rapid victories do sometimes happen. But often they don’t, and by then it is too late to undo the war or pretend it never happened; such wars usually wind up dragging on for a long time. The great Chinese strategist Sun Tzu may have been right to argue that winning without fighting, or with only minimal fighting, is the best kind of victory. But most nations are not able to achieve such happy outcomes in war.
We need to remind ourselves that a partial advantage in AI, whichever country believes it has attained one, will not make future war easy or rapid victory predictable. That is not usually the nature of war.