Commentary
How unchecked AI could trigger a nuclear war
February 28, 2025

Chinese President Xi Jinping and U.S. President Joe Biden agreed late in 2024 that artificial intelligence (AI) should never be empowered to decide to launch a nuclear war. The groundwork for this excellent policy decision was laid over five years of discussions at the Track II U.S.-China Dialogue on Artificial Intelligence and National Security convened by the Brookings Institution and Tsinghua University’s Center for International Security and Strategy. By examining several cases from the U.S.-Soviet rivalry during the Cold War, one can see what might have happened if AI had existed in that period and been trusted with the job of deciding to launch nuclear weapons or to preempt an anticipated nuclear attack—and had been wrong in its decisionmaking. Given the prevailing ideas, doctrines, and procedures of the day, an AI system “trained” on that information (perhaps through the use of many imaginary scenarios that reflected the conventional wisdom of the era) might have decided to launch nuclear weapons, with catastrophic results.
Thankfully, in the examples I will consider—the 1962 Cuban missile crisis, the September 1983 false-alarm crisis, and the November 1983 Able Archer exercise—a human being showed greater awareness of the stakes and better common sense than the received wisdom and doctrine of the day. It would be imprudent to assume that humans will always show such restraint, and well-trained AI systems may indeed provide useful inputs to human decisions. Nevertheless, it is sobering that when facing the real possibility of nuclear Armageddon, human beings exhibited a level of thoughtfulness and compassion that machines, trained in cold-blooded “rational” ways, might not have possessed at the time and might not possess in the future.
Three close calls with nuclear Armageddon
The Cuban missile crisis began when U.S. intelligence learned that, over the course of 1962, the Soviet Union had been shipping nuclear-capable missiles and tactical nuclear weapons to Cuba in an attempt to improve the nuclear balance with the United States. Even though they did not know the extent to which the Soviets already had nuclear weapons on Cuban soil, almost all of President John F. Kennedy’s advisors, including the Joint Chiefs of Staff, recommended conventional air strikes against the Soviet positions. Such strikes could easily have led to Soviet escalation, perhaps by nearby Soviet submarine commanders (armed with nuclear-tipped torpedoes) against U.S. warships, or by Soviet ground troops in Cuba (perhaps against the U.S. base at Guantanamo Bay). Kennedy opted instead for a combination of a naval quarantine of Cuba (to prevent any more weaponry from reaching the island by sea) and quiet backdoor diplomacy with Soviet Premier Nikita Khrushchev that included offers to remove American missiles from Turkey and never to invade Cuba. The Soviets were persuaded to take this deal, withdraw their missiles and nuclear weapons from Cuba, and halt any further military buildup on the island, then run by Fidel Castro and his government.
In the September 1983 false-alarm crisis, a single Soviet watch officer, Stanislav Petrov, saw indications from sensor systems that the United States was attacking the Soviet Union with five intercontinental ballistic missiles (ICBMs) that would detonate within perhaps 20 minutes. In fact, what the sensors had picked up were reflections of sunlight from unusual cloud formations; the sensors were not “smart” enough to recognize the reflections for what they really were. Realizing that any American attack on the Soviet Union would almost certainly be much larger—since a small attack would only provoke a Soviet retaliation and have little chance of causing meaningful damage to Soviet nuclear forces—Petrov single-handedly chose not to recommend “retaliation” against the perceived American strike, thereby avoiding escalation. Whether an AI system would have reached that same prudent conclusion, when prevailing doctrine said that any incoming attack likely required immediate retaliation, is anyone’s guess. In this case, a human being improvised, relying on instinct more than formal protocol, and arrived at the correct decision when faced with the unthinkable possibility of an actual nuclear war. Petrov’s basic human essence and character seem to have saved the day.
Just a couple of months later, in November 1983, NATO undertook a major military exercise known as Able Archer during a very tense year in U.S.-Soviet relations. President Ronald Reagan had given his “Star Wars” speech the previous March, soon after declaring the Soviet Union an evil empire; then, in September, Soviet pilots shot down Korean Air Lines Flight 007 when it mistakenly strayed over Soviet territory, killing everyone on board. The United States was also preparing to station nuclear-capable Pershing II missiles in Europe, with a very short flight time to Moscow if ever launched.
So, when NATO conducted Able Archer, Soviet leaders worried that it might be used as cover to prepare a very real attack, perhaps with the aim of decapitating the Soviet leadership. At one point in the exercise, NATO forces simulated preparing for a nuclear attack by placing dummy warheads on nuclear-capable aircraft. Soviet intelligence witnessed the preparations but could not tell, of course, that the warheads were fake. Soviet leaders thus “responded” by readying nuclear-capable systems with very real warheads of their own. American intelligence in turn witnessed those preparations—but a savvy U.S. Air Force general, Leonard Perroots, realized what was occurring and recommended to superiors that the United States not respond by placing real warheads on its own systems. Whether an American response in kind would have provoked one side or the other to launch a preemptive strike is anyone’s guess; however, the proximity of the weapons to each other, and mutual fears of a decapitating surprise attack, would have made any such situation extremely fraught.
Would AI have done better?
In all three cases, AI might have elected to start a nuclear war. During the Cuban missile crisis, American officials considered the Western Hemisphere to be a sanctuary from hostile powers, and the consensus view was strongly in favor of preventing any Soviet, or communist, encroachment. The year before, the United States, through the CIA, had attempted to work with Cuban exiles to overthrow Castro. Certainly, the positioning of Soviet nuclear weapons less than 100 miles from U.S. shores violated prevailing American thinking about what was and was not acceptable. Since no sensors could confirm the absence of Soviet nuclear warheads on the island, a “cautious” approach based on the doctrine of the day would indeed have been to eliminate those Soviet capabilities before they could be made operational. Only a very human American president—one whose cautionary instincts had been heightened by witnessing combat in World War II and by watching the U.S. bureaucracy make a mess of the Bay of Pigs attack on Cuba the year before—thought otherwise. This example shows that the ban on AI starting a nuclear war should include cases in which conventional weapons might be used to strike nuclear-capable weapons or weapons systems.
With the false-alarm crisis of September 1983, it took an astute individual to realize how unlikely it was that the United States would attack with just a few warheads. Indeed, a different officer, or an AI-directed control center, might well have assessed that the five ICBMs represented an attempted decapitation strike against the Soviet leadership, or could otherwise have drawn the wrong conclusion about what was going on. The result could have been a “retaliatory” strike that was in fact a first strike, and that would likely have produced a very real American nuclear response.
With Able Archer, since American officials knew that they were only conducting an exercise, and knew that the Soviets knew as much, many would have been stunned to see the Soviets put real warheads into firing position. Many might have concluded that the Soviets were feigning a reaction to the NATO exercise as a way to dupe NATO officials into lowering their guard while the Soviet Union prepared a very real attack. AI systems trained on the prevailing doctrines and standard procedures of the day would likely have recommended, at the very least, an American nuclear alert. And since both superpowers in those days had plans for massive first strikes designed to minimize the other side’s potential for a strong second strike, a situation in which both sides had nuclear weapons on the highest levels of wartime alert could have been very dangerous.
Yes, it is possible that a very good AI might have determined that restraint was warranted in these cases, and might do so in a future crisis, perhaps even more reliably than some humans would. AI can also serve as a check on human thinking and behavior. But these examples underscore how dangerous it would be to trust a machine with the most momentous decision in human history. Xi and Biden made the right decision, and future leaders should stand by it.
Acknowledgements and disclosures
The author would like to thank Ryan Hass and Ryan McElveen for their assistance on this article.