Nuclear deterrence has been a central element of American security policy since the Cold War began. The deterrence concept is straightforward: persuade a potential adversary that the risks and costs of his proposed action far outweigh any gains that he might hope to achieve. To make deterrence credible, the United States built up powerful strategic, theater and tactical nuclear forces that could threaten any potential aggressor with the catastrophic risks and costs of a nuclear retaliatory strike against his homeland.
During the Cold War, the primary focus of this deterrent was the Soviet Union. The Soviets built their own nuclear force targeting the United States, producing a situation of mutual deterrence, often referred to as “mutual assured destruction” or MAD. Many argue that MAD worked and kept the United States and Soviet Union from an all-out war—despite the intense political, economic and ideological competition between the two—as the horrific prospect of nuclear conflict gave both strong incentives to avoid conflict. Others note that it was too often a close thing: crises, such as those over Cuba and Berlin, brought the two countries perilously close to nuclear war.
As the United States developed a post-war alliance system, the question of extended deterrence—the ability of U.S. military forces, particularly nuclear forces, to deter attack on U.S. allies and thereby reassure them—received greater attention. Extending deterrence in a credible way proved a more complicated proposition than deterring direct attack. It was entirely credible to threaten the Soviet Union with the use of nuclear weapons in response to a Soviet attack on the United States. But how could the United States make credible the threat to use nuclear weapons against the Soviet homeland in response to a Soviet attack on U.S. allies in Europe? Or, as it was often put, how could an American president credibly persuade his Soviet counterpart that he was prepared to risk Chicago for Hamburg?