Twenty years ago, I sat in a classroom on the Upper West Side of Manhattan with around 40 peers. We were concentration-shopping — the ritual through which students at Columbia University’s School of International and Public Affairs (SIPA) received presentations by faculty explaining the various courses of study available to them. This particular session was devoted to international security policy, and the content was being delivered by its then-director, the venerable Dick Betts. I remember very clearly watching Betts step behind the lectern to make the following statement with frank and unsentimental seriousness: “You may not be interested in war. But war is interested in you.” Some number of people tittered at what I presume they found to be the melodrama in the moment; the following Tuesday was September 11, 2001.
Betts’ caution reminds us that violent conflict has been a feature of human history in every age, and so we ought to understand it and prepare for it. It also alerts us that war is opportunistic: it can start and spread as a result of accidental events, seemingly mundane decisions, and a conspiracy of circumstances that arises when we are arrogant or inattentive.
The United States’ 20-year-long intervention in Afghanistan is a painful example of a failure to heed Betts’ warning in both ways. It is evidence that the United States has yet to let go of a mistaken set of beliefs about what brute force war can, and cannot, achieve. It is also difficult to review the chronology of choices the U.S. government made in Afghanistan without seeing a steady accumulation of opportunities for the war to expand in space, to change in character, and to extend over time. This is not mission creep; it is war doing what war does.
The post-Cold War environment was permissive of these misunderstandings and missteps, and relatively lenient in its imposition of costs on the United States. Those days are over. The consequences of misapplying brute force, and of neglecting or mistaking how to actively guard against war, will be harsh and unforgiving during a period of aggressive competition among powerful states.
This change is not lost on the defense community today, and considerable attention is being given to the conjoined matters of how to prepare for great power war and how to prevent it. Both concerns arise specifically from worries about China’s intentions toward Taiwan and about Russia’s designs on NATO’s eastern flank. These scenarios have brought deterrent strategies back into fashion, and momentum is gathering behind the idea that deterrence is best achieved by amassing high-tech conventional warfighting superiority.
History informs us, however, that deterrence is never as logical, straightforward, and simple in practice as it seems it should be. Machine learning, autonomy, hypersonics, and other advanced technologies doubtless should and will be brought into military use, but doing so doesn’t ensure deterrent effect, and neither is seeking to deter China and Russia the same as guarding against war. To the contrary, to the extent that the United States does not adequately account for how its own approach to integrating these technologies into its deterrent strategies can cause misperception, instability, and miscalculation, it does not minimize the opportunity for war; it creates it.
This is never more true than during periods of flux and transition, when policymakers are subjected to the unsteadying influences of broken patterns, unusual events, and an inability to discern trends or anticipate future trajectories. Periods of rapid and pronounced technological change make deterrence especially tricky, as the possible applications of new tools used in new ways are many and their implications uncertain.
All of these dynamics are not just present but pronounced today, and we should be wary of their interaction with our defense strategy. Enhancing our warfighting capabilities with advanced technologies doesn’t guarantee deterrent success, and it won’t improve our skill at distinguishing between what brute force can and can’t achieve. It might, in fact, degrade it: having a fancier hammer, after all, might tempt us to see more nails rather than fewer, or to believe the hammer can do a scalpel’s job. So too might the promise of emerging technologies lure us into a series of choices that each seem innocuous under the banner of deterrence but that accrue to create mistrust and foment international instability.
An era of great power competition will not accommodate confusion about the uses and limitations of brute force, nor will it treat gently decisions made from ego, optimism, or wishful thinking. Wise defense strategy must now emerge from worst-case scenario planning, in which strategies are not built on assumptions about what deters war but tested for the failures that might cause it. Emerging technologies cannot be pursued as panaceas; they must be chosen selectively, balancing their benefits to us against the risks and threats they pose to others and to international stability. To do otherwise is either to forget or to willfully neglect the lessons of Afghanistan and to believe, mistakenly, that we can manage the consequences of war once again being interested in us.