
The implications of artificial intelligence for national security strategy

Personnel work at the Air Force Space Command Network Operations & Security Center at Peterson Air Force Base in Colorado Springs, Colorado, July 20, 2010. (REUTERS/Rick Wilking)
Editor's note:

This report is part of “A Blueprint for the Future of AI,” a series from the Brookings Institution that analyzes the new challenges and potential policy solutions introduced by artificial intelligence and other emerging technologies.

Artificial intelligence is transforming the world, as Brookings President John Allen and Vice President Darrell West describe in their thoughtful piece on this topic. From how we educate our youth to how economies operate, there exists no shortage of arenas where experts believe artificial intelligence will have an outsized impact. National security strategy is among them.

To date, discussions of artificial intelligence’s impact on national security strategy have largely focused on the operational level of war. This includes how future wars will be influenced by new military capabilities and how those capabilities will, in turn, shape conflict on the battlefield. A substantial part of this dialogue at the operational level considers how artificial intelligence will influence ethics in national security, particularly the role played by decisionmakers: how much autonomy they retain in employing force, and how much they delegate to machines.

Much less attention, however, has been devoted to the strategic level of national security. Ultimately, national security decision-makers must grapple with the core dilemma of when, where, why, how, and under what circumstances to harness national power. If “artificial intelligence is actually the ultimate enabler,” as Michael Horowitz argues, then its impact in enabling—or to the contrary, impeding—national security strategy requires serious examination. Doing so, however, elicits more questions than answers.

Formulating national security strategy

The national security strategy trinity—composed of ends, ways, and means—is a useful framework for understanding security objectives, how they will be fulfilled, and the resources available for doing so. The framework is iterative, as shifts along any one of its vertices influence the entire effort. At a more granular level, national security strategy formulation has three primary thrusts: diagnosis, decisionmaking, and assessment. Whether setting the broad vision of U.S. “interests, goals, and objectives” or weighing specific near-term efforts to harness the “political, economic, military, and other elements of the United States’s national power,” as Congress requires of every new administration, formulating national security strategy involves some degree of energy and effort in all three.

Diagnosis focuses on understanding the strategic landscape as it exists and considering the trajectories it might take in the future. It requires deep and textured knowledge of global and regional trends. For example, how has power shifted between the United States and China over the last decade, and in what ways might it do so in the near- to mid-term? How does the Iranian Supreme Leader view his country’s opportunities and challenges in the Middle East?

Decisionmaking requires answering the colossal strategic questions of employing national power in support of national interests and values. Who should the United States fight? Why should it do so? Over which issues? How should it fight? What constitutes victory and what constitutes defeat? For example, should the United States use military force if Russian proxies attack Eastern Europe, and if so, how and where?

Assessment involves periodically revisiting previous diagnoses and decisions to ascertain how the projected strategic landscape has changed. For example, did power shift between the United States and China as expected? Did the Iranian Supreme Leader take the regional steps we anticipated? How is the U.S. military’s conflict with Russian proxies playing out?

AI’s (potential) impact

Artificial intelligence can influence national security strategy formulation in any number of ways. It provides both opportunities and challenges to decisionmakers—many of which remain unknown. An illustrative, but far from exhaustive, survey includes the following areas:

Who makes national security strategy

Going forward, artificial intelligence could influence who joins and succeeds in the national security profession, how familiar they are with what machines can and cannot tell us, and how responsible oversight is conducted. One analogy worth considering is how the national security profession has dealt with nuclear weapons. In that area, a small cohort of experts, composed of what is teasingly referred to as the priesthood, has helped foster the belief that policymakers must become deep experts and scale massive barriers to entry in order to contribute meaningfully to decisions on this topic.

Given the expected overwhelming influence of artificial intelligence on broader national security affairs, this analogy portends real problems if it comes to fruition. That is not only because a select cohort that has scaled the same barriers to entry invariably brings a limited perspective to these issues. Artificial intelligence is not just the object of a decision; it may also assist in making decisions. Simply put, artificial intelligence may help national security policymakers decide whether it should even be employed in the decision at hand. And, given the private sector’s involvement in digital technology, the Defense Department’s consideration of, and ability to execute, its decisions on “Who should the United States fight? Why should it do so? Over which issues? How should it fight?” may largely depend on its relationship with the private sector.

It is easy to say that humans—not machines—will make the important decisions in an artificial intelligence-infused national security world. Perhaps it is also lazy. Artificial intelligence will influence the management, employment, and development of military force, from swarms of weapons to better targeting of adversaries to new and different options for decisionmakers in conflict. While the Defense Department has pledged that humans will always make the ultimate decision about killing another human being, there are nevertheless serious questions about what that means if artificial intelligence can enable a weapons system that can “independently compose and select among alternative courses of action to accomplish goals based on its knowledge and understanding of the world, of itself, and of the local, dynamic context.”


Moreover, that last issue—context—is particularly relevant, since policymakers will be more inclined to empower machines to make decisions under some circumstances than others. For example, increasingly delegated command and control for a weapon seeking out a maritime platform during a high-end conventional war between the United States and China in the Taiwan Strait is wholly distinct from doing so in the context of a targeted killing in Pakistan. When speed plays an outsized role, such as in a missile defense scenario, or when connectivity is limited and a system cannot consult a human for additional guidance, decisionmaking authority may be devolved further. To put a Hollywood spin on it, national security policymakers will be more comfortable deferring decisions to R2-D2 or Short Circuit than to the Terminator. Detaching operational-level employment of artificial intelligence capabilities from the strategic level dangerously dismisses how operators will formulate military options intertwined with these tools. Above all, national security policymakers must be wary of satisfying themselves with the false antidote of ultimate control. Indeed, as a deputy assistant secretary, I oversaw a review of Defense Department policy on autonomy in weapons systems, and throughout that process I often found myself—and others—clinging to a past that may no longer resemble the future.

How national security strategy is made

The increasing capability of artificial intelligence will influence all three phases of national security strategy formulation: diagnosis, decisionmaking, and assessment. Indeed, it will likely both facilitate and impede them. By unearthing and filtering a surfeit of information, it will give decisionmakers more detail than ever before on a wide range of subjects, from permutations in the security environment to shifting adversary military capabilities and perceptions. But a powerful inertia plagues national security decisionmaking in the United States, no doubt tied to the distribution of power in a system deliberately designed to constrain action, and an abundance of information will not necessarily overcome that dynamic. Rather, it could lead to further indecision, micromanagement, or paralysis by analysis.

Bias pervades all three phases of national security strategy formulation. The war in Afghanistan provides any number of examples worth exploring. Detached from emotion, artificial intelligence can unearth trends showing that the war in Afghanistan is rife with sunk costs after 17 years of substantial U.S. military effort. It can test various hypotheses about the U.S. theory of victory in Afghanistan without suffering from confirmation bias; instead, it will highlight data that both supports and refutes contemporary hypotheses. In seeking to understand why recent attacks occurred in Afghanistan, it can plot them in a sophisticated manner that precludes saliency bias or confabulation errors. Simply put, artificial intelligence can give decisionmakers tools to keep them from “suppress[ing] alternative stories” or falsely producing “a single coherent interpretation of what is going on around us,” as Daniel Kahneman reminds us.


But of course, that would require willingness on the part of senior national security decisionmakers to use such tools accordingly. To be sure, “data analytics and algorithms are developed by and for human consumption and can only be as useful as humans make them.” And, as any experienced national security policymaker knows well, narratives develop around how to understand various dilemmas, and breaking through them can be exceedingly difficult. If artificial intelligence can help policymakers see patterns that they were unable or unwilling to grasp, then it will be particularly valuable. To take one recent (and radioactive) example: how U.S. policymakers assessed the situation in Syria in 2012. Among senior national security policymakers, there was a widespread belief that Bashar al-Assad was losing the conflict badly. This assessment was grounded in a number of factors, including anchoring to Libya as an analogy; miscalculating the stakes for Assad; overestimating the Syrian security forces who had defected and the broader opposition’s views on using force; and misunderstanding the stakes for regional powers and proxies. Artificial intelligence could have highlighted patterns that would have called many of these assessments into question. Uncertainty, however, is a feature of the system and cannot be technologically vanquished. Even with artificial intelligence and data analytic tools, U.S. policymakers will still have to make decisions about dynamic conflicts based on incomplete information.

Going forward

A few months before the September 11, 2001 attacks, Secretary of Defense Donald Rumsfeld sent President George W. Bush a memo, “Predicting the Future,” that informed his thinking as the Defense Department pulled together the Quadrennial Defense Review, or national defense strategy. In it, he showed how, throughout history, national security policymakers have completely misread the international landscape and subsequently made abysmal decisions based on the information they had at the time. The capabilities afforded by artificial intelligence may minimize the regret factor, but they will not eliminate it—and will invariably make accountability even fuzzier. Indeed, Rumsfeld’s memo ends with the following sober assessment: “I’m not sure what 2010 will look like, but I’m sure that it will be very little like we expect, so we should plan accordingly.” Similarly, as scholar Thomas Rid reminds us, “The futurists, of course, didn’t always get the future wrong, but almost always they got the speed, the scale, and the shape wrong. They continue to do so.”


National security policymakers can benefit from the tools artificial intelligence provides to stress test their diagnoses, decisions, and assessments of strategic-level dilemmas. They can both widen their aperture on national security challenges and focus on their comparative advantage: being able to “think, ask questions, and make judgements.” By helping to question assumptions—to return to the Syria example: what is the pattern and trajectory of violence, how is support for key leaders shifting, and how do regional actors value their role—these tools can facilitate policymakers’ rigorous interrogation of thorny problems.

Perhaps artificial intelligence will prove Marcus Aurelius wrong. “Never let the future disturb you. You will meet it, if you have to, with the same weapons of reason which today arm you against the present,” he counseled. Perhaps tomorrow’s national security policymakers will have new “weapons of reason,” but how they choose to employ them will make all the difference indeed.

Footnotes
    1. National security strategy requirements as listed in the Goldwater-Nichols Department of Defense Reorganization Act of 1986 (Public Law 102-496, 106 Stat. 3190, 50 U.S.C.).
    2. Nathan Leys, “Autonomous Weapon Systems and International Crises,” Strategic Studies Quarterly, Vol. 12, No. 1 (Spring 2018).
    3. In one telling example, scholars Kareem Ayoub and Kenneth Payne argue, “An intelligent AI might not, for example, make the mistake that General Westmoreland and others made in Vietnam, [and would instead] examine the underlying rationales offered for US involvement in Vietnam – including as a way of signaling wider credibility, safeguarding existing commitments, and opposing expansionist world Communism.”