BPEA Spring 2024 conference


Artificial intelligence in war: Human judgment as an organizational strength and a strategic liability

Photo: Personnel work at the Air Force Space Command Network Operations & Security Center at Peterson Air Force Base in Colorado Springs, Colorado, July 20, 2010. (REUTERS/Rick Wilking)

EXECUTIVE SUMMARY

Artificial intelligence has the potential to change the conduct of war. Recent excitement about AI is driven by advances in the ability to infer predictions from data. Yet this does not necessarily mean that machines can replace human decisionmakers. The effectiveness of AI depends not only on the sophistication of the technology but also on the ways in which organizations use it for particular tasks. In cases where decision problems are well-defined and plentiful relevant data is available, it may indeed be possible for machines to replace humans. In the military context, however, such situations are rare. Military problems tend to be more ambiguous, and reliable data is sparse. Therefore, we expect AI to heighten the need for military personnel to determine which data to collect, which predictions to make, and which decisions to take.

The complementarity of machine prediction and human judgment has important implications for military organizations and strategy. If AI systems depend heavily on human values and interpretations, then even junior personnel will need to be able to make sense of political considerations and the local context to guide AI in dynamic operational situations. Yet this in turn will generate incentives for adversaries to counter or undermine the human competencies that underwrite AI-enabled military advantages. If AI becomes good at predicting the solution to a given problem, for instance, a savvy adversary will attempt to change the problem. As such, AI-enabled conflicts have the potential to drag on with ambiguous results, embroiled in controversy and plagued by crises of legitimacy. For all of these reasons, we expect that greater reliance on AI for military power will make the human element in war more important, not less.
