Glossary of artificial intelligence terms

A registered user sits in front of a screen while the language program ChatGPT, developed by the US company OpenAI, writes computer code based on artificial intelligence. (Frank Rumpenhorst/dpa via Reuters Connect)

Since 2019, the Brookings Institution and the Center for International Security and Strategy at Tsinghua University (Tsinghua CISS) have convened teams of national security technology experts from the United States and China for an unofficial Track-II dialogue on artificial intelligence (AI) in national security. The two teams identified a need to build parallel glossaries of AI terms—one developed by U.S. experts and the other developed by Chinese experts—to enable a precise understanding of each other’s intended meanings when discussing AI and national security. 

What follows is a glossary of terms that play a meaningful role in the application of AI to matters of international security, stability, and military affairs. The definitions for each term were compiled and reviewed by the U.S. experts involved in the Brookings-Tsinghua CISS dialogue. Where possible and appropriate in the view of the scholars, the definitions draw from government publications—however, in no case are definitions offered as official or authoritative. As technology, state doctrine, and practice evolve, these definitions will require further elaboration, and official policy will supersede these interim definitions. This glossary—and the parallel Chinese companion glossary—are released in the spirit of attempting, on the basis of mutual interest, to provide greater clarity on this critical issue. As this dialogue advances, both sides will aim to add relevant terms to the glossary.

The glossary developed by Chinese experts can be found here.

  • Weapons (Systems)

    Weapons (armament, ammunition, equipment, devices)

    • Ammunition/munition: “A complete device charged with explosives, propellants, pyrotechnics, initiating composition, or nuclear, biological, or chemical material for use in military operations, including demolitions. Certain suitably modified munitions can be used for training, ceremonial, or nonoperational purposes.”
    • Equipment: “In logistics, all nonexpendable items needed to outfit or equip an individual or organization.”

    Weapons systems

    Weapons systems are “[a] combination of one or more weapons with all related equipment, materials, services, personnel, and means of delivery and deployment (if applicable) required for self-sufficiency.”

    Lethal/Non-lethal weapons

    • Lethal weapons (systems): Lethal weapons are those designed to create explosive, kinetic, or other effects with the express purpose of causing definitive and permanent damage and, quite likely, human fatalities.
    • Non-lethal weapons (systems): Non-lethal weapons (NLW) are, by contrast, “explicitly designed and primarily employed to incapacitate targeted personnel or materiel immediately, while minimizing fatalities, permanent injury to personnel, and undesired damage to property in the target area or environment. … NLW are intended to have reversible effects on personnel and materiel.” It should be noted that not all non-lethal weapons are truly safe; some can cause lasting harm or even death, whether inadvertently, accidentally, or through incorrect use.
  • Intelligent/Autonomous Platforms (Systems)

    Robots

    “A powered physical system designed to be able to control its sensing and action for the purpose of accomplishing assigned tasks in the physical environment.”

    Intelligent weapons

    Intelligence in Unmanned Systems is defined as the “[p]ossession of and the ability to exercise [contextual autonomous capability] CAC in the [unmanned system] UMS.”

    Autonomous weapons (systems)

    “A weapon system that, once activated, can select and engage targets without further intervention by an operator. This includes, but is not limited to, operator-supervised autonomous weapon systems that are designed to allow operators to override operation of the weapon system, but can select and engage targets without further operator input after activation.”

    Intelligent cluster/swarm systems

    • Robotic swarm system: “[M]ultirobot systems within which robots coordinate their actions to work collectively towards the execution of a goal.”
    • Intelligent robotic swarm: “[A] network of uninhabited vehicles that autonomously coordinate their actions to accomplish a task under some degree of mission-level human direction.”

    Unmanned systems

    An “electro-mechanical system [platform], with no human operator aboard, that is able to exert its power to perform designed missions. May be mobile or stationary. Includes categories of unmanned ground vehicles (UGV), unmanned aerial vehicles (UAV), unmanned underwater vehicles (UUV), unmanned surface vehicles (USV), unattended munitions (UM), and unattended ground sensors (UGS). Missiles, rockets, and their submunitions, and artillery are not considered unmanned systems.”

    Unmanned (combat) platform

    “An air, land, surface, subsurface, or space platform that does not have the human operator physically onboard the platform.”

    Unmanned aerial platforms (drones, unmanned aerial vehicles (UAV), remotely piloted aircraft systems (RPAS))

    • Drone: “A land, sea, or air vehicle that is remotely or automatically controlled.”
    • Unmanned aircraft: An aircraft or balloon that does not carry a human operator and is capable of flight under remote control or autonomous programming.
    • Unmanned aircraft system: “That system whose components include the necessary equipment, network, and personnel to control an unmanned aircraft.”
    • Guided missile: “An unmanned vehicle moving above the surface of the Earth whose trajectory or flight path is capable of being altered by an external or internal mechanism.”

    Unmanned ground platforms (self-driving vehicles, unmanned vehicles)

    • Remotely operated platform: “An air, land, surface, subsurface, or space platform that is actively controlled by an operator who is not physically on the platform.”

    Unmanned surface platforms (unmanned surface vessels)

    Unmanned surface vessel: a seacraft that operates on the surface of the water, does not carry a human operator, and is capable of operating under remote control or autonomous programming (composed from the above definitions).

    Unmanned underwater vehicle

    “Any vehicles that are able to operate underwater without a human occupant.”

  • Intelligent/Autonomous Technologies

    Artificial intelligence

    “The ability of machines to perform tasks that normally require human intelligence – for example, recognizing patterns, learning from experience, drawing conclusions, making predictions, or taking action – whether digitally or as the smart software behind autonomous physical systems.”

    Machine learning

    “The study or the application of computer algorithms that improve automatically through experience. Machine learning algorithms build a model based on training data in order to perform a specific task, like aiding in prediction or decision-making processes, without necessarily being explicitly programmed to do so.”
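
    As an informal illustration of the definition above, and not drawn from the glossary’s sources, the sketch below fits a simple predictive model from training examples rather than from an explicitly programmed rule. The data and the linear model are hypothetical.

```python
# Illustrative only: a minimal "learning from data" sketch in pure Python.
# The training pairs and the linear model are hypothetical, not taken from
# any source cited in this glossary.

def fit_line(xs, ys):
    """Fit y = w*x + b to the training data by ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    w = num / den
    b = mean_y - w * mean_x
    return w, b

# Training data: the program is never told the underlying rule (roughly
# y = 2x + 1); it builds a model of it from the examples.
train_x = [1.0, 2.0, 3.0, 4.0]
train_y = [3.1, 4.9, 7.2, 8.8]

w, b = fit_line(train_x, train_y)
print(f"learned model: y = {w:.2f}*x + {b:.2f}")
print(f"prediction for x = 5: {w * 5 + b:.2f}")
```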

    Deep learning

    “A machine learning implementation technique that exploits large quantities of data, or feedback from interactions with a simulation or the environment, as training sets for a network with multiple hidden layers, called a deep neural network, often employing an iterative optimization technique called gradient descent, to tune large numbers of parameters that describe weights given to connections among units.”
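
    As an informal illustration of the gradient descent technique named in this definition, the sketch below tunes a single hypothetical parameter by repeatedly stepping against the gradient of a squared-error loss. The data, learning rate, and iteration count are arbitrary choices, not drawn from the glossary’s sources.

```python
# Illustrative only: the gradient-descent update referenced in the definition
# above, shown for a single parameter w that is tuned to fit toy data.
# The data, learning rate, and iteration count are arbitrary choices.

data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]  # (input, target) pairs
w = 0.0              # the parameter being tuned
learning_rate = 0.05

for step in range(200):
    # Gradient of the mean squared error of y_pred = w * x with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad  # step "downhill" against the gradient

print(f"tuned parameter w is approximately {w:.3f}")  # close to 2, the slope in the data
```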

    Neural networks

    “A deep learning architecture that is trained on data or feedback, generating outputs, calculating errors, and adjusting its internal parameters. The process is repeated possibly hundreds of thousands of times until the network achieves an acceptable level of performance. It has proved to be an effective technique for image classification, object detection, speech recognition, some kinds of game-playing, and natural language processing––problems that challenged researchers for decades. By learning from data, [deep neural networks] DNNs can solve some problems much more effectively and also solve problems that were never solvable before.”
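
    The sketch below is an informal illustration of the train, evaluate, and adjust loop this definition describes, using a tiny two-layer network on a toy problem. The architecture, data, and training settings are arbitrary choices for demonstration only.

```python
# Illustrative only: the generate-outputs / calculate-errors / adjust-parameters
# loop described above, on a tiny two-layer network learning XOR. Architecture,
# learning rate, and epoch count are arbitrary demonstration choices.
import numpy as np

rng = np.random.default_rng(0)

# Training data: XOR inputs and targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Parameters: the "weights given to connections among units"
W1 = rng.normal(size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for epoch in range(10000):                    # repeated many times...
    # Forward pass: generate outputs
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Calculate errors
    error = out - y

    # Adjust internal parameters via gradient descent (backpropagation)
    d_out = error * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

    # ...until performance is acceptable
    if np.mean(error ** 2) < 1e-3:
        break

print("predictions:", out.round(3).ravel())   # should move toward [0, 1, 1, 0]
```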

    Autonomous control (semi-autonomous, fully autonomous)

    • Autonomy: “Autonomy refers to a system’s ability to accomplish goals independently, or with minimal supervision from human operators in environments that are complex and unpredictable.”
    • Automation: Very similar to autonomy, automation enables a system to perform tasks with little or no human supervision. Automation generally refers to tasks that are narrower or more constrained than autonomy tasks but that may contribute to overall system autonomy. For example, an autonomous airborne vehicle may be capable of navigating in complex physical and military environments, while subsystems in the vehicle automate the tasks of navigating along a route, avoiding obstacles, and maintaining the stability of the vehicle.
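
    The toy sketch below illustrates the relationship described in the example above: narrow automated subsystems composed into a platform that exhibits broader autonomy. All class and method names are hypothetical, and the code models no real system.

```python
# Illustrative only: a toy sketch of automated subsystems contributing to a
# vehicle's overall autonomy. Every name here is hypothetical.

class RouteFollower:
    """Automation: a narrow, well-defined task (follow a planned route)."""
    def next_heading(self, position, waypoint):
        return "adjust heading toward waypoint"

class ObstacleAvoider:
    """Automation: another constrained task (avoid obstacles)."""
    def check(self, sensor_data):
        return "clear" if not sensor_data.get("obstacle") else "detour"

class StabilityController:
    """Automation: keep the platform stable."""
    def correct(self, attitude):
        return "apply control-surface corrections"

class AutonomousVehicle:
    """Autonomy: pursues a mission goal with minimal supervision by
    coordinating its automated subsystems in a changing environment."""
    def __init__(self):
        self.route = RouteFollower()
        self.avoidance = ObstacleAvoider()
        self.stability = StabilityController()

    def step(self, position, waypoint, sensor_data, attitude):
        # Combine the subsystems' narrow outputs into overall behavior
        return [
            self.route.next_heading(position, waypoint),
            self.avoidance.check(sensor_data),
            self.stability.correct(attitude),
        ]

vehicle = AutonomousVehicle()
print(vehicle.step(position=(0, 0), waypoint=(10, 10),
                   sensor_data={"obstacle": False}, attitude="level"))
```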

    AI lifecycle

    “The steps for managing the lifespan of an AI system: 1) Specify the system’s objective. 2) Build a model. 3) Test the AI system. 4) Deploy and maintain the AI system. 5) Engage in a feedback loop with continuous training and updates.” “Note that for data-driven AI systems, step (2) is expanded and replaced with 2.a) Acquire data to meet the objective, and 2.b) Train the AI system on the data.”
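
    As an informal aid to reading the quoted steps, the sketch below maps them onto a skeletal program structure. Every function and value is hypothetical; actual AI lifecycles vary widely by system and organization.

```python
# Illustrative only: the quoted lifecycle steps expressed as a skeletal
# program. All functions and values are hypothetical placeholders.

def specify_objective():                   # step 1
    return "classify incoming imagery into known object categories"

def acquire_data(objective):               # step 2.a (data-driven systems)
    return ["labeled example 1", "labeled example 2"]

def train_model(data):                     # step 2.b: build/train the model
    return {"parameters": "fitted on data"}

def test_system(model):                    # step 3
    return {"accuracy": 0.9}               # placeholder evaluation result

def deploy_and_maintain(model):            # step 4
    print("system deployed; monitoring in place")

def feedback_loop(model, new_data):        # step 5: continuous training/updates
    return train_model(new_data)

objective = specify_objective()
data = acquire_data(objective)
model = train_model(data)
results = test_system(model)
deploy_and_maintain(model)
model = feedback_loop(model, acquire_data(objective))
```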

  • Action and Control

    Chain of command

    “The succession of commanding officers from a superior to a subordinate through which command is exercised. Also called command channel.”

    Command and Control (System)

    • Joint doctrine defines command as the authority a military commander lawfully exercises over subordinates to assign missions. It goes on to describe command as “the art of motivating and directing people and organizations into action to accomplish missions.” Control is the commander’s direction to his forces; it is a form of communication that conveys decisions and intent. Joint Publication 1 says, “To control is to manage and direct forces and functions consistent with a commander’s command authority.” The authority to control is inherent in command, but command is not always inherent in control. Often the personnel or systems that execute control are acting on the commander’s behalf, implementing the commander’s authority, but do not hold that authority themselves. Simply put, command is the authority to tell someone what (or what not) to do, and control is the act of telling someone what (or what not) to do.
    • The function of command and control (C2) is dependent on communication. Control requires the ability to communicate. Relatedly, in many cases, C2 is also dependent on computers. Because of this dependency, some have taken to changing the acronym “C2” to “C3” (command, control, and communication) or “C4” (command, control, communication, and computers).
    • A C2 system is any system (or system of systems) that is designed to allow and improve how C2 is exercised.

    Decision point

    “A point in space and time when the commander or staff anticipates making a key decision concerning a specific course of action.”

    Meaningful human control

    “Meaningful human control has three essential components:

    1. Human operators are making informed, conscious decisions about the use of weapons.
    2. Human operators have sufficient information to ensure the lawfulness of the action they are taking, given what they know about the target, the weapon, and the context for action.
    3. The weapon is designed and tested, and human operators are properly trained, to ensure effective control over the use of the weapon.”
  • Human-Machine Relationship

    Human-machine interaction

    “The activity by which human operators engage with [unmanned systems] UMSs to achieve the mission goals.”

    Trustworthy AI (mutual understanding, mutual compliance, mutual trust)

    “Trust is established by ensuring that AI systems are cognizant of and are built to align with core values in society, and in ways which minimize harms to individuals, groups, communities, and societies at large. Defining trustworthiness in meaningful, actionable, and testable ways remains a work in progress. In part, we rely on the practice of trustworthy computing as adopted by some in computer science and system engineering fields—‘trustworthiness of a computer system such that reliance can be justifiably placed on the service it delivers (IEEE)’; ‘of an item, ability to perform as and when required.’ On the other hand, the AI user trust decision, as other human trust decisions, is a psychological process. There is currently no method to measure user trust in AI or measure what factors influence the users’ trust decisions.”

    • Transparency: “reflects the extent to which information is available to individuals about an AI system, if they are interacting – or even aware that they are interacting – with such a system. Its scope spans from design decisions and training data to model training, the structure of the model, its intended use case, and how and when deployment or end user decisions were made and by whom.”
    • Explainability: “A characteristic of an AI system in which there is provision of accompanying evidence or reasons for system output in a manner that is meaningful or understandable to individual users (as well as to developers and auditors) and reflects the system’s process for generating the output (e.g., what alternatives were considered, but not proposed, and why not).”
    • Traceable/reliable: DOD has established a set of five ethical principles for the use of AI that “apply to all DoD AI capabilities, of any scale, including AI-enabled autonomous systems, for warfighting and business applications.” These include:
      • Traceable: “The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of technology, development processes, and operational methods applicable to AI capabilities, including transparent and auditable methodologies, data sources, and design procedure and documentation.”
      • Reliable: “The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across AI capabilities’ entire life-cycle.”
    • Human judgment: DOD Directive 3000.09 requires that autonomous weapons be designed to “allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”

    Chain of Decision (target identification, target confirmation, action authorization, decision confirmation, task termination, task reset)

    • Target development: “[T]he systematic examination of potential target systems—and their components, individual targets, and even elements of targets—to determine the necessary type and duration of the action that must be exerted on each target to create an effect that is consistent with the commander’s specific objectives.”
    • Target acquisition: The detection, identification, and location of a target in sufficient detail to permit the effective employment of weapons.
    • Positive identification: “an identification derived from observation and analysis of target characteristics including visual recognition, electronic support systems, non-cooperative target recognition techniques, identification friend or foe systems, or other physics-based identification techniques.”
    • Target validation: “Validation during [the] execution [phase of the joint targeting process] includes analysis of the situation to determine if planned targets still contribute to objectives (including changes to plans and objectives), if targets are accurately located, and how planned actions will impact other friendly operations.” “Additionally, validation reviews the target’s compliance with law of war and [rules of engagement] ROE and ensures that it is not otherwise restricted.”