It’s time to start thinking about governance of autonomous weapons

A robot is pictured as activists from the Campaign to Stop Killer Robots, a coalition of non-governmental organizations opposing lethal autonomous weapons, or so-called "killer robots," stage a protest at Brandenburg Gate in Berlin, Germany, March 21, 2019. (REUTERS/Annegret Hilse)

Emerging digital technologies open new possibilities to transform the way nations engage in war. In 2018, the U.S. Department of Defense signaled its commitment to artificial intelligence (AI) with the establishment of the Joint Artificial Intelligence Center (JAIC). It is no surprise that the U.S. is pursuing an AI-focused defense initiative: the technology promises to remove humans from dangerous tasks, create more targeted and effective defense systems, and improve operational efficiency across defense organizations.

From machine learning-driven intelligence analysis to autonomous drones, AI has the potential to significantly improve how the U.S. military pursues its goals and tasks at almost every level of operations. The U.S. military is focused on more than just operational gains, though: it is also keeping pace with growing state investment in AI from major powers such as China and Russia. This shifting landscape reveals a pressing need to develop appropriate governance structures that enhance the benefits and manage the risks of military uses of AI.

Chinese and Russian investment in AI

In 2017, China's State Council announced a plan to make the country the world leader in AI by 2030. That same year, Chinese venture capital firms spent $4.9 billion on AI companies, surpassing U.S. firms' $4.4 billion total. Over the coming decade, China's largest state-owned venture capital firm is expected to invest more than $30 billion in pursuit of the 2030 leadership goal. Similarly, Russia is making significant investments in this area, pouring nearly $719 million into AI research and development by 2021. As Vladimir Putin stated in 2017, "[AI] is the future, not only for Russia, but for all of humankind." With this speech, Putin committed Russia to a state-led initiative to develop AI competencies and compete for international superiority.

Even with new Chinese and Russian investments, the U.S. maintains an advantage in AI research and development. Recent U.S. investment in AI has been driven largely by the private sector, but with growing national interest in the emerging technology, the government is positioning itself to play a key role. In 2019, the Trump administration issued an executive order declaring a national interest in "maintaining American leadership in artificial intelligence," citing the economic, social, and national security benefits of AI. By maintaining technical excellence, the U.S. can use AI as a tool to advance national interests and ideals on a global scale. This executive order, paired with the creation of JAIC and its $208 million in funding, shows an effort to challenge Chinese and Russian initiatives to claim superiority in AI.

Governance of autonomous weapons

Although many activists have called for an outright ban on lethal autonomous weapons, a total ban would be difficult to enforce. The integration of AI as a core part of national defense strategies in the U.S., China, Russia, and other nations will also be hard to reverse. Given these circumstances, it is important to consider what meaningful international and domestic governance of autonomous weapons looks like. There are five main areas of focus where regulations could help capture the benefits of AI in defense and protect against potential harms:

  • Transparency in technology performance to allow for proper oversight and accountability:
    With more robust performance measures around testing, defense agencies could offer a meaningful degree of transparency prior to deploying a technology.
  • Maintaining human-in-the-loop systems that permit human decision making:
    Autonomous systems are subject to failure. Requiring systems that allow for human intervention would preserve human agency in the case of failures, unexpected actions, or other contingencies (a minimal sketch of this idea appears after this list).
  • Hiring and training appropriate personnel to ensure worker competencies:
    Humans engaging with emerging military technology must maintain technical literacy by keeping their knowledge current and relevant to their posts.
  • Regulations on technology transfer to prevent malicious use of advanced technology:
    How and with whom we trade weapons becomes increasingly important given the sophistication of emerging technologies. We must consider who should control data, algorithms, and hardware during technology transfer.
  • Strong prioritization of cybersecurity to protect digital intelligence, infrastructure, and services:
    The more we digitize our military systems, the more we expose them to cyber vulnerabilities. It is important that defense agencies continuously maintain the highest established cybersecurity standards.
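
To make the human-in-the-loop principle concrete, here is a minimal, purely illustrative sketch in Python. It is not drawn from any actual defense system, and every name in it (Recommendation, human_approves, execute_with_oversight) is invented for illustration: the autonomous system may recommend an action, but nothing executes without explicit approval from a human operator.

```python
# Purely illustrative sketch of a human-in-the-loop gate; all names are
# hypothetical and not drawn from any real defense system.
from dataclasses import dataclass


@dataclass
class Recommendation:
    action: str        # action proposed by the autonomous system
    confidence: float  # model confidence, in [0, 1]


def human_approves(rec: Recommendation) -> bool:
    """Present the recommendation to a human operator and await a decision."""
    answer = input(f"Approve '{rec.action}' (confidence {rec.confidence:.0%})? [y/N] ")
    return answer.strip().lower() == "y"


def execute_with_oversight(rec: Recommendation) -> None:
    # The system never acts on its own: every proposed action requires
    # explicit human approval, preserving human agency if the model errs.
    if human_approves(rec):
        print(f"Executing: {rec.action}")
    else:
        print(f"Rejected by operator: {rec.action}")


if __name__ == "__main__":
    execute_with_oversight(Recommendation(action="flag sector for review", confidence=0.83))
```

The point of the design is where the approval step sits: between recommendation and execution, so a failure or unexpected model output is caught by a person before it has any effect.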

These principles for AI deployment in defense could allow us not only to build better accountability and oversight, but also to better capture the benefits of AI while reducing unintended harms. Then we could begin asking important outcome-based questions, such as: Does this technology actually mitigate collateral damage? Are we building better national security practices, even with increased potential cyber vulnerabilities? Are humans effectively able to maintain control over important defense decisions? Through transparency and meaningful discussion, we can gain greater assurance that the impacts of AI in defense will be positive.

Amritha Jayanti contributed to this blog post.