Over the past several years, the United States and several of its leading allies have expressed a commitment to the responsible development and use of artificial intelligence (AI) for national security. Most notably, the U.S. Department of Defense adopted five principles for the safe and ethical application of AI in 2020, and the U.S. Defense Innovation Unit published a set of Responsible AI Guidelines in fall 2021. Meanwhile, NATO recently released its strategy for the responsible development and use of AI, and the United Kingdom’s Ministry of Defence is actively developing ethical principles of its own.
On January 31, Brookings hosted a virtual event comparing how the United States and its allies are addressing ethical considerations in their pursuit and integration of military applications of AI-enabled technologies. What processes do countries and international organizations use to define applicable AI principles? What lessons have been learned from putting those principles into practice?
Technical Director, Artificial Intelligence/Machine Learning - US Defense Innovation Unit
Deputy Head, Defence AI and Autonomy Unit - UK Ministry of Defence
Head of Innovation and Data Science - NCI Agency, NATO