Commentary

NIST’s AI Risk Management Framework plants a flag in the AI debate

Photo: A high-performance computer used to run neural networks at the High Performance Computing Center Stuttgart (HLRS), Baden-Württemberg, Germany, February 19, 2020. (Sebastian Gollnow/dpa)

The National Institute of Standards and Technology (NIST) issued Version 1.0 of its Artificial Intelligence Risk Management Framework (AI RMF) on January 26, 2023 – a multi-tool for organizations to design and manage trustworthy and responsible artificial intelligence (AI). As the United States and other governments develop policies to address the possibilities and problems of AI, the framework adds coherence to evolving U.S. policy on AI and contributes to the ongoing international debate about AI policy and development.

First, a word about NIST, and a personal acknowledgment. NIST is part of the Department of Commerce and, as a former leader there, I am enthusiastic about its work. The agency has its origin in the Constitution’s conferral on the federal government of the power “to fix the Standard of Weights and Measures,” in the establishment of a federal Superintendent of Weights and Measures in 1836, and in the founding of NIST’s lineal predecessor, the National Bureau of Standards, in 1901. As the role of science and technology in the economy and society has grown, so has NIST’s role at the intersection of government, science and technology, and commerce. It conducts measurement science, enables standards, and operates advanced laboratories. Agency scientists have included five Nobel Prize winners in fundamental fields such as quantum physics and laser cooling. The NIST AI RMF is rooted in the agency’s culture of precise, replicable measurement for practical application.

During my tenure at Commerce, I found that NIST has something valuable to contribute to a broad range of issues, and that its culture and science produce sound, research-based, and useful public goods. I consider NIST a shining example of what government can do at its best.

What the NIST AI RMF does

Development of the AI RMF was called for by the National Artificial Intelligence Initiative Act, part of the fiscal 2021 national defense authorization. The AI RMF follows the template of previous information risk management and governance frameworks from NIST: the Cybersecurity Framework, released in 2014, and the Privacy Framework, released in 2020. Like these, it is the product of a highly consultative and iterative process, with two drafts released for public comment, multiple workshops, and other forms of public engagement. Like them, the end product is intended to be “a living document” that is “voluntary, rights-preserving, non-sector-specific, use-case agnostic,” and adaptable to organizations of all types and sizes. The AI RMF also follows these earlier frameworks in organizing implementation into “core functions,” subcategories, and implementation profiles.

AI, as a general-purpose technology, spans a wide range of technologies, data sources, and applications. AI’s breadth makes it “uniquely challenging” for information technology risk management. The AI RMF thus introduces “socio-technical” dimensions to its risk management approach, yielding a wide aperture that encompasses “societal dynamics and human behavior” across a broad range of outcomes, actors, and stakeholders, and extends consideration to “People and Planet” (page 9).

Artificial intelligence has provoked wide-ranging discussion of AI risks and benefits, concerns about bias in AI training data and outputs, and questions as to what constitutes reliable and trustworthy AI, as well as ideas for how to address these. The AI RMF provides two lenses through which to consider such questions. First, it provides a conceptual roadmap for identifying risk in the AI context, outlining general types and sources of AI-related risk and enumerating seven key characteristics of trustworthy AI (valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed).
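For organizations that want something concrete to work from, the seven characteristics can be rendered as a simple self-assessment checklist. The Python sketch below is purely illustrative: NIST prescribes no such schema, and the class, field, and method names here are my own.

```python
from dataclasses import dataclass, fields

# The seven characteristics of trustworthy AI enumerated in the AI RMF,
# rendered as a self-assessment checklist. The schema is illustrative
# only; the framework itself prescribes no particular format.
@dataclass
class TrustworthinessChecklist:
    valid_and_reliable: bool = False
    safe: bool = False
    secure_and_resilient: bool = False
    accountable_and_transparent: bool = False
    explainable_and_interpretable: bool = False
    privacy_enhanced: bool = False
    fair_with_harmful_bias_managed: bool = False

    def gaps(self) -> list[str]:
        """List the characteristics a system has not yet demonstrated."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

# Example: a system validated for accuracy and hardened against attack,
# but not yet assessed against the remaining characteristics.
checklist = TrustworthinessChecklist(valid_and_reliable=True,
                                     secure_and_resilient=True)
print(checklist.gaps())
```

A gap list of this kind is one simple way an organization might track which characteristics its systems have yet to demonstrate.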

Second, it offers a set of organizational processes and activities to assess and manage risk, linking AI’s socio-technical dimensions to stages in the lifecycle of an AI system and to the actors involved. Key steps in these processes and activities are “test, evaluation, verification, and validation (TEVV).” The processes and activities are organized into four core functions (govern, map, measure, and manage), each broken down into subcategories that describe ways to carry out the function. Unlike the previous frameworks, the AI RMF does not go further and supply informative references or implementation tiers and profiles to guide implementation more specifically.
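As a rough illustration of that structure, an organization’s implementation state might be modeled as core functions mapped to subcategory statuses. In the sketch below, the four function names come from the framework, but the subcategory wording and status values are hypothetical placeholders rather than NIST’s actual text.

```python
from enum import Enum

class Status(Enum):
    NOT_STARTED = "not started"
    IN_PROGRESS = "in progress"
    COMPLETE = "complete"

# Core functions from the AI RMF mapped to example subcategories.
# Subcategory wording here is paraphrased for illustration only.
profile = {
    "govern": {"AI risk-management policies are documented": Status.COMPLETE},
    "map": {"context and intended uses are identified": Status.IN_PROGRESS},
    "measure": {"TEVV metrics are defined and tracked": Status.IN_PROGRESS},
    "manage": {"risks are prioritized and responses planned": Status.NOT_STARTED},
}

# Surface the subcategories that still need attention.
for function, subcategories in profile.items():
    for subcategory, status in subcategories.items():
        if status is not Status.COMPLETE:
            print(f"{function}: {subcategory} ({status.value})")
```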

Instead, with the release of the AI RMF, NIST is also launching a “playbook,” a GitHub-hosted tool that will provide additional suggested actions, references, and documentation for the “govern, map, measure, and manage” functions and their subcategories. Mapping core functions to international standards has been a key feature of previous risk management frameworks but, reflecting the early stage of AI standards, the AI RMF includes only a few references to standards from the ISO/IEC international standards body as well as guidelines from the Organisation for Economic Co-operation and Development (OECD). Additional references appear in “crosswalks” included in the resource materials, which map the framework to ISO/IEC standards as well as to the proposed EU AI Act, the U.S. executive order on trustworthy AI, and the OSTP AI Bill of Rights. This is likely to change as AI standards evolve.
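Conceptually, a crosswalk is just a mapping from framework elements to related external references. The sketch below illustrates the idea only; the specific pairings are placeholders I chose for illustration, and NIST’s published crosswalk documents remain the authoritative source.

```python
# Illustrative crosswalk: AI RMF elements mapped to related external
# references. Pairings are placeholders, not NIST's published mappings.
crosswalk: dict[str, list[str]] = {
    "map: context and intended uses are identified": [
        "ISO/IEC 23894 (AI risk management guidance)",
        "OECD AI Principles",
    ],
    "measure: TEVV metrics are defined and tracked": [
        "proposed EU AI Act conformity assessment provisions",
    ],
}

for element, references in crosswalk.items():
    print(element, "->", "; ".join(references))
```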

As the title “Version 1.0” implies, the document released January 26 is not meant to be NIST’s last word on AI risk management. The agency expects to conduct a full, formal review by 2028, which could produce a Version 2.0. In the meantime, consistent with its billing as a “living document,” NIST will take comments on the playbook on a continuing basis, reviewing and integrating them semi-annually and potentially issuing interim Versions 1.1 and beyond (as it did with Version 1.1 of the Cybersecurity Framework in 2018; a Version 2.0 of that framework is in progress).

This iterative approach can help the AI RMF adapt to changes in both AI technology and understanding of the issues it presents. There is a lot more to learn about the characteristics of trustworthy AI identified in the framework document. In effect, the core functions and their more specific subcategories operate like clues to a treasure hunt: they describe steps on a path to trustworthy AI, but it is up to the organizations that apply the AI RMF to piece together their path from these clues. In time, the playbook may supply a more definite map to a destination.

The potential impact of the AI RMF

By following NIST’s prior approach with the Cybersecurity Framework, the AI RMF may build on the successful deployment and adoption of a proven risk management model. A key goal of the Cybersecurity Framework was to spur and shape the development of standards and practices in the cybersecurity field, and its broad adoption has helped drive toward this goal. The Cybersecurity Framework has been applied by a large majority of U.S. companies and has seen notable adoption outside the U.S., including by the Bank of England, Nippon Telegraph and Telephone, Siemens, Saudi Aramco, and Ernst & Young. The federal government mandates its use by federal agencies, and 20 states have done likewise. Various federal agencies (most notably the Securities and Exchange Commission) use the Cybersecurity Framework as a benchmark for sound cybersecurity practices in regulated industries.

The influence of the Cybersecurity Framework has not been confined to the U.S. It has been translated into 15 languages, and several countries have implemented it or used it as a model for similar frameworks: Italy incorporated it into its cybersecurity strategy, the UK’s cybersecurity framework incorporates the same core functions, Uruguay based its own cybersecurity framework on NIST’s and has applied it throughout government agencies, and Switzerland’s Federal Office for National Economic Supply used the framework to work with private sector organizations in critical supply sectors to carry out the national cybersecurity strategy. The Investment Industry Regulatory Organization of Canada lists it as a “foundational reference” for dealers’ best practices. The Organization of American States recommends the NIST Cybersecurity Framework as “adapt[ing] perfectly to different sectors and countries,” and the global IT professional association ISACA incorporated it into its enterprise management and governance framework.

Indeed, prior to the development of the NIST Cybersecurity Framework, I thought broad federal legislation to mandate cybersecurity was premature. That was also the prevailing view in Congress and the executive branch. But the widespread adoption of cybersecurity standards and best practices since then provides a consensus foundation for legislation; indeed, the proposed American Data Privacy and Protection Act would require certain basic cybersecurity management practices. The NIST Cybersecurity Framework helped build the necessary consensus for such measures.

The 2020 NIST Privacy Framework, on the other hand, has not had the same kind of impact as the Cybersecurity Framework. By the time it was released, both the EU’s General Data Protection Regulation and the California Consumer Privacy Act had already gone into effect, triggering extensive privacy design and compliance programs at many American companies. This well-developed landscape has left limited space for the Privacy Framework to affect privacy and data protection standards, practices, and processes.

Like its cybersecurity predecessor and unlike its privacy counterpart, the AI RMF has an early-mover advantage in a landscape that is still developing, and it could achieve similar uptake and influence on the understanding of how to ensure trustworthy AI in practice. Both the EU and Canada are in the process of legislating on AI, but adoption and entry into force are yet to come. The Council of Europe has developed a similar risk management framework for the human rights impacts of AI through the Alan Turing Institute, Singapore has developed a voluntary testing framework for trustworthy AI, and the OECD has a working group building a toolkit on trustworthy AI. All these efforts resonate with NIST’s framework and vice versa, which helps make the AI RMF relevant not only to organizations looking to manage AI systems in the U.S. but also to others around the world.

The release of the AI RMF follows the issuance in October 2022 of the Blueprint for an AI Bill of Rights (AIBOR), a set of principles to protect individuals from injury, discrimination, or loss of privacy and agency, accompanied by a “technical companion” that identifies specific ways AI systems can affect these principles and general steps to prevent adverse effects. The White House fact sheet accompanying the AIBOR catalogues activities by a variety of federal agencies to develop guidance on algorithmic discrimination and surveillance and to lay the groundwork for potential enforcement actions. These agencies encompass the Departments of Justice, Labor, Education, Health and Human Services, Veterans Affairs, and Housing and Urban Development, along with the Equal Employment Opportunity Commission, the Federal Trade Commission, and the Consumer Financial Protection Bureau. The NIST AI RMF provides a vehicle to implement the principles of the AIBOR within a variety of organizations, including those in industries whose use of AI these agencies are examining.

Neither the AI RMF nor the AIBOR is legally binding, as the artificial intelligence legislation being considered in the European Union would be. But this could be considered a feature rather than a drawback. It is part and parcel of what allows the AI RMF to be applied by “organizations of all sizes and in all sectors and throughout society” without the risk of being over-inclusive, facilitating its adoption. Similarly, the framework is easier than binding law to develop iteratively, in both its versions and its application by organizations, and it can be scaled to the organization, the use case, and the risk. Ultimately, the AI RMF relies on soft power to achieve adoption and impact.

While this flexible approach does not ensure adoption, it avoids some of the challenges the much more ambitious EU AI Act faces in the EU’s legislative process. Indeed, that regulation began even more ambitiously, with European Commission President Ursula von der Leyen’s opening declaration that the Commission should propose legislation on the ethics of AI in its first 100 days. As the Commission set out to frame legislation and gathered input, its focus narrowed toward specific use cases, leading to its “risk-based” proposal to regulate AI systems deemed “high-risk” because of their impact on the rights of individuals or on safety. While this approach is sometimes described as “horizontal,” it is less so than NIST’s AI RMF or the AIBOR: indeed, the Commission estimated in its proposal that only 5-15 percent of AI systems would be subject to the regulation.

Under the EU regulation as proposed, AI systems that fall into a high-risk category would be subject to the full menu of conformity assessment requirements. Hence, issues that affect the reach of this category (the scope of the definition of AI, and the treatment of general-purpose AI whose risk depends on how it is applied) have become sticking points in the EU legislative debate. A legally binding framework presents hard choices: precision in language, the risk of being over- or under-inclusive, and possible unintended consequences. It does not easily lend itself to bespoke solutions.

Societies and governments are just beginning the process of understanding AI. We have a lot to learn, and we are in a design-build project with respect to AI policy and development, designing the edifice at the same time as construction is under way. The AI RMF, like its framework predecessors, is process-focused. But the mapping, measuring, managing, and governing outlined in the AI RMF will, at a minimum, inform the design of the edifice for organizations, for society, and for governments.
