Updated January 5, 2024

Policy Background and Rationale

Artificial intelligence (AI) has the potential to aid Brookings staff across a variety of research and operational tasks. With generative AI applications now able to produce human-quality text, imagery, and code quickly and cheaply, researchers can use them for everything from brainstorming new ideas and refining arguments to tightening prose and suggesting titles. Likewise, communications staff may use them to draft social media posts and generate header images, and development staff may use them for lead generation and outreach drafting, among other functions. At its best, AI holds the promise of increasing Brookings’s research productivity and operational efficiency.

Yet AI also carries significant risks. Large language models (LLMs) are notoriously prone to “hallucinate,” asserting false information or citing non-existent research. In addition, many generative AI systems by default incorporate user inputs into the training data for subsequent versions of their models, which poses significant risks for sensitive data and personally identifiable information (PII). Generative AI systems may also introduce legal risks under copyright law, since many are trained on underlying datasets whose copyright licenses are unclear. They also present ethical concerns when used to generate images of real people or historical events. Finally, AI applications are known to reproduce biases in their underlying training data and thus may introduce ethical and legal risks when used in Brookings research and operations, especially in functions such as hiring, promotions, and compensation.

In short, AI tools offer exciting new ways for Brookings to carry out its work and mission, but they must be used safely, responsibly, and in compliance with existing copyright and privacy law—a challenging task given that AI has outpaced legal frameworks.

This document offers provisional guidance for how Brookings employees and affiliates can use AI applications while managing their risks responsibly. We will continue to evaluate and adapt the guidance as these tools, and the legal and regulatory frameworks around them, evolve. The guidance is intended to cover all Brookings research products, including those created by non-employee affiliates, such as nonresident fellows. It is also meant to cover all operational work products and activities at Brookings, including the use of AI for communications, development, finance, facilities, and HR tasks. Though generative AI systems are currently top of mind, the guidelines also extend to other uses of AI, such as applications for facial recognition or image classification. This guidance was developed by an internal advisory group composed of staff members representing every program, business unit, and job level within the Institution; that group will serve as a leading resource for monitoring the adequacy of these principles and the efficacy of their implementation.

Provisional Principles

As Brookings employees and affiliates use AI applications, they must uphold the following principles:

  1. Comply with existing policies. Brookings already has a wide range of policies and guidelines for its research and operations, including the Quality Review Policy, Plagiarism and Research Misconduct Policy, and Acceptable Use of Technology Resources Policy. The use of any AI application must comply with all relevant policies and guidelines already in place. Brookings staff must consult the appropriate office for guidance on best practices and policies for using AI features on institution-wide platforms.
  2. Review and validate outputs. AI models may output false, misleading, plagiarized, or biased content. Individuals and supervisors must always review and validate the output of any AI model, or have it reviewed by colleagues with relevant expertise, before including it in a Brookings work product. Individuals who use AI tools to help with a work product will still be held accountable for the quality and integrity of that product.
  3. Protect sensitive data and information. AI systems often train new models on past user inputs. Individuals must not enter any sensitive data or personally identifiable information into a generative AI tool unless they have confirmed that user inputs will be properly protected and that the work product is not considered high-risk. Until further guidance is issued, Brookings staff must consult with their supervisors about what data should be considered sensitive and which work products should be considered high-risk.
  4. Disclose appropriately. For internal work products, individuals must always disclose to their supervisor, reviewer, or editor when a work product has been created with the assistance of generative AI and whether the output has been properly reviewed. For externally facing work products, such as published research or event announcements, whether a public disclosure is warranted depends on the medium in question:
    1. Imagery. Published images created by a generative AI tool require public disclosure in line with Brookings’s existing image attribution requirements under the Institution’s website image guidelines. (In addition, Brookings communications staff must confirm that the generative AI tool used to create the image holds full copyright over the images it was trained on, or verify that the tool was trained on images in the public domain.)
    2. Text. Published research or text created with the assistance of generative AI does not require public disclosure unless an AI tool’s output is reproduced verbatim at length in a Brookings publication, in which case the tool must be cited in line with Brookings’s quality review and plagiarism policies.