The U.S. government should regulate AI if it wants to lead on international AI governance

Commentary | May 22, 2023

On Tuesday, May 16, the U.S. Senate held a hearing on regulating AI, with a focus on ChatGPT. Whether and how the U.S. regulates ChatGPT, and AI more generally, will help set the tone globally for AI regulation and for how to address AI risks without stifling innovation. Sam Altman, the CEO of OpenAI (the maker of ChatGPT), for instance emphasized the importance of international cooperation on issues such as AI licensing and auditing. However, more progress on AI regulation is needed in the U.S., and such progress is a precursor to more effective U.S. leadership on international AI governance.
International cooperation on AI governance
There is already a range of international forums where cooperation on international AI governance is being discussed. These include the U.S.-EU Trade and Technology Council (TTC), the Global Partnership on AI (GPAI), the Organisation for Economic Co-operation and Development (OECD), as well as the work we are doing in the Brookings/CEPS Forum for Cooperation on Artificial Intelligence (FCAI). The recent G-7 Leaders' Communiqué also underscored the need for cooperation on AI, including on the impact of LLMs such as ChatGPT. Yet the capacity of the U.S. to lead internationally on AI governance is hampered by the absence of a comprehensive approach to domestic AI regulation. The U.S. is making progress in developing domestic AI regulation, including the National Institute of Standards and Technology (NIST) AI Risk Management Framework, the Blueprint for an AI Bill of Rights, and existing laws and regulations that apply to AI systems (such as scrutiny by the Federal Trade Commission). However, more is needed. The absence of a more comprehensive approach means that the U.S. is unable to present a model for how to move forward globally with AI governance and is instead often left responding to other countries' approaches to AI regulation, the EU AI Act being a case in point.
Large language models such as ChatGPT-4
When asked a question, a large language model (LLM) can generate sophisticated responses that are increasingly indistinguishable from what a human might answer. ChatGPT is, in essence, a large-scale probability machine: given the text so far, it predicts which word (or token) is most likely to come next, based on patterns in the data it has been trained on. ChatGPT-4 is intelligent but is not conscious or sentient. Even so, ChatGPT and other LLMs are immensely powerful technologies that will have significant economic, social, and geopolitical impacts.
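To make the "probability machine" description concrete, the minimal, purely illustrative sketch below shows the core loop of next-token generation: score every token in a small vocabulary, convert the scores into probabilities, pick a likely token, append it to the context, and repeat. Every name in it (toy_vocab, score_next_token) is hypothetical; in a real LLM such as GPT-4, the scoring step is a transformer network with billions of parameters trained on vast amounts of text.

```python
# Toy illustration of next-token prediction (not a real LLM).
# All names here are hypothetical; a real model replaces score_next_token
# with a large trained transformer network.
import numpy as np

toy_vocab = ["the", "senate", "held", "a", "hearing", "on", "ai", "."]

def score_next_token(context):
    """Assign a score (logit) to every vocabulary token given the context.
    In a real LLM these scores come from the trained network."""
    rng = np.random.default_rng(abs(hash(" ".join(context))) % (2**32))
    return rng.normal(size=len(toy_vocab))

def generate(prompt, max_new_tokens=5):
    context = prompt.split()
    for _ in range(max_new_tokens):
        logits = score_next_token(context)
        probs = np.exp(logits) / np.exp(logits).sum()  # softmax: scores -> probabilities
        next_token = toy_vocab[int(np.argmax(probs))]  # pick the most likely next token
        context.append(next_token)
    return " ".join(context)

print(generate("the senate held a hearing on"))
```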
LLMs such as ChatGPT-4, as well as Google's generative AI products such as Bard and PaLM 2, have led to increased demands in the U.S. for AI regulation. This reflects heightened concerns over the risks of harm from LLMs, such as discrimination, bias, toxicity, and misinformation, as well as security and privacy risks. Some of these risks are new to LLMs, while in other cases ChatGPT amplifies existing AI risks and increases the potential for harm. For example, disinformation is already a problem across social media, but ChatGPT-4 and other LLMs can enable more targeted and effective disinformation campaigns that make it harder to determine the truthfulness of information. Similarly, privacy is already a real issue online, but ChatGPT-4 could make it easier to infer personal identities, reducing privacy protection even further.
Highlights from the Senate hearing on AI regulation
Everyone testifying at the hearing called for some regulation of AI, albeit with important differences. Both Altman and Christina Montgomery (Chief Privacy & Trust Officer at IBM) emphasized the key responsibility of businesses developing AI to mitigate harm. Montgomery outlined the internal governance processes at IBM, such as appointing a lead AI ethics official with responsibility for ensuring responsible AI and creating an AI Ethics Board to guide implementation. Altman discussed how OpenAI manages the potential risk of harm. This includes removing personal data from training sets (where feasible), conducting extensive internal testing and evaluation before releasing a model, and relying on human feedback to further improve the model after release. Additionally, OpenAI does not release the full code, weights, and training data of ChatGPT-4, instead making the model available only via an application programming interface (API), though with scope for further training and tailoring by third parties for specific use cases.
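To illustrate what API-only access means in practice, the sketch below shows roughly how a third party would have queried the model with the openai Python client as it existed around the time of the hearing (the pre-1.0 ChatCompletion interface). The prompt and parameters are illustrative only, and an API key is assumed to be available in the environment; the point is that the caller sends text to OpenAI's servers and gets text back, without ever handling the model's code, weights, or training data.

```python
# Rough sketch of API-only access to a hosted model (openai Python client,
# pre-1.0 interface circa 2023). Prompt and parameters are illustrative;
# the model's weights, code, and training data never leave OpenAI's servers.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # caller authenticates with a key

response = openai.ChatCompletion.create(
    model="gpt-4",                      # hosted model, accessed remotely
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the May 16 Senate hearing on AI."},
    ],
    temperature=0.2,                    # lower values give more deterministic output
)

print(response["choices"][0]["message"]["content"])
```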
When it came to how government should regulate AI, and LLMs in particular, Altman suggested that the government focus on safety requirements, applied both before and after testing, and work with other governments to agree on a common approach to licensing and auditing these AI models. Montgomery argued for regulation that is risk-based and use-case-focused, rather than regulation of the technology itself. This is consistent with the EU approach in its AI Act, where high-risk use cases such as using a chatbot in job interviews would be regulated more stringently than a chatbot making a flight reservation. She also called for requirements that people be informed when they are interacting with an AI system. Relatedly, Google's proposal to watermark all images generated by Google AI could help reduce the effectiveness of misinformation (the sketch after this paragraph illustrates the general idea). Gary Marcus, a professor emeritus at New York University, was the most concerned about the potential harms of ChatGPT-4 and advocated most strongly for government regulation. However, beyond warning against trusting the companies to self-regulate, his testimony focused more closely on collaboration between government and scientists in evaluating AI models.
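Google has not published the technical details of its watermarking proposal, so the toy sketch below illustrates only the general idea using the simplest possible technique: hiding a short identifier in the least significant bits of an image's pixels, where it is invisible to a viewer but recoverable by a verification tool. All function names are hypothetical, and real provenance watermarks for AI-generated images are designed to survive cropping, compression, and re-encoding, which this toy version would not.

```python
# Toy "invisible watermark": embed a bit string in the least significant bits
# of an image array. Purely illustrative; production watermarks for
# AI-generated images use far more robust techniques.
import numpy as np

def embed_watermark(image, bits):
    """Overwrite the least significant bit of the first len(bits) pixels."""
    flat = image.flatten().copy()
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | bit   # clear the LSB, then set it to the watermark bit
    return flat.reshape(image.shape)

def read_watermark(image, n_bits):
    """Recover the first n_bits least significant bits."""
    return [int(px) & 1 for px in image.flatten()[:n_bits]]

# Example: mark a random 8x8 grayscale "image" with the tag 10101010.
image = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)
tag = [1, 0, 1, 0, 1, 0, 1, 0]
marked = embed_watermark(image, tag)

print(read_watermark(marked, len(tag)))                         # -> [1, 0, 1, 0, 1, 0, 1, 0]
print(np.max(np.abs(marked.astype(int) - image.astype(int))))   # each pixel changes by at most 1
```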
Why the U.S. needs to regulate AI
U.S. regulation of the risks of harm from AI is clearly needed. However, the processes for developing AI regulation increasingly stand in contrast to the pace of AI development: AI systems are becoming more powerful and having impacts far faster than government can react. This raises the question of whether government is even capable of regulating AI effectively. Yet making progress on regulating AI will be key if the U.S. wants to lead on international cooperation in AI governance.