Commentary

How Congress can secure Biden’s AI legacy

Nicol Turner Lee and Jack Malamud

January 25, 2024


  • So far, national-level actions on AI regulation in the U.S. have come exclusively from the executive branch.
  • Without congressional support, the Biden administration’s AI policies may remain unenforceable and even temporary, subject to the whims of the next president.
  • The time has come for Congress to act on the various bipartisan bills that lawmakers have already proposed in order to continue the development of an AI governance regime and catch up to international competitors.
Senator Chuck Schumer (D-NY), the Senate Majority Leader, and Senator Todd Young (R-IN) speak to media after the conclusion of an Artificial Intelligence Insight Forum that focused on transparency, explainability, intellectual property, and copyright, at the U.S. Capitol, in Washington, D.C., on Wednesday, November 29, 2023. Credit: Graeme Sloan/Sipa USA

Over the last year and a half, we have seen a flurry of activity around regulating artificial intelligence (AI). OpenAI’s release of ChatGPT, a generative AI-powered chatbot, brought the debate about AI regulation further into public view, increasing the urgency for governments in the United States and abroad to prescribe guardrails for existing and emerging technologies. Other providers quickly joined the market with products that incorporate generative AI tools.

But despite a series of actions and proposals in the U.S., including President Biden’s recent Executive Order on AI, national guardrails on AI have stalled without congressional support. With Congress reconvened in 2024 and a national election underway, what will be done to operationalize President Biden’s guidelines for more secure and trustworthy AI, and will Congress be able to advance near-term bipartisan AI legislation?

What has the White House done so far?

In October 2022, the White House Office of Science and Technology Policy (OSTP) published a “Blueprint for an AI Bill of Rights,” outlining nonbinding guidelines for consumer protections regarding AI. Although the White House announced that it would be followed by related actions from multiple government agencies, the Blueprint lacked enforceable guidelines and deliberately carved out law enforcement and national security, two sectors in which the harms of AI can be particularly severe. As we have written previously, this made it clear that the Blueprint was only a partial step toward effective AI governance.

Three months later, in January 2023, the National Institute of Standards and Technology (NIST) published its own AI Risk Management Framework (RMF). As our colleague Cameron Kerry explained, the AI RMF provides a voluntary, risk-based roadmap for identifying and mitigating the risks of AI. It also puts forth seven characteristics of trustworthy AI: safe; secure and resilient; explainable and interpretable; privacy-enhanced; fair; accountable and transparent; and valid and reliable. To date, the NIST RMF is perhaps one of the most widely cited tools for designing and developing more ethical autonomous systems, but NIST has no rulemaking or enforcement authority to require the adoption of these principles, even though it is widely considered the natural agency to centralize AI oversight.

On October 30, 2023, one year after OSTP’s release of the Blueprint for an AI Bill of Rights, President Biden took a substantial step forward, signing an “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” This sweeping action implemented many of the Blueprint’s principles, issued guidance for federal agencies’ use and procurement of AI systems, required that developers of certain AI systems share safety test results with the government, instructed NIST to set standards for red-team testing, and provided “clear guidance to landlords, Federal benefits programs, and federal contractors to keep AI algorithms from being used to exacerbate discrimination.” The Order also called on Congress to pass “bipartisan data privacy legislation.”

Tucked into the EO is also a directive to the Office of Management and Budget (OMB) to “issue guidance to agencies to strengthen the effective and appropriate use of AI, advance AI innovation, and manage risks from AI in the Federal Government.” On November 1, 2023, OMB released a draft of its implementation guidance, which would require federal agencies to develop and publish individual strategies for the use of AI, implement impact assessments for AI systems, and designate a Chief AI Officer to manage AI oversight. Along with risk management, the guidance prioritizes the importance of developing an AI workforce by recruiting “individuals with diverse perspectives” and conducting internal training. While strengthening AI guidance within federal agencies is a step in the right direction, this type of initiative could face legal challenges if the Supreme Court decides to curtail agencies’ regulatory authority.

Where is Congress?

As these actions by the White House and other federal agencies work toward establishing a governance regime for AI, one major component is still missing: congressional action.

Of course, legislators have not been entirely asleep at the wheel. Some lawmakers have already introduced various bills to address AI, encompassing issues such as disclosure requirements, watermarking, and intellectual property. In May 2023, Senators Amy Klobuchar (D-MN), Cory Booker (D-NJ), and Michael Bennet (D-CO), along with Congresswoman Yvette Clarke (D-NY-11), introduced the REAL Political Ads Act, which would require a disclaimer on political ads that use images or video generated by AI. One month later, Representatives Ted Lieu (D-CA), Ken Buck (R-CO), and Anna Eshoo (D-CA), along with Senator Brian Schatz (D-HI), introduced the National AI Commission Act, which would create a bipartisan, blue-ribbon commission to recommend steps toward AI regulation. And in October 2023, Senators Schatz and John Kennedy (R-LA) introduced the AI Labeling Act, which would require developers to include “clear and conspicuous” notices on AI-generated content. Also in October, Senators Chris Coons (D-DE), Marsha Blackburn (R-TN), Klobuchar, and Thom Tillis (R-NC) introduced the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act, which is intended to protect artists from generative AI by prohibiting “unauthorized digital replicas of individuals in a performance.”

Meanwhile, Senator Chuck Schumer (D-NY) has hosted nine AI Insight Forums, in which industry leaders and experts have discussed topics related to AI in closed-door sessions. Dr. Turner Lee, one of the co-authors of this blog, participated in the seventh Insight Forum on transparency and intellectual property.

But the fact remains that Congress has yet to pass any legislation on AI, allowing the U.S. to cede the initiative on this issue to the European Union (EU), which recently agreed on the AI Act, the world’s most comprehensive AI legislation. With such a vacuum in federal policymaking, state and local governments have rushed to fill the gaps. In 2023, state lawmakers introduced over 440% more AI-related bills than in 2022, with states like Texas and Connecticut passing legislation, governors in California and Pennsylvania issuing executive orders, and even cities like Seattle and New York creating their own regulations governing AI. While state and local governments are establishing their own guidance, the lack of federal legislation has created a patchwork of AI rules that, by imposing different requirements across jurisdictions, will ultimately foster uncertainty for industry and consumers alike.

Another unfortunate consequence of congressional inaction is that the nonbinding guidelines in President Biden’s recent Executive Order may remain toothless and transitory, subject to reversal by the next administration. Although President Biden has ordered the development of standards and best practices for AI safety and security and secured a series of voluntary commitments from private sector companies, these standards are enforceable only on federal agencies. Furthermore, without congressional action, it remains an open question whether the current administration’s guidance, including the best practice programs, advisory committees, and similar initiatives launched under the Biden-Harris administration, will be enshrined in law or be subject to change under new presidential leadership.

What should Congress do?

The time has come for Congress to act. The Biden-Harris White House’s actions to date provide foundational building blocks for legislation that a bipartisan group of lawmakers could consider and adopt. But doing so will require lawmakers to distill which elements of this body of work can be prioritized and passed with some degree of consensus, something that has yet to happen on federal data privacy legislation. Congress should first prioritize incremental, immediate legislation that addresses what most consumers care about now: transparency in opaque AI systems and autonomous decisions that impact democracy.

On these two issues alone, the National AI Commission Act could be a good place to start, as a bipartisan platform could assemble the expertise and analyses necessary to identify the most pressing AI-related issues for regulation and prioritize them by short- and long-term goals. Alternatively, Congress could prioritize the upcoming national election by regulating the use of AI, and generative AI specifically, in elections via the bipartisan Protect Elections from Deceptive AI Act, proposed by Senators Klobuchar, Josh Hawley (R-MO), Coons, and Susan Collins (R-ME). The bill would ban the use of AI to generate “materially deceptive content falsely depicting federal candidates in political ads to influence elections.”

In this same vein, policymakers could also make headway on digital watermarks that help identify AI-generated content, working in tandem with the private sector, which has been leading their development, while taking care not to overestimate the potential of this still-maturing technology. Another option for catalyzing action on AI legislation could be for lawmakers to organize party leaders at the committee and caucus levels so that they can coordinate thinking and craft proposals capable of withstanding the scrutiny of individual political agendas and the changing priorities that the upcoming elections are bound to bring.

Policymakers have already gathered a wealth of information about various AI-related issues through the White House’s proposals, those from federal agencies, official congressional hearings, Senator Schumer’s AI Insight Forums, and state-led actions. Congress should consider prioritizing consumer confidence in AI tools while allowing some leeway for AI to be put to good use. However, letting proposals linger and failing to act will only encourage bad actors to fill the AI space with algorithms subject to few rules and inadequate human guardrails.
