

Senate hearing highlights AI harms and need for tougher regulation

OpenAI CEO Sam Altman testifies before a Senate Judiciary Privacy, Technology & the Law Subcommittee hearing titled 'Oversight of A.I.: Rules for Artificial Intelligence' on Capitol Hill in Washington, U.S., May 16, 2023. REUTERS/Elizabeth Frantz

Yesterday’s testimony by OpenAI CEO Sam Altman at the Senate Judiciary Subcommittee on Privacy, Technology, and the Law shows the importance of generative artificial intelligence (AI) and the sensitivity surrounding its development. Along with tools launched by other firms, ChatGPT has democratized technology by bringing tremendous computing power to search, data analysis, video and audio generation, software development, and many other areas. Generative AI has the power to alter how people find information, generate new audio and video, create new products, and respond in real time to emerging events.

At the same time, though, several issues have emerged that concern consumers, academic experts, and policymakers. Among the worrisome problems are harmful content, disinformation, political favoritism, racial bias, a lack of transparency, workforce impact, and intellectual property theft. Altman’s testimony, along with that of IBM Vice President Christina Montgomery and New York University Professor Gary Marcus, provided a chance to explain generative AI and gave legislators an opportunity to express their reservations about its impact on society, the economy, and elections.

In this post, I review the key takeaways from that Senate hearing and the next steps for the emerging technology. In general, there was widespread agreement about the risks of disinformation, biased decision-making, privacy invasions, and job losses. Most of the speakers saw these matters as quite serious and in need of meaningful action. More surprising were the often bipartisan calls for tougher regulation and greater disclosure of AI use. At a time when most Washington conversations are polarized and partisan, most of the lawmakers agreed on the need for stronger guardrails to safeguard basic human values, especially in the areas below.

Harmful ramifications

With AI becoming ubiquitous across many sectors, there was widespread agreement about AI’s capacity for harmful content and its worrisome ramifications. Generative AI brings sophisticated algorithms to ordinary consumers in the form of online prompts and templates, thereby allowing nearly anyone to create and disseminate false narratives. Many expect a tsunami of disinformation in the 2024 elections, as a close race gives a number of people and organizations incentives to create fake videos, false audio, and incriminating texts with little regard for fairness or factual accuracy.

Several senators spoke about problems of bias, loss of privacy, and potential job losses for those whose tasks can be replaced by AI algorithms. Currently, there is a lack of transparency regarding how AI operates, the training data on which it is based, and how it makes decisions. Many worried about ways AI could fuel mass manipulation and push people toward distorted ways of thinking and acting.

Subcommittee Chair Richard Blumenthal (D-CT) worried about “voice-cloning software” and dystopias that were “no longer fantasies of science fiction.” He cited the weaponization of disinformation, housing discrimination, and deepfake videos as among his top concerns. Ranking Member Josh Hawley (R-MO) built on these concerns, warning about the “power of the few” and stressing the need to strike a balance between technological innovation and moral responsibility.

Tougher disclosure

Less expected was the fact that nearly all the lawmakers and outside speakers, including leaders from industry, called for tougher disclosure requirements. Altman and other speakers said consumers should be alerted when generative AI is used to create videos, audio recordings, and other kinds of products. People need to know what is human-generated and what comes from algorithms, because that distinction might affect the way they view particular products.

Independent audits

Since AI remains in the early stages of testing, it is important to have external monitoring of model results. Large language models should be tested by third parties, with the results made available to the public. That will help people understand how AI is performing and which applications may be especially problematic. NYU’s Marcus pointed to nutrition labels as a useful model and suggested that similar labeling could be applied to AI products and services.

Risk-related restrictions

IBM’s Montgomery made the case for risk-based regulation, with different rules for different levels of risk. High-risk AI warrants tighter oversight than low-risk applications: cases that involve human safety, biomedical risk, or harms to specific populations require more in-depth analysis, monitoring, testing, and public review because the possible consequences could be quite dire.

Licensing requirements

Societies deem a few human activities risky enough to require a license, whether to operate a car, hunt, fish, or start a business. Some speakers asked whether AI licensing would similarly help mitigate possible harms and foster greater accountability among tech providers. Several senators felt that licensing requirements would be beneficial, especially for high-risk AI applications, as long as those rules did not restrict open-source models or small businesses.

Ethics review boards

IBM’s Montgomery noted that her firm has hired AI ethics experts and established an AI review board that monitors the company’s product developments and assesses their alignment with important human principles. Such mechanisms help ensure that human factors are put front and center and that AI products receive meaningful internal scrutiny.

Protection of intellectual property

Senator Marsha Blackburn (R-TN) emphasized the importance of intellectual property protection and said creators should be compensated when their music, voices, and content are used to train AI. She cited the example of using AI to create a song in the style of Garth Brooks; the result came out sounding like his song “Simple Man.”

A new regulatory agency

Running through these various suggestions was the question of whether the United States needs a new AI regulatory agency. Widespread usage of cars spurred the launch of the National Highway Traffic Safety Administration, and the advent of television and radio spawned the Federal Communications Commission.

With AI innovation spanning so many different sectors and carrying tremendous ramifications for government, business, and consumers, it may be time to develop a new agency with the technical expertise to monitor, assess, and regulate AI. Failure to do so would result in sector-specific solutions, with different AI rules for finance, health, transportation, education, housing, and employment. That “Tower of Babel” approach would be disjointed, disorganized, and ineffective in combating AI’s ills, and it would leave a number of people dissatisfied with the results.
