Commentary

Artificial intelligence is another reason for a new digital agency

A visitor wears virtual reality goggles at the World Artificial Intelligence Cannes Festival (WAICF) in Cannes, France, February 10, 2023. REUTERS/Eric Gaillard

The torrid pace of artificial intelligence (AI) development contrasts with the torpid processes for protecting the public interest affected by the technology. Private and government oversight systems that were developed to deal with the industrial revolution are no match for the AI revolution.

AI oversight requires a methodology that is as revolutionary as the technology itself.

When confronted with the challenges of industrial technology, the American people responded with new concepts such as antitrust enforcement and regulatory oversight. Thus far, policymakers have failed to address the new realities of the digital revolution. Those realities only become more daunting with AI. The response to intelligent technology cannot repeat the regulatory cruise control we have experienced to date with digital platforms. Consumer-facing digital services, whether platforms such as Google, Facebook, Microsoft, Apple, and Amazon, or AI services (led by many of the same companies), require a specialized and focused federal agency staffed by appropriately compensated experts.

What Worked Before Is Insufficient

Dusting off what worked previously in the industrial era to protect consumers, competition, and national security isn’t sufficient for the new challenges of the AI era. Specialized expertise is required to understand not just how AI technology works, but also the social, economic, and security effects that result. Determining accountability for those effects while encouraging continued development means walking a tightrope between innovation and responsibility. Relying on old statutes and regulatory structures to match the speed and expansiveness of AI is to expect the impossible, and it invites inevitable harm to the public interest when old systems cannot keep pace and private interests are allowed to determine what is acceptable behavior.

In a similar manner, stopping or slowing AI development is as futile as stopping the sun from rising. In the original information revolution that followed Gutenberg’s printing press, the Catholic Church tried and failed to slow the new technology. If the threat of eternal damnation wasn’t enough to stop the momentum of new ideas and economic opportunity then, why do we think we can stop the AI revolution now?

The response of national policy leaders to AI has been bipartisan. Senate Majority Leader Chuck Schumer has called for guidelines for review and testing of AI technology prior to its release. House Speaker Kevin McCarthy’s office notes that he took a group of legislators to MIT to learn about AI. A presidential advisory committee report concluded, “direct and intentional action is required to realize AI’s benefits and guarantee its equitable distribution across our society.” The Biden administration’s AI Bill of Rights was a start, but with rights come obligations and the need to establish the responsibilities of AI providers to protect those rights.

Federal Trade Commission (FTC) Chair Lina Khan, who has been appropriately aggressive in exercising her agency’s authorities, observed, “There is no AI exception to the laws on the books.” She is, of course, correct. The laws on the books, however, were written to deal with issues created by the industrial economy. The principal statute of Chair Khan’s own agency was written in 1914.

Beyond the obvious statutory limitations, sectoral regulation that relies on existing regulators such as the FTC, Federal Communications Commission (FCC), Securities and Exchange Commission (SEC), Consumer Financial Protection Bureau (CFPB), and others to deal with AI issues on a piecemeal, sector-by-sector basis should not be confused with establishing a national policy. Yes, these agencies will be responsible for specific effects in their specific sectors, but sectoral authority determined by independent agency action does not amount to a coherent overall AI policy.

The Commerce Department’s National Telecommunications and Information Administration (NTIA) is running a process to solicit ideas about AI oversight. It is an important step forward. But the answer is before us. What is needed is a specialized body to identify and enforce broad public interest obligations for AI companies.

New Regulatory Model

While the headline is a new agency, the real regulatory revolution must be in how that agency operates. The goal of AI oversight should be twofold: to protect the public interest and to promote AI innovation. The old top-down micromanagement that characterized industrial regulation will slow the benefits of AI innovation. In place of old utility-style micromanagement, AI oversight demands agile risk management.

Such a new regulatory paradigm would work in three parts:

  • Identification and quantification of risk: The effects of AI technology are not uniform. AI that aids search choices or online gaming has an impact far different from AI that affects personal or national security. Oversight should be bespoke, tailored to the need, rather than one-size-fits-all.
  • Behavioral codes: In lieu of rigid utility-style regulation, AI oversight must be agile and innovative. Once a risk is identified, there must be behavioral obligations designed to mitigate it. Arriving at such a code of conduct requires a new level of government-industry cooperation in which the new agency identifies an issue, convenes industry experts to work with the agency’s experts on a behavioral code, and then determines whether that output is an acceptable answer.
  • Enforcement: The new agency should have the authority to determine whether the new code is being followed and impose penalties when it is not.

Known Unknowns

The future effects of AI are unknown. What is known is what the digital era has already taught us: failing to protect the public interest amidst rapidly changing technology leads to harmful effects.

Once again, we are watching as new technology is developed and deployed with little consideration for its consequences. The time is now to establish public interest standards for this powerful new technology. Absent a greater force than the commercial incentive of those seeking to apply the technology, the history of the early digital age will repeat itself as innovators make the rules and society bears the consequences.