Commentary

California charts the future of AI

Darrell M. West, Senior Fellow - Center for Technology Innovation, Douglas Dillon Chair in Governmental Studies

September 12, 2023


  • On September 6, California Governor Gavin Newsom signed an executive order that brings his state to the forefront of artificial intelligence (AI) planning.
  • The order directs state agencies to adopt a proactive approach to AI regulation and to leverage the state’s procurement power to promote trustworthy AI principles.
  • The order also elevates fairness and equity as important principles in AI products and services and prioritizes workforce development as a crucial component of AI deployment.
A driverless taxi operated by Cruise is seen on the road in San Francisco, California. (Photo by Michael Ho Wai Lee / SOPA Images / Sipa USA.)

California has long been a trendsetter on public policy. Over the years, it has pioneered new approaches to taxes, carbon neutrality, the gig economy, and privacy protection, among other issues. In both liberal and conservative directions, state leaders from Ronald Reagan to Jerry Brown have developed policies that often became bellwethers for other jurisdictions and had impacts far beyond the Golden State.

It therefore comes as little surprise that the state has moved to the forefront of artificial intelligence (AI) planning through Governor Gavin Newsom’s new executive order. Among its key features are creative efforts to rethink regulation, government procurement, ethics, and workforce development. The order represents a thorough, comprehensive approach that confronts the need for AI guardrails.

Regulation

The executive order notes that California is home to 35 of the top 50 AI companies in the world. As such, it recognizes the state has a unique responsibility to promote innovation, as well as responsible AI policies and regulations. It notes that AI algorithms have the potential to unleash remarkable benefits for the public good while also posing substantial risks from bad actors, unintended consequences, and threats to democratic and legal processes.

In line with calls from many tech leaders, the document directs state agencies to examine the impact on vulnerable communities, the threats created by high-risk applications, and the challenges posed by new developments in generative AI. It is consistent with the nation’s move away from the libertarian approach that has dominated U.S. tech policies for decades in favor of responsible regulation that improves transparency, privacy, safety, and equity. It imposes specific 2024 deadlines to generate concrete and actionable recommendations to guide future AI development. The order likely will lead to new rules, processes, and principles that will bind the tech companies that operate within California and sell products to government agencies.

Government procurement

Recognizing the tremendous purchasing power of California’s $310.8 billion budget, the executive order directs the state’s Operations Agency, Department of General Services, Department of Technology, and Cybersecurity Integration Center to find ways to reform public sector procurement so that agencies consider the uses, risks, and training needed to improve AI purchasing. In particular, it notes the principles outlined in the White House’s Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology’s AI Risk Management Framework and says state rules should address “safety, algorithmic discrimination, data privacy, and notice of when materials are generated by GenAI.”

The importance of focusing on government procurement is that state leaders recognize tech companies don’t want AI systems for California that differ substantially from those used in Illinois, New York, Texas, or Florida. Firms cannot maintain 50 different sets of products for the various American states. In the absence of clear national guidelines, the standards that California imposes will likely become de facto national or even global standards due to its large purchasing power. Firms that rely on state agencies as customers will likely endeavor to meet any specifications set by those agencies across all their products and services, which means that a single state’s consumer-friendly procurement standards could have outsized ramifications elsewhere.

Ethics

With new advances in generative AI, machine learning, data analytics, and computer vision, it is crucial to develop ethical standards that guide AI development and ensure such products protect privacy, safety, and transparency. Although there has been some regulatory oversight of mergers and consumer-related activities, private companies have had considerable leeway to develop the products they wanted, regardless of the risks to consumers, competitors, or national security. But because of AI’s broad ramifications for communities, society, and commerce, the California rules now lead state agencies to consider how digital products affect personal privacy, racial or gender bias, and the equity of their operations and decision-making. The order elevates fairness and equity as important principles in AI products and services and puts firms on notice that their algorithms must be fair and equitable in what they do.

Workforce development

One of the important features of this executive order is its recognition that AI likely will have substantial consequences for the workforce. While there are legitimate debates over how many jobs could be lost, how work will be transformed, and what types of new jobs will be created, the order sets a benchmark for states not to let the technology leave large segments of the population behind in the AI revolution. Whatever the impact on society and the economy, the order proposes that analyses of job consequences be grounded in data on what possible losses mean for job retraining, adult education, and workforce development.

People likely will need new support for job retraining and lifelong learning that helps workers reskill and upskill for the digital era. It is no longer sufficient to invest mainly in education for people up to age 25; workers also need to develop new job skills at ages 30, 40, 50, and 60. My sense is that people will have to learn new skills throughout their adult lives, and government programs must understand that reality and put policies and resources in place that facilitate adult education.

The probability of action

In summary, Newsom’s new executive order is not likely to produce reports and recommendations that gather dust on a shelf. The order creates timelines, directives, and principles that stand a good chance of implementation by California state agencies and elsewhere. With large Democratic majorities in the State Assembly, legislators may even overcome the polarization and stalemate that limit action at the national level. If California agencies take this order seriously, the state will move toward a proactive approach to AI that differs completely from the reactive approach that characterized government action on social media years ago. Policy inaction in the early days of social media platforms helped create today’s dystopia of extremism, polarization, and disinformation that continues to plague the U.S. and the world. Let’s hope state and national policymakers don’t make the same mistake on AI. Leaders must figure out appropriate goals and how to reach them so that society can capture the benefits of artificial intelligence while avoiding the serious risks we can already see.