Commentary
The US and its allies should engage with China on AI law and policy
October 19, 2023

For all the energy and excitement surrounding new AI regulation in the U.S., the United Kingdom, and the European Union, China is first out of the box with a regulatory structure for AI, including for the new generative AI services that burst onto the scene less than a year ago with the release of ChatGPT to the public. Engagement with China’s regulators and experts on their experience developing and implementing AI law and policy would be in the best interests of Western regulators as they work to set their own policies. As China expert Matt Sheehan said in a recent Foreign Policy comment, the United States and its allies “can actually learn a lot from China’s approach to governing AI.”
China’s internet regulator, the Cyberspace Administration of China (CAC), has established a series of regulatory measures for advanced algorithms. In March 2022, it promulgated rules for recommendation algorithms that included a requirement to file information with a government registry and a user right to opt out of personalization. In November 2022, it adopted deep synthesis rules for synthetically generated images, video, audio, and text. These rules came just before ChatGPT’s public release, but the agency quickly reacted to the new issues the chatbot raised. In April 2023, it proposed new rules for developers and deployers of generative AI.
Many measures in this proposal are familiar from discussions of AI ethics, including requirements for labeling AI-generated content, non-discrimination, and the protection of privacy and intellectual property. The proposal also required providers of generative AI services to file information with the previously established algorithm registry.
A revised version was published in July 2023 and became effective in August. While the requirements are binding, the published version is still labeled “interim,” suggesting that, consistent with China’s iterative approach to AI regulation, they might be replaced by another version of AI rules. Still, Western press reports recognized that this enforceable interim rule gave China a “first-mover advantage in AI regulation.” Since then, the CAC has authorized five Chinese companies to provide generative AI services to the public: two established firms and three startups.
Of course, China’s rigid ideological control of public discourse is based on a level of government surveillance, censorship, and propaganda that is simply unacceptable in liberal democracies. This ideological control is in full view in the new AI rules. They require that AI-generated content be in accordance with Chinese law and make the providers of generative AI services responsible for ensuring this legality. In practice, this would mean that AI companies would be as aggressive and arbitrary in conforming AI content to “socialist values” as Chinese social media companies are.
Any public engagement with Chinese regulators and experts on AI law and policy risks normalizing and legitimizing this draconian speech policy, which is at the heart of the contrast between digital democracy and digital authoritarianism.
This might be especially true of engagement with the CAC. As China expert Jamie P. Horsley and reporters AJ Caughey and Shen Lu have warned, the CAC is a hybrid entity, part normal administrative agency and part Communist Party institution. Its functioning is more opaque than that of ordinary administrative entities, and its decisions might reflect Party priorities rather than professional regulatory judgments. Engagement with Chinese experts and other regulators knowledgeable about AI issues, in addition to CAC officials, would help offset this possible political bias.
China expert Samm Sacks noted at a recent Brookings event that other Chinese institutions will be heavily involved in the still-evolving development of Chinese AI regulation. In a recent Carnegie Endowment report, Matt Sheehan says that the Ministry of Science and Technology, the China Academy of Information and Communications Technology, and Tsinghua University’s Institute for AI International Governance have been and will continue to be involved in AI policy development. Thus, any engagement with China on AI law and policy should include these other institutions.
With eyes wide open to these risks, it would be overwhelmingly important for Western policymakers to see what can be learned from China’s experience regulating AI. China is open to learning from developments abroad. For instance, its privacy law, the Personal Information Protection Law, which went into effect in November 2021, is heavily indebted to the European Union’s General Data Protection Regulation, which took effect in 2018. Just last week, in response to international business concerns, the CAC proposed a major relaxation of its data export rules. As Matt Sheehan says, a “willingness to learn from a rival can be a major advantage in geopolitics.” If the U.S. and its allies want to shape AI law and policy domestically and internationally, they should do the same.
To illustrate how AI policymakers in the U.S., Europe, and the U.K. can learn from China’s experience, I want to focus on five issues that remain unresolved in Western domestic debates but where China has reached at least an interim conclusion.
The first is the CAC’s decision to have a licensing regime. Over the summer, some leading AI companies in the U.S., including OpenAI, lobbied for licensing to restrict the development of AI models to trusted vendors. Earlier this month, Microsoft’s Vice Chair and President Brad Smith testified in favor of AI licensing at a Senate Judiciary Committee hearing. In the background is a thoughtful report, first published in July by a group of AI researchers, that makes a case for licensing or supervisory regulation of “frontier models,” which could represent a threat to public safety.
But a licensing regime almost by definition limits experimentation and innovation and increases the chances that large, well-capitalized incumbents will dominate the new AI landscape. Just last week, Great Britain’s Competition and Markets Authority released a report warning about concentration in the market for foundation models, which are the general-purpose AI models that can be further trained with new data for specific purposes. The report strongly suggested that the easy availability of open-source AI systems, where new developers can access the code of AI models and develop them further, is vital to maintaining a vigorously competitive system.
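As a concrete illustration of what “further trained with new data” means in practice, the sketch below shows, in Python with the widely used Hugging Face transformers and datasets libraries, how a small developer might fine-tune an openly released foundation model on its own text corpus. It is a minimal sketch under stated assumptions: the base model name and the local data file are placeholders chosen for illustration, not references to any particular firm’s practice.

```python
# A minimal fine-tuning sketch. Assumptions: the base model name and the local
# file "domain_corpus.txt" are placeholders; a real run needs GPUs and a much
# larger, carefully curated dataset.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "openlm-research/open_llama_3b"  # any openly licensed causal language model

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # some open models ship without a pad token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# The "new data for specific purposes": a plain-text corpus from the developer's own domain.
corpus = load_dataset("text", data_files={"train": "domain_corpus.txt"})["train"]
tokenized = corpus.map(
    lambda row: tokenizer(row["text"], truncation=True, max_length=512),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="finetuned-model",
        num_train_epochs=1,
        per_device_train_batch_size=1,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("finetuned-model")  # the further-trained, purpose-specific model
```

The ease of this step, built on openly available model weights, is the kind of low barrier to entry that the CMA report suggests keeps the foundation model market contestable.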
And yet China’s CAC went in the direction of licensing. Why? Part of the answer has to be that the CAC defaults to a licensing regime to carry out its mandate to ensure the effective operation of China’s information control system. For instance, it maintains a list of news sources that can legally be shared by digital media.
Yet China has the same or maybe even a greater interest in seeing the capabilities of AI systems expanded and used throughout the country’s economy, an interest that would be undermined by a licensing regime favoring established firms. The CAC has awarded two licenses to incumbents and three to startups, which suggests no favoritism toward incumbents and some encouragement for innovative startups. Perhaps this reflects a feature of Chinese regulation, namely, that incumbents do not have the same ability to gain regulatory favor in China that they do in the U.S. and, to some degree, in the U.K. and Europe. But perhaps China has found a way to have its cake and eat it too: gaining the safety protection of a licensing regime without losing the spur to innovation that comes from competitive startups.
A second issue where China has reached a judgment and Western policymakers still struggle to define a policy is its decision not to apply a licensing regime to companies that “research, develop, and use generative AI technology, but have not provided generative AI services to the (mainland) public.” China’s initial rules applied much more broadly to “the research, development, and use of products with generative AI functions, and to the provision of services to the public.” In other words, the initial rules covered companies that were merely conducting research and development of AI models, long before any deployment to the public had taken place. This initial policy has some basis in the urgent need to manage public safety risks. Uncontrolled AI experimentation could lead to dangerous AI models escaping into the wild. A licensing regime at a very early stage would encourage safe and responsible model development.
But it is hard to see how developers of models could comply with safety rules, such as disclosures to users, before they have deployed their models to the public. Moreover, the inevitable bureaucratic delays in getting approval for R&D on AI models would hinder innovation at a time when, for reasons of international competition, speed seems paramount.
In the final set of regulations, the CAC changed its mind. It determined that the AI licensing requirement should apply only to companies providing generative AI services to the public, not to companies engaged in research and development or using generative AI for internal organizational operations. This change seems to suggest that the Chinese regulators prioritized innovation and speed over bureaucratic process. But by leaving out AI research and development, the CAC may have given up any ability to gain insight into the AI development process. Does it have any way to check whether pre-deployment conduct on the part of developers or internal organizational users is in fact safe? According to Matt Sheehan, recently released guidance from a key Chinese standards organization on how to comply with the generative AI regulation would give the regulator some insight into models that are used by public-facing companies by requiring that if a company is “building on top of a foundation model, that model must be registered” with the regulator.
The third issue where Western policymakers might learn from their Chinese counterparts is why China decided to have a single AI regulator. Some in the U.S., including Elon Musk, are pushing for a new regulatory agency to handle all AI law and policy issues. The latest version of the European Union’s AI Act seems to adopt this view as well, requiring each member country to establish a single agency to handle all AI questions. In favor of this is the consideration that centralization, rather than regulatory fragmentation, seems a sensible way to ensure a uniform approach to the risks of this promising new technology.
But AI researchers have known for a long time that the risks and benefits of AI, which, after all, is just a set of statistical procedures, arise in concrete form when the technology is actually used in practice. Companies can use generative AI in a wide variety of applications: to improve search services, to replace copywriters and graphic designers, to write headlines, to draft job descriptions or performance reviews, and to help doctors deal with unrelenting workloads, high administrative burdens, and new clinical data. How can a single regulator effectively license all these different AI companies and ensure that they comply with the rules in all these myriad lines of business?
For that reason, the U.S. has, under several administrations, assigned AI regulation to specialized agencies. If AI is used for credit granting, the Consumer Financial Protection Bureau will address it. If AI is used for consumer fraud or deception, the Federal Trade Commission is responsible. If employers use AI in hiring and promotion decisions, the Equal Employment Opportunity Commission will ensure they do not discriminate against protected classes. The National Institute of Standards and Technology has issued a non-binding AI risk management framework, but has no role as a central regulator. The White House has secured voluntary commitments from AI companies to manage AI risks and has not yet shifted to universal mandates. Delegation of regulatory authority to specialized agencies is still the official U.S. position.
That’s the approach in the U.K. as well. In March, the U.K. government issued a white paper adopting a context-based form of regulation and, in August, the House of Commons Science & Technology Committee endorsed agency-based AI regulation. The Department for Science, Innovation and Technology has established a Frontier AI Taskforce to focus on AI safety, and this group is seeking deep model access so that government researchers can engage in model evaluations. But, so far, the taskforce has no regulatory authority.
Policymakers struggling with this issue in the European Union, the U.K., and the U.S. would benefit from understanding why Chinese policymakers opted to give a single agency full regulatory power over AI. It might simply reflect the Chinese emphasis on centralized control over digital content. If so, it would make sense for China to house the regulation of AI-generated content in the same regulatory agency that controls digital content generally. But it would be helpful to know if it was more than that.
The fourth area of interest is the CAC’s decision to require providers of generative AI services to take steps to increase the accuracy of training data. In the initial proposal, the CAC said that providers of AI content generation services must be able to “ensure the data’s veracity, accuracy, objectivity, and diversity.” This requirement seemed destined to limit the development and deployment of powerful and capable AI models, since a substantial percentage of training data is less-than-accurate material from the internet. Much of the power of AI systems would be lost if training were limited only to accurate data.
In the final rule, the CAC actually stepped back. It now requires only “effective measures” to “increase the truth, accuracy, objectivity, and diversity of training data.” Why did the agency back off an absolute requirement to use accurate training data? Technical reasons might be part of the answer, since social media data and public image databases are often used for training even though they are known to contain errors. Moreover, it seems silly to require complete accuracy in training data if the degree of accuracy needed in the output, such as music, book, or movie recommendations or advertisement targeting, is not that high. But there is another plausible policy that China does not seem to have considered. China’s regulators could have required AI companies to use accurate data when accuracy in output is required by law in consequential applications. It seems impossible to ensure accurate output if the training data is not accurate. Why didn’t the Chinese agency choose this alternative?
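For readers wondering what “effective measures” short of guaranteed accuracy could look like in engineering terms, the following is a minimal, purely illustrative Python sketch of a training-data quality filter. The thresholds, field names, and source allowlist are all assumptions made for illustration, not anything drawn from the Chinese rules or from any company’s actual pipeline.

```python
# Illustrative training-data screening step: drop obviously low-quality records
# and tag higher-reliability sources, rather than certifying every record as true.
# All thresholds, field names, and the allowlist here are hypothetical.
import json

TRUSTED_DOMAINS = {"gov.example", "encyclopedia.example"}  # assumed allowlist
MIN_CHARS = 200                 # discard very short fragments
MAX_REPEATED_LINE_RATIO = 0.3   # discard boilerplate-heavy pages

def looks_usable(record: dict) -> bool:
    """Cheap heuristics standing in for an 'effective measure' to raise data quality."""
    text = record.get("text", "")
    if len(text) < MIN_CHARS:
        return False
    lines = [line for line in text.splitlines() if line.strip()]
    if lines:
        repeated_ratio = 1 - len(set(lines)) / len(lines)
        if repeated_ratio > MAX_REPEATED_LINE_RATIO:
            return False
    record["trusted_source"] = record.get("domain") in TRUSTED_DOMAINS
    return True

def filter_corpus(in_path: str, out_path: str) -> None:
    """Read a JSON-lines corpus and keep only records that pass the heuristics."""
    with open(in_path, encoding="utf-8") as src, open(out_path, "w", encoding="utf-8") as dst:
        for line in src:
            record = json.loads(line)
            if looks_usable(record):
                dst.write(json.dumps(record, ensure_ascii=False) + "\n")

if __name__ == "__main__":
    filter_corpus("raw_corpus.jsonl", "filtered_corpus.jsonl")
```

Filtering of this kind raises average data quality without pretending that every surviving record is accurate, which is roughly the distinction the final rule draws.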
The last area to explore with Chinese regulators and experts is the mandate in the new AI rule requiring AI companies to “uphold core socialist values” in providing their service. The key point is not the limitation of generated content to approved values. It is that the companies that provide services to end users, not the model developers or the end users of the service, “bear responsibility as the producer of the content generated by the product.” When these user-facing companies discover illegal content, the law mandates that “they shall promptly employ measures to address it…”
This requirement on AI providers is in keeping with the long-standing principle of Chinese internet law that social media companies and other providers of online services are responsible for ensuring that their systems are free of illegal content. The vast and undefined range of content held to be illegal under the Chinese system has led these other digital companies to be very aggressive and arbitrary in removing and blocking material on their systems. Under this new law, AI providers are likely to behave in a similar way.
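To see why such a rule pushes the deployer, rather than the model developer, toward aggressive filtering, here is a minimal illustrative Python sketch of a service wrapper that screens generated text before it reaches the end user. The blocklist, the withheld-response message, and the generator function are hypothetical stand-ins, not a description of any real system.

```python
# Illustrative output screening by a generative AI service provider: because the
# provider "bears responsibility as the producer" of generated content, it checks
# text before returning it to the user. The blocklist is a hypothetical stand-in
# for whatever content a given legal regime makes the provider liable for.
from dataclasses import dataclass
from typing import Callable

BLOCKED_PHRASES = {"example banned phrase"}  # assumed placeholder policy

@dataclass
class ScreeningResult:
    allowed: bool
    reason: str = ""

def screen(generated_text: str) -> ScreeningResult:
    """Flag output containing any blocked phrase."""
    lowered = generated_text.lower()
    for phrase in BLOCKED_PHRASES:
        if phrase in lowered:
            return ScreeningResult(False, f"matched blocked phrase: {phrase}")
    return ScreeningResult(True)

def respond(prompt: str, generate: Callable[[str], str]) -> str:
    """Wrap any text generator so disallowed output is withheld rather than delivered."""
    text = generate(prompt)
    result = screen(text)
    if not result.allowed:
        # A liable provider would also log the incident and tighten its filters.
        return "This response was withheld by the service provider."
    return text

if __name__ == "__main__":
    print(respond("hello", lambda p: "a harmless generated reply"))
```

When the range of prohibited content is vast and vaguely defined, the rational provider errs on the side of blocking, which is exactly the over-removal dynamic described above.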
Of course, the U.S. and its allies disagree with China on how much content it makes illegal. They would never allow the government to require private sector entities to promulgate “socialist values” or any other values for that matter. This divide over speech policy may be the most fundamental disagreement between the Chinese system of law and government and that prevalent in the U.S. and Europe.
But the U.S. and its allies do have a narrower range of material that is illegal: child sexual abuse material, terrorist material, fraud, defamation, and privacy invasion, to name a few. Assigning responsibility for preventing the distribution of this material is at the heart of current policy discussions in the U.S. on reforming Section 230, which provides online companies with immunity from liability for material posted by their users. Europe has had its say on this debate. In its 2022 Digital Services Act, the EU reaffirmed a knowledge standard, under which online companies have immunity for illegal material on their systems if they do not know about it and if they act expeditiously to remove it once they become aware of its presence.
The U.S. is also in the midst of a further debate on whether Section 230 immunity applies to providers of generative AI services or, if it does not, whether a new form of liability immunity should be created for these services. Neither the U.S. nor Europe has sorted out the liabilities of AI developers, AI deployers, and end users for AI-generated illegal material.
But China has sorted it out, at least initially. It seems to put all the responsibility for illegal content on the company making the AI system available to the public. Did China get it right by assigning this responsibility to those who provide generative AI services? Or should the responsibility fall further up the AI value chain, in part on the developers who trained the foundation models used in particular applications? And surely end users who break the law by evading content controls built into AI systems should bear some responsibility. The thinking of China’s regulators might inform the discussion of these vital liability issues in the U.S. and Europe.
These examples certainly seem to suggest that Western policymakers could learn from discussions with their Chinese counterparts on specific AI policy issues. But the current geopolitical tensions speak against such engagement with Chinese regulators on these issues. The disagreement over speech policy between the U.S. and its allies, on the one hand, and China, on the other, is fundamental, and it is hard to see how these speech issues could be avoided or simply put aside as an area of agreeing to disagree.
Moreover, in its 2022 Declaration for the Future of the Internet, the Biden Administration framed international tech policy as a struggle between digital authoritarianism and digital democracy. This framing makes it more difficult to engage on specific tech policy issues with countries such as China that are labeled digital authoritarians.
In her new book, Digital Empires, Columbia Law School professor Anu Bradford defends this broad-brush framing of global tech policy. She imagines a bipolar world in which the U.S. converges toward the European rights-driven model and the two democratic systems together engage in a “battle for the soul of the digital economy” against the Chinese state-driven model.
Despite these risks of legitimizing digital authoritarianism, the advantages of sharing information and approaches on AI regulation with China seem to me to be paramount. Chinese AI regulators are the farthest along in establishing a policy framework. They actually have it up and running, while Europe, the U.K., and the U.S. are still thinking about it. Why not learn from their experiment? As Samm Sacks put it at the recent Brookings event, the current Chinese AI regulatory framework is “a lab, a Petri dish” for AI law and policy in the West.
Domestic policymakers in liberal democracies struggling to establish AI policy would gain valuable insight from conversations with the Chinese officials who know the most about the strengths and weaknesses of the specific regulatory choices China has made and who could share the considerations that led to those choices.
With the right adjustments for the large and very real differences in governing systems, policymakers in Europe, the U.K., and the U.S. might learn some valuable lessons from the Chinese experience on having a licensing regime at the deployer level, run by a single agency, requiring effective measures to increase the accuracy of training data, and putting the responsibility for policing illegal content on the AI model deployer.
China seems to be open to working with the international community on AI issues. In July, Chinese foreign ministry spokesperson Wang Wenbin said, “China is willing to enhance communication and exchanges with the international community on AI security governance, promote the establishment of an international mechanism with universal participation, and form a governance framework and standards that share broad consensus.”
We need not and should not move immediately to an international governance mechanism. Most issues are still unresolved at the domestic level, and until they are closer to being decided, it might not be possible or necessary even to try to create a broad global consensus. But sharing regulatory experience could be a good way to explore whether any kind of global convergence is possible.
The time to begin this engagement is now, while the contours of AI policy in the U.S. and Europe are still forming. The United Kingdom is holding a promising global AI Safety Summit on November 1 and 2. Prime Minister Rishi Sunak has taken the first step toward AI regulatory engagement with China by inviting it to attend this international gathering. This first step should be followed by more from policymakers in the U.S. and Europe.