As an emerging technology, artificial intelligence is pushing regulatory and social boundaries in every corner of the globe. The pace of these changes will stress the ability of public governing institutions at all levels to respond effectively. Their traditional toolkit, the creation or modification of regulations (also known as “hard law”), requires ample time and bureaucratic procedure to function properly. As a result, governments are unable to swiftly address the issues created by AI. An alternative for managing these effects is “soft law,” defined as a program that creates substantive expectations that are not directly enforceable by government. As soft law grows in popularity as a tool to govern AI systems, it is imperative that organizations gain a better understanding of its current deployments and best practices—a goal we aim to facilitate with the launch of a new database documenting these tools.
Why AI soft law matters
The governance of emerging technologies has relied on soft law for decades. Entities such as governments, private-sector firms, and non-governmental organizations have all attempted to address emerging technology issues through principles, guidelines, recommendations, private standards, and best practices, among other instruments. Compared to their hard law counterparts, soft law programs are more flexible and adaptable, and any organization can create or adopt one. Once created, programs can be adapted reactively or proactively to address new conditions. Moreover, they are not legally tied to specific jurisdictions, so they can easily apply internationally. Soft law can serve a variety of objectives: it can complement or substitute for hard law, operate as a main governance tool, or act as a back-up option. For all these reasons, soft law has become the most common form of AI governance.
The main weakness of soft law governance tools is their lack of enforcement. In place of enforcement mechanisms, the proper implementation of soft law relies on aligning the incentives of a program’s stakeholders. Unless these incentives are clearly defined and well understood, the effectiveness and credibility of soft law will be questioned. To prevent the creation of soft law programs incapable of managing the risks of AI, stakeholders should consider including implementation mechanisms and appropriate incentives.
What we found
As AI methods and applications have proliferated, so too have soft law governance mechanisms to oversee them. To build on efforts to document soft law AI governance, the Center for Law, Science and Innovation at Arizona State University is launching a database with the largest compilation, to date, of soft law programs governing this technology. The data, available here, offer organizations and individuals interested in the soft law governance of AI a reference library to compare and contrast existing initiatives or draw inspiration for the creation of new ones.
Using a scoping review, the project identified 634 AI soft law programs published between 2001 and 2019 and labeled them using up to 107 variables and themes. The data reveal several interesting trends. Among them, we found that AI soft law is a relatively recent phenomenon: about 90% of programs were created between 2017 and 2019. In terms of origin, higher-income countries and regions, such as the United States, the United Kingdom, and Europe, were most likely to host the creation of these instruments.
In the process of identifying stakeholders responsible for generating AI soft law, we found that government institutions have a prominent role in employing these programs. Specifically, more than a third (36%) were created by the public sector, evidence that usage of these tools is not confined to the private sector and that they can serve as a complement to traditional hard law in guiding AI governance. Multi-stakeholder alliances involving government, the private sector, and non-profits followed at 21% of programs, while non-profit/private sector alliances accounted for 12%.
We also looked at soft law’s reliance on aligned incentives for implementation. Because governments cannot levy fees or penalties through these programs, stakeholders have to participate voluntarily. Even so, about 30% of programs in the database publicly mention enforcement or implementation mechanisms. We analyzed these measures and found that they can be divided along two dimensions (internal vs. external, and levers vs. roles), yielding four quadrants. The first dimension captures where the resources necessary for a mechanism’s operation are located: within an organization or externally, through third parties. The second distinguishes levers from roles. Levers are the toolkit of actions or mechanisms (e.g., committees, indicators, commitments, and internal procedures) that an organization can employ to implement or enforce a program. Roles describe how individuals, the most important resource of any organization, are arranged to execute that toolkit.
Finally, in addition to identifying each program’s characteristics, we labeled the text of the programs themselves. We created 15 thematic categories divided into 78 sub-themes that touch on a wide variety of issues and make it possible to scrutinize how organizations interpret different aspects of AI. The three most frequently labeled themes were education and displacement of labor, transparency and explainability, and ethics. Among sub-themes, the most prevalent were general transparency, general mentions of discrimination and bias, and AI literacy.
As AI proliferates and its governance challenges grow, soft law will become an increasingly important part of this technology’s governance toolkit. An empirical understanding of the strengths and weaknesses of AI soft law will therefore be crucial for policymakers, technology companies, and civil society as they grapple with how to govern AI in a way that best harnesses its benefits, while managing its risks.
By creating the largest compilation of AI soft law programs, we aim to provide a critical resource for policymakers in all sectors responding to AI governance challenges. The database is intended to aid decision-makers in weighing the advantages and disadvantages of this tool and to facilitate a deeper understanding of how and when soft law works best. To that end, we hope that the AI soft law database’s initial findings can suggest mechanisms for improving the effectiveness and credibility of AI soft law, or even catalyze the creation of new kinds of soft law altogether. After all, the future of AI governance – and by extension, AI soft law – is too important not to get right.
Carlos Ignacio Gutierrez is a governance of artificial intelligence fellow at Arizona State University. He completed his Ph.D. in Policy Analysis at the Pardee RAND Graduate School.
Gary Marchant is Regents’ Professor and Faculty Director of the Center for Law, Science & Innovation, Arizona State University.