Today’s major technology firms wield enormous social and political influence across the world, to the point that their actions, and the content they host, are often seen as a direct challenge to national sovereignty and to the norms and power structures that support states. In regions and countries as varied as Europe, China, the United States, Australia and Russia, governments are proposing and—in the case of China, Russia, and Australia—implementing regulations that purport to protect the national interest by imposing new duties on the largest online players. The effectiveness of these measures in economic or social terms is uncertain, and their impact on rights has been heavily criticized by civil society.
Industrial revolutions of any kind have real and severe implications for economic development, national security, social cohesion, and human rights, and the one we are experiencing now—dubbed the “Fourth Industrial Revolution” by the World Economic Forum—poses even greater risks along these lines, given the speed and scale at which digital applications and systems can be deployed across traditional borders. The dominance of digital firms, and the uses to which their services are being put, are also creating risks that range from fomenting extremism in Syria and shifting democratic participation in Kenya to inciting genocidal violence in Myanmar and spreading misinformation about health around the world.
As we point out in our recent report, “Interoperable, agile and balanced: rethinking technology policy and governance for the 21st Century,” structural shifts in the sources and wielders of economic, social and political power—and the urgent threats that accompany such changes—suggest a need for new forms of regulation and governance that ensure common social values survive and thrive. Most importantly, values such as fairness, inclusiveness, and accountability need to be consciously and carefully built into both our governance systems and the technologies themselves to ensure that their direct risks to users and negative externalities to others are well managed.
In the past, it has taken governments decades after the popular adoption of new technologies to appreciate associated externalities and to develop and enforce appropriate policies to mitigate them. The automobile was almost 100 years old and responsible for killing over 50,000 Americans a year before road safety was taken seriously at the federal level in the United States in 1970. Yet policymakers today face bigger challenges than their predecessors, as they seek to manage the impacts of complex, rapidly evolving technologies that tend to be developed and jealously guarded by entrepreneurs. As former U.S. Secretary of State Madeleine Albright puts it, “citizens are speaking to their governments using 21st century technologies, governments are listening on 20th century technology and providing 19th century solutions.”
This generational gap is hard to overcome because three critical challenges stymie those seeking to make policy today. A plethora of jurisdictions and approaches has led to regulatory fragmentation among cities, regions, and countries that dramatically reduces the utility of action while creating burdens for would-be competitors of digital platforms. The data required to fully understand the extent of social problems is controlled by the very firms suspected of creating them, leaving policymakers without the evidence they need to inform their actions. And the reliance of countries and their citizens on the services of the same technology firms they would like to influence leaves them with fewer degrees of freedom.
What, then, does 21st century tech policy and governance look like? Essentially, policymakers need to be equipped with a new set of tools that help address these challenges.
First, we need more transparent and holistic policymaking approaches that clearly communicate technology policy goals and identify trade-offs at the national, international, and subnational levels as well as across stakeholder groups. The pervasiveness of digital systems in our lives means that technology policy is rapidly becoming “everything policy,” with critical, and often distinct, implications for areas as diverse as infrastructure resilience, national security, the competitiveness of markets, social cohesion, the relationship between citizens and the state, and even—as we have seen recently—how well health systems function. Faced with this, policymakers need reasoned, structured approaches that avoid the twin traps of hasty, opportunistic policymaking that only addresses symptoms at one end of the spectrum and paralyzed policymaking that never reaches implementation at the other.
Given its reasonable timeframes for consultation and its interaction with a broad set of stakeholder groups, the European Commission’s considered, comprehensive approach to the construction of the Digital Services Act (DSA) is a step in the right direction. But more work needs to be done in identifying and resolving critical conflicts and trade-offs that are emerging in proposals and amendments. For example, the EU’s General Data Protection Regulation asserts the right not to be subject to automated decisionmaking, yet current DSA proposals around the removal of objectionable content will create strong incentives for digital platforms to continuously monitor and assess material in ways that will very likely infringe on freedom of speech.
Second, even though working with others is hard, effective technology policy requires close collaboration across jurisdictions. Countries need to systematically gather and share evidence of the effectiveness or failure of diverse technology policy approaches. To overcome the lack of such evidence, countries may need to support new processes for sharing insights into the algorithms and datasets of structurally important digital firms. Investing heavily in open, international technology standards focused on current issues will pay capitals back many times over.
Here, it is encouraging to see policy networks such as the World Economic Forum’s network of Centres for the Fourth Industrial Revolution creating spaces for policymakers to collaborate on pilots across jurisdictional boundaries, share frameworks and data, and draw on examples of success and failure. Such networks have also contributed to the development of informal frameworks and standards, such as the guidelines for AI procurement by public authorities.
Finally, managing the impact of technologies produced with agile development processes requires a shift toward agile governance. We believe that a wider representation of stakeholder interests, combined with a congenial dance between exploration and evidence-based decisions, can lead to more proactive and entrepreneurial governance fit for the 21st Century.
Agile governance processes have been emerging for a while in forward-leaning government departments, creating spaces for policy experimentation and learning. For example, regulatory sandboxes have been used to test rules around drones and innovative financial services. Meanwhile, risk-based regulatory approaches—which allow for considerable nuance in the application of laws as well as encouraging contextually sensitive assessments—are at the heart of the EU’s recent proposed regulation laying down harmonized rules on artificial intelligence.
All of this represents an opportunity, rather than a burden. The opportunity is to reform governance in a way that enables us to embed fairness, inclusion, and accountability within the technological systems that increasingly shape our economies and societies. Investing and succeeding in this area could mean that the coming decade of policy governance integrates diverse values in interoperable systems, with regulators and citizens working alongside one another as reciprocal partners, rather than antagonists.
Nicholas Davis is a professor of practice at the Thunderbird School of Global Management and the former head of society and innovation at the World Economic Forum.
Landry Signé is a senior fellow in the Global Economy and Development Program and the Africa Growth Initiative at the Brookings Institution, a professor and managing director at the Thunderbird School of Global Management, and a distinguished fellow at Stanford University.
Mark Esposito is a clinical professor at the Thunderbird School of Global Management and a policy fellow at the UCL Institute for Innovation and Public Purpose.