Commentary

The future of the world is intelligent: Insights from the World Economic Forum’s AI Governance Summit

December 8, 2023


  • Last month, the World Economic Forum convened over 200 world leaders, technology experts, and academics for the AI Governance Summit.
  • The Summit raised multiple important issues, including the challenge of coordinating on AI policy given the fast pace of technological development and the need to balance the benefits and risks of generative AI.
  • Discussions at the Summit also emphasized that prioritizing responsible AI deployment is an imperative for corporations and that national AI strategies will play an important role in balancing the risks and benefits of this technology.
A sign is pictured at the Congress Center ahead of the World Economic Forum (WEF) annual meeting in Davos, Switzerland January 20, 2020. REUTERS/Denis Balibouse

On November 16, the World Economic Forum’s AI Governance Summit convened over 200 global leaders, tech experts, academics, innovators, and policymakers to address the evolving landscape of artificial intelligence (AI) governance and shape its responsible future. The Summit offered a unique platform for insightful discussions at the forefront of ethical AI governance, and participants engaged in the development of strategies, multistakeholder collaboration, and specific commitments for a safe, inclusive, responsible, and more “humane” AI. The Summit could not have been more timely given recent developments in the AI space, including the governance crisis at OpenAI, whose board fired and then swiftly reinstated the company’s CEO. As we distill the key takeaways from the Summit, several central themes emerge, offering a roadmap for responsible AI development.

Five takeaways

My five key takeaways from the AI Governance Summit are as follows:

  1. Embracing transformation: The dual pacing and coordination dilemmas

The pace of technological change emerged as a central challenge during discussions. With technology evolving at unprecedented speed, the imperative is not only to keep up with it but also to leverage these advancements for the benefit of humanity. Ensuring safety, trust, and inclusion became a non-negotiable priority at the Summit, prompting calls for multistakeholder cooperation. The consensus was that, as we navigate the swiftly changing tech landscape, we must engage deliberately, recognize differences, and foster trust through ethical design principles. Trust and ethics, positioned as design elements, should be incorporated from the outset to create inclusive solutions. In a Brookings report I co-published with Steve Almond, A blueprint for technology governance in the post-pandemic world, we proposed six steps to address the pacing and coordination challenges: anticipating innovation and its implications, focusing regulation on outcomes, creating the space to experiment, using data to target interventions, leveraging the role of business, and working across institutional boundaries while collaborating internationally.

  2. Generative AI governance: Balancing benefits and risks

The second major theme of the Summit focused on the governance challenges posed by generative AI. Rather than fixating on challenges, participants called for shifting the narrative toward highlighting the technology’s benefits. The discussions emphasized the need to make generative AI safe for humanity by connecting innovators, government leaders, and funding mechanisms to find solutions. The approach advocated was one of specificity in AI governance: addressing specific users, places, and challenges rather than adopting a broad macro-governance model. The risks to democracy, ranging from economic disparities to citizen representation and information dissemination, underscored the critical need for governance at multiple levels (developers, deployers, legislators, governments, and civil society) and the imperative to establish and evaluate SMART goals, building on lessons learned. Considering both the benefits and risks allows for better trade-offs and improved collective outcomes, especially for developing economies, as I highlighted at the Brookings event on Why the Global South has a stake in dialogues on AI governance: “When we speak about AI and AI governance—especially with regard to the Global South—most conversations are about potential harms, but not enough about the ability to unlock economic development and address a variety of challenges.”

  3. Navigating the frontier: Regulating application, not just tech

In a session that returned to generative AI, a critical discussion emerged about regulating the application of AI rather than the technology itself. Drawing an analogy between AI and electricity, participants urged a paradigm shift in thinking about the positive contributions of increased intelligence. The emphasis on regulating applications sought to address the challenges posed by unforeseen risks in biosecurity, cyber threats, and the unpredictability of AI’s varied applications. The call was to focus on ‘good data’ rather than sheer volume, to introduce adaptive computing, and to prioritize human value convergence. The session further illustrated the imperative of transparency and trade-offs, of collaboration across jurisdictions and interoperability, and of agile governance, including a design approach, as discussed in the Brookings report Interoperable, agile, and balanced: Rethinking technology policy and governance for the 21st century, which I co-published with Nicholas Davis and Mark Esposito.

  4. Responsible AI deployment: A corporate imperative

The fourth session emphasized defining and deploying responsible AI at the corporate level. Participants highlighted the need for organizations to establish their own AI academies, placing significant emphasis on ethics and compliance. Architecting responsible AI was framed around three critical factors: policy, process, and products. Robust policies to assess boundaries, avoidance of discriminatory practices, and clear standards were advocated. Trust layers within platforms, toxicity assessments, and zero data retention were presented as pivotal components. Engaging with upcoming regulations, anticipating innovation, and aligning interventions with values also emerged as key strategies.

  5. National strategies for inclusive AI ecosystems

The final session explored the role of national strategies in shaping the future of AI. Using AI to create a wealthier and happier society was seen as achievable through appropriate regulation that ensures inclusivity. Risk mitigation, a more agile approach, adherence to international standards, and a multistakeholder engagement model were underscored. Global mindsets, nuanced conversations, and the need to recognize opportunities, particularly in emerging economies, were central to the discussions. I shared insights from my most recent book on Africa’s Fourth Industrial Revolution, especially strategies to maximize the benefits of disruptive technological innovation while reducing its risks. The importance of inclusive dialogues, diversity, and sector-specific risk approaches was highlighted, recognizing the varied perspectives and concerns at play.

All in all, the World Economic Forum’s AI Governance Summit provided a rich tapestry of insights, weaving together the challenges and opportunities in the realm of AI governance. The consensus among experts was clear: Responsible AI development requires a concerted effort, involving diverse stakeholders, ethical considerations, and a commitment to navigating the complex landscape with transparency and inclusivity. As we move forward, the Summit’s insights serve as a compass, guiding us toward a future where AI is not only technologically advanced, but also a force for positive transformation, benefiting society at large.