

9:30 am - 11:00 am EST
Past Event
Artificial intelligence (AI) safety is an emerging field dedicated to ensuring that AI systems operate reliably, ethically, and beneficially. However, mainstream narratives surrounding AI safety often reflect the objectives and perspectives of Western institutions, prioritizing technical risks like alignment and misuse while neglecting broader societal and contextual harms. As global AI safety efforts gain momentum—through initiatives like the AI Safety Summit, specialized institutes, and new benchmarks—critical gaps remain. Many current evaluation methods lack coverage of non-Western languages, cultural norms, and societal contexts, leading to AI systems that perform poorly in diverse environments and reinforce systemic biases. These limitations highlight the urgent need for globalized approaches to AI safety.
On February 19, the Center for Technology Innovation at Brookings hosted a webinar featuring a panel of experts from across the globe to examine how Western-centric assumptions in AI safety frameworks can perpetuate inequities and bias. Panelists explored region-specific challenges, such as linguistic and cultural barriers, as well as innovative frameworks, technical measures, and human-centered approaches to redefine what it means for AI to be “safe” on a global scale.
Read more from the series “AI Safety and the Global Majority.”
Viewers submitted questions for speakers by emailing [email protected] or on Twitter at @BrookingsGov using #AISafety.