The Caribbean environment is shaped by the post-colonial legacies of slavery and indentureship, which gave rise to transplanted populations that experienced cultural fragmentation, particularly through loss of language.1 Yet, the cultural heritage that endures is expressed in distinct traditions, languages, and art forms. Unfortunately, contemporary AI systems fail to accommodate these linguistic and cultural nuances, increasing the risk of misrepresentation or harmful misinterpretation in Caribbean contexts and potentially leading to classification errors and compromised safety mechanisms.2
At the same time, countries and organizations attending global AI safety summits often create localized definitions and standards of “safety,” heavily influenced by wealthier nations with significant computational and data resources. Within these global discussions, the European Union’s AI Act3 emphasizes a risk-based regulatory model to protect fundamental rights and data privacy, while the EU Ethics Guidelines for Trustworthy AI4 prioritize human oversight and robust, transparent design. The U.S. Algorithmic Accountability Act of 20235 focuses on transparency and bias mitigation through mandated impact assessments, whereas the National Artificial Intelligence Initiative Act6 aims to balance AI research with ethical standards and collaborative policies. In Australia, the Artificial Intelligence Ethics Framework7 highlights fairness, accountability, and inclusivity, whereas voluntary frameworks like the AI Risk Management Framework by NIST,8 the IEEE Ethically Aligned Design guidelines,9 and the AI4People10 initiative each address distinct facets of AI safety and ethics. While these measures bring focus to issues of transparency, accountability, and human rights, they frequently overlook the realities of smaller nations such as those in the Caribbean, whose cultural and linguistic particularities demand more context-specific approaches.
Evolving linguistic and cultural erasure
At the heart of these challenges lies the lack of representative datasets, particularly for low-resource Caribbean Creole languages, whose exclusion from mainstream AI development perpetuates linguistic and cultural erasure. Because these languages often lack formal documentation, standardized orthographies, or large-scale digitized corpora, collecting sufficient high-quality data is logistically difficult and resource-intensive. In many instances, speakers themselves may be dispersed among multiple islands with varying dialectal influences, further complicating data collection and curation efforts. Additionally, historically limited investment in linguistic research on Creole languages means that existing datasets are frequently piecemeal.11
In many instances, control over and ethical use of locally sourced data do not rest with Caribbean institutions but rather lie in the hands of external multinational corporations, research labs, or governments with limited accountability to local populations. This power imbalance raises concerns about how data is collected, stored, and utilized, deepening mistrust in AI technologies. These processes also exacerbate fears of exploitation and the erosion of cultural identity through unregulated data extraction and minimal community consultation. Personal information, cultural expressions, and even public domain content can be repackaged into commercial products without delivering tangible benefits to the communities that produced the data. Moreover, funneling Caribbean data predominantly into external innovation pipelines relegates the region to a passive role as a raw data provider, with decision-making power often residing abroad. As a result, design priorities may overlook local realities, potentially perpetuating biases or exacerbating existing socioeconomic disparities, while Caribbean nations become increasingly dependent on foreign AI systems (“vendor lock-in”) that do not adequately reflect or advance local priorities and values.
Economic barriers to sustaining AI safety
The same structural asymmetries affect the resource landscape, compounding the economic vulnerabilities faced by Caribbean nations and leaving them more susceptible to exogenous shocks. The region’s economy, heavily reliant on sectors such as tourism and business process outsourcing (BPO), faces significant risks from automation.12 AI-powered chatbots and virtual assistants, for instance, could replace human customer service agents in call centers, leading to job losses and economic instability in a critical employment sector. Simultaneously, the infrastructural challenges faced by many small island developing states (SIDS), including unstable internet connectivity, occasional power outages, and heightened climate risks, further disadvantage the region in global AI development.
The Caribbean must now look inward to strengthen its own foundations and reduce reliance on external frameworks. Investments in upskilling, research, and talent creation are necessary for building a workforce capable of driving AI innovation and aligning it with local priorities. However, in the absence of adequate mechanisms such as competitive opportunities, robust research infrastructure, and favorable policy environments, talented professionals may migrate to regions offering better incentives and resources. Current capacity-development initiatives often overlook this dynamic: The outflow of expertise undermines the Caribbean’s ability to develop and sustain its own AI ecosystems, resulting in a net loss for the region.
Against this backdrop of capacity-building challenges and the desire for local autonomy, pursuing sovereign AI emerges as a complex yet potentially transformative pathway. However, if the Caribbean opts to develop sovereign AI systems to mitigate these risks and align with global standards, the undertaking could compound existing vulnerabilities, not only through the significant resource demands inherent in running high-performance computing (HPC) infrastructure but also through the geopolitical hurdles of obtaining cutting-edge hardware. Advanced tensor processors, necessary for state-of-the-art AI applications, are often subject to global shortages and export controls.13
Caribbean institutions may struggle to acquire and maintain the HPC infrastructure necessary for competitive AI development. In a region already grappling with the cyclical devastation of hurricanes, earthquakes, and, occasionally, volcanic eruptions, diverting substantial energy and water resources to power and cool these systems risks straining critical sectors like agriculture, health care, and disaster preparedness. This dynamic underscores a precarious situation: While the Caribbean urgently needs robust, locally controlled AI systems to avert dependence on foreign entities and meet international benchmarks, it must do so within a framework of limited hardware availability, resource constraints, climate resilience needs, and infrastructural realities.
AI safety goals must be better aligned
AI safety, at its core, involves aligning the development and deployment of intelligent systems with societal well-being, ethical principles, and the protection of fundamental rights. This goes beyond preventing technical errors or malicious misuse; it also calls for ensuring that the benefits of AI do not exacerbate existing inequalities or undermine cultural values. In many parts of the world, smaller and historically under-resourced communities grapple with issues of data sovereignty, linguistic representation, infrastructure limitations, and environmental vulnerabilities. Consequently, a robust, accepted definition of AI safety must account for local realities, so that AI initiatives neither marginalize specific populations nor compromise the sovereignty of communities seeking to retain control over their cultural artifacts and linguistic heritage.
To make global AI safety frameworks truly inclusive, they should encourage the research and adoption of capable low-compute AI models and open-source tensor processing hardware architectures, ensuring that regions with limited energy or connectivity resources can pursue AI development without taxing already constrained infrastructures. These approaches would help to mitigate environmental risks, such as high energy consumption and cooling needs, promote local data stewardship, and reduce vendor lock-in.
An often-overlooked dimension of AI safety is the economic resilience of smaller regions. Global AI safety governance should incorporate adaptive frameworks to address potential job displacement and other disruptions triggered by automated systems, especially in these regions. Long-term capacity development and initiatives aimed at reversing “brain drain” should be seen as core components of AI safety. Maintaining cultural identity, whether through ethical data collection for underrepresented languages or by integrating local contexts into system design, can establish trust, but it also demands transparent consultation and ongoing engagement so that citizens can clearly see how AI technologies emerge and whom they truly serve. Access to sustained funding pools can enable robust research, resource sharing, and infrastructure improvements that are not contingent on external actors’ agendas. Identifying niche industries that can benefit from AI while also offering avenues for foreign exchange helps local communities move beyond dependence on a single sector. Aligning these endeavors with broader sustainability goals ensures that AI adoption does not strain scarce resources, exacerbate climate-related risks, or undermine other critical areas like health care and agriculture.
Lastly, if AI safety frameworks are to become genuinely global rather than one-size-fits-all directives, multilateral agreements should institutionalize the inclusion of historically underrepresented regions at the table. Ultimately, reframing AI safety in this way safeguards cultural heritage, supports socioeconomic uplift, and champions a more just and inclusive technological future for all.
Footnotes
1. Dash, J. “Postcolonial Caribbean Identities.” In The Cambridge History of African and Caribbean Literature, edited by F. Abiola Irele and Simon Gikandi, 785–96. Cambridge: Cambridge University Press, 2000.
2. Prabhakaran, Vinodkumar, Rida Qadri, and Ben Hutchinson. “Cultural Incongruencies in Artificial Intelligence.” arXiv preprint arXiv:2211.13069 (2022).
3. European Parliament. “Artificial Intelligence Act.” Regulation (EU) 2024/1689, Official Journal of the European Union, July 12, 2024.
4. European Commission. “Ethics Guidelines for Trustworthy AI.” Brussels: European Commission, 2019.
5. U.S. Congress. House. Algorithmic Accountability Act of 2023. H.R. 5628, 118th Cong., 1st sess. Introduced in House September 21, 2023.
6. U.S. Congress. National Artificial Intelligence Initiative Act of 2020. Public Law 116-283, Division E, §§ 5001–5501, January 1, 2021. https://www.congress.gov/bill/116th-congress/house-bill/6395/text.
7. Department of Industry, Science and Resources. “Australia’s Artificial Intelligence Ethics Principles.” Australian Government, November 7, 2019.
8. National Institute of Standards and Technology (NIST). Artificial Intelligence Risk Management Framework. Accessed January 20, 2025. https://www.nist.gov/ai.
9. IEEE. Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Artificial Intelligence and Autonomous Systems. 2nd ed. New York: Institute of Electrical and Electronics Engineers, 2019. https://ethicsinaction.ieee.org.
10. AI4People. Ethics Guidelines for Trustworthy AI. AI4People Initiative, 2018. https://www.eismd.eu/ethics-guidelines.
11. Lent, Heather, Kushal Tatariya, Raj Dabre, Yiyi Chen, Marcell Fekete, Esther Ploeger, Li Zhou, et al. “CreoleVal: Multilingual Multitask Benchmarks for Creoles.” arXiv preprint arXiv:2310.19567 (2023).
12. Inter-American Development Bank. “Can Latin America and the Caribbean Unlock AI’s Potential While Mitigating Its Perils?” Ideas Matter Blog, May 5, 2023. https://blogs.iadb.org/ideas-matter/en/can-latin-america-and-the-caribbean-unlock-ais-potential-while-mitigating-its-perils.
13. “Framework for Artificial Intelligence Diffusion.” Federal Register, January 15, 2025. https://www.federalregister.gov/documents/2025/01/15/2025-00636/framework-for-artificial-intelligence-diffusion.
The Brookings Institution is committed to quality, independence, and impact.
We are supported by a diverse array of funders. In line with our values and policies, each Brookings publication represents the sole views of its author(s).
Commentary
Integrating Caribbean realities into global AI safety policies
February 21, 2025