It is no secret that developing artificial intelligence (AI) requires copious resources.1 Epoch AI estimates that if current trends persist, the largest training runs for frontier AI models could cost more than $1 billion by 2027.2 Despite limited resources, African countries are still finding ways to innovate and contribute to the global AI landscape.3 While African countries may not currently lead the charge in global AI development, they cannot ignore AI safety as they stand particularly vulnerable to AI-related harms, and especially as AI capabilities grow.4
Different instantiations of open-access AI have, therefore, been suggested as a practical means for global majority contexts to participate in AI safety.5 This essay defines open-access AI as any form of model-sharing, including staged releases, cloud-based access, API access, and models with widely available weights.6 Research contends that open-access AI fosters a more inclusive approach to defining acceptable model behavior.7 In particular, it advances safety research by allowing model scrutiny by external evaluators and modifications such as safety fine-tuning.8
Open-access AI is gaining traction on the continent. For instance, Sub-Saharan Africa’s open-source community is growing, with countries like Rwanda and Nigeria witnessing more than a 45% increase in developers between 2022 and 2023.9 However, the progress of this expanding developer community, which includes AI safety researchers, could be hindered by the constraints of open-access AI in Africa. This essay examines the limitations of open-access AI as an approach to AI safety in Africa. It explains that AI safety research leveraging open-access AI may face obstacles due to the dependency dynamics between model-sharers and African safety developers, as well as systemic developmental challenges in Africa. It then recommends how African safety researchers can strategically maximize the potential of open-access AI.
Dependency dynamics
Open Future claims that open approaches flourish when they result from external incentives as opposed to voluntary decision-making.10 Contrary to this, AI developers unilaterally control which AI components to share and who can gain access. Sienka Dounia, an African researcher specialising in AI safety and alignment, explained that one of the challenges in his work is scepticism when requesting access to models.11 Moreover, developers make the ultimate decision on whether to maintain proprietary control over their models. For instance, despite being founded on principles of transparency, OpenAI has since retreated from its stance on openness.12 In 2024, the French company Mistral, a strong proponent of open-access AI in Europe, swiftly followed suit.13
More concerningly, leading AI countries are tending towards nationalistic AI policies in pursuit of AI supremacy.14 Developing countries stand to be the biggest losers, as nationalist strategies alienate them from pertinent AI resources.15 At the same time, given AI’s increasing role in national security, governments will likely become more cautious about sharing proprietary model information.16
The “Framework for Artificial Intelligence Diffusion” by the U.S. Commerce Department’s Bureau of Industry and Security (BIS) is a testament to this.17 The rule introduces a global licensing requirement for the weights of models trained using more than 10^26 FLOPs. In the interest of U.S. national security, the regulation restricts the export, re-export, or in-country transfer of model weights. It does, however, create an exception for 18 key U.S. allies, none of which are African countries, that are considered low-risk destinations.
While the new export controls only target advanced models, they signal a tightening grip on model-sharing as AI progresses. Further, African governments may struggle to meet the licensing requirements, potentially limiting future access to U.S. model weights. For instance, Africa is presently ill-equipped to enforce safeguards against the risks associated with advanced AI, such as chemical, biological, radiological, and nuclear threats, which are an area of concern in the BIS rule.18
China’s extensive involvement in AI development in Africa also raises questions about Africa’s future license eligibility. The BIS regulation emphasises supply chain independence from Tier 3 countries, which include China, for its AI partners.19 Yet China is the leading exporter of AI technologies to Africa, with the majority of African countries benefiting from deals like the Belt and Road Initiative (BRI) agreements.20 Highly influential Chinese companies, such as Alibaba and Huawei, are expanding their presence on the continent, with Huawei announcing in 2023 plans to invest $430 million in AI infrastructure there over the next five years.21 Some researchers are wary that such investments may allow China to assert influence and limit African collaboration with Western partners.22
African AI safety developers’ reliance on open-access AI therefore further entrenches power imbalances, leaving African developers at the mercy of model producers and the countries in which they are based.23 This not only compromises African countries’ autonomy but also undermines developers’ ability to contribute meaningfully to AI safety research.
The boundaries of open-access AI in Africa
Open-access AI is generally credited with cutting development costs.24 DeepSeek-R1, for example, rivals frontier proprietary models at a fraction of the inference costs (approximately 3%).25 Despite this apparent progress, the resources required to utilize open-access AI remain out of reach in most African contexts.26 Access to critical AI infrastructure, such as graphics processing units (GPUs) and cloud computing services, remains a crucial challenge. A study of Zindi Africa data scientists revealed that only 1% have ‘on-premises’ access to GPUs, while 4% pay for cloud access to GPUs but can mostly afford only about $1,000 per month, which translates to approximately two hours of daily usage of an older Nvidia A100 GPU. The remaining 95% rely on standard laptops without GPUs or access GPUs via free cloud-based tools, which impose cumbersome usage restrictions.27 Although GPU access is a global conundrum, Africa’s situation is dire.28 For perspective, a report highlighted the disproportionately high cost of GPUs relative to GDP per capita in different African countries.29 For instance, the cost of an Nvidia A100 GPU was 22% of GDP per capita in South Africa, 75% in Kenya, and 69% in Senegal.30
Furthermore, Africa’s access to foundational utilities for AI, such as energy and digital infrastructure, remains highly limited.31 Africa accounts for only 6% of global energy use.32 As of 2022, only 51.5% of the population in Sub-Saharan Africa had access to electricity.33 Moreover, Sub-Saharan African countries experience an annual average of 87 blackouts, in stark contrast to North America’s average of one.34 Closely connected to this, internet access varies across Africa, with low-income countries like South Sudan, Burundi, and the Central African Republic having penetration rates below 13%.35 Optimistically, however, momentum for 5G is growing on the continent: Sub-Saharan Africa is expected to have 226 million 5G connections by 2030, an adoption rate of 17%.36
Additionally, African developers risk exclusion from global safety research networks, where advancements in AI knowledge are collaboratively exchanged, due to factors such as financial constraints and stringent visa requirements.37 The global inattention to AI safety research exacerbates the problem: AI safety research constituted a meagre 2% of the broader AI research landscape between 2017 and 2022.38 In Africa, AI funding appears to be directed primarily at AI solutions for developmental challenges such as healthcare, agriculture, and education.39 Continental and national policy suggests that this emphasis is likely to persist.40
In sum, African AI safety developers seeking to leverage open-access AI confront a multitude of deep-rooted resource constraints. At the same time, despite considerable AI investment being directed to the continent, the prospects for African AI safety developers remain dim given the region’s limited interest in AI safety.
Framing AI safety issues
Framing theory asserts that issues can be construed from different perspectives and as having implications for multiple values.41 Linking an issue to people’s values fosters a sense of ownership, particularly when people perceive potential stakes.42 In Africa, harnessing AI for inclusive development is highly valued, given the continent’s socio-economic state and history of subjugation by dominant economies.43 To address resource scarcity, African AI safety researchers could secure AI funding allocated for developmental challenges by emphasizing the risks that AI safety concerns pose to these solutions. African developers could, for example, make a case for collaboration with AI stakeholders offering education solutions by underscoring the detrimental effects that misinformation and disinformation capabilities could have on system performance, student learning experiences, and ultimately, organizational reputation.44 This would necessitate further research on effective framing.
African AI safety research collaboration
In tandem, African developers should establish safety research networks. Networks like the European Network for AI Safety exemplify the power of collaboration.45 Similarly, in Africa, consortium-based collaboration has been advanced as a means to pool resources and build general AI capacity.46 Networks such as the Artificial Intelligence for Development (AI4D) program have been instrumental in supporting African AI researchers, innovators, and policymakers by providing crucial funding, resources, and collaborative opportunities to drive AI research.47 Such initiatives could serve as an avenue to coordinate safety efforts, for instance through distributed machine learning. This approach could allow African researchers to scale their algorithms to large datasets, share computational resources, save time, and minimize redundancy.48 Moreover, the African Union (AU) could play a pivotal role in fostering collaboration on AI safety. Some researchers propose that the AU and its member states establish mechanisms for open computing access.49 Additionally, the AU could further support AI safety by promoting shared research initiatives across its member states.
Developing context-specific African AI safety
African safety researchers could also identify “unmet needs” in model evaluation and develop niche expertise around them to improve their chances of gaining model access. This way, African safety researchers could not only address gaps in model evaluation but also position themselves as key contributors to the global AI ecosystem, making it more likely that external stakeholders will provide model access in exchange for valuable insights and expertise. For instance, Anthropic engages crowd workers to red team and adversarially probe its models.50 One shortcoming of this approach is that evaluations can be inconsistent due to the varying characteristics of human evaluators.51 Building on research networks, African researchers could establish organizations with standardized policies for model evaluations to mitigate this problem.
Moreover, African countries and developers possess distinct attributes that may make them particularly well-suited for specific tasks. For example, Africa’s rich cultural, linguistic, and demographic diversity could make it ideally suited for conducting robustness testing. African safety researchers could therefore be instrumental in designing “stress tests” that simulate African scenarios for AI models, which would be useful in building more resilient models.52 Another opportunity arises in testing multilingual models. Despite the increase in multilingual models, testing is mostly done in English.53 Hamza Chaudhry warns that this poses the risk of AI deploying dangerous capabilities, such as misinformation and disinformation, in non-English languages.54 It is therefore necessary that model testing also happen in non-English languages. Establishing teams of African evaluators that reflect Africa’s linguistic diversity to evaluate multilingual models could alleviate this concern, and this could serve as another point of expertise incentivizing model producers to share access with African AI safety researchers.
Conclusion
While African AI safety researchers could take up small-scale interpersonal interventions to bypass open-access AI obstacles, open-access AI cannot entirely ensure that African countries meaningfully participate in AI safety governance. Comprehensive systemic interventions, such as global AI benefit-sharing commitments, are essential.55 African countries could also consider negotiating for model access in multilateral AI agreements with leading AI partners. Experts predict that AI will cause a seismic shift like no other historically revolutionary technology.56 Given the socioeconomic power AI wields, it is paramount that the power to govern it be fairly distributed.
Footnotes
1. Stanford University Human-Centered Artificial Intelligence. (2024). Artificial Intelligence Index Report 2024 (pp. 63-65). Stanford University Human-Centered Artificial Intelligence. https://aiindex.stanford.edu/wp-content/uploads/2024/05/HAI_AI-Index-Report-2024.pdf.
2. Cottier, B., Rahman, R., Fattorini, L., Maslej, N., & Owen, D. (2024, June 3). How Much Does it Cost to Train Frontier AI Models? Epoch AI. https://epoch.ai/blog/how-much-does-it-cost-to-train-frontier-ai-models.
3. Adams, R. (2024, December 17). AI is bad news for the Global South. Foreign Policy. https://foreignpolicy.com/2024/12/17/ai-global-south-inequality/; Meta. (2024, September 11). How one Kenyan startup is working to solve local challenges with Llama. Meta AI Blog. https://ai.meta.com/blog/upeo-labs-llama/; Adebayo, B., Bhalla, N., & Harrisberg, K. (2024, June 18). From Swahili to Zulu, African techies develop AI language tools. CNBC Africa. https://www.cnbcafrica.com/2024/from-swahili-to-zulu-african-techies-develop-ai-language-tools/.
4. Abungu, C., Iradukunda, M. V., Sayidali, R., Hassan, A., & Cass-Beggs, D. (2024, December 16). Why Global South Countries Need to Care About Highly Capable AI. CIGI Paper No. 311. The Centre for International Governance Innovation. https://www.cigionline.org/publications/why-global-south-countries-need-to-care-about-highly-capable-ai/; Chege, G. (2024, September 27). The AU Continental AI Strategy: Concrete safety proposals or high-tech hype? ILINA. https://ilinaprogram.org/2024/09/27/the-au-continental-ai-strategy-concrete-safety-proposals-or-high-tech-hype/.
5. Bommasani, R., Kapoor, S., Klyman, K., Longpre, S., Ramaswami, A., Zhang, D., Schaake, M., Ho, D. E., Narayanan, A., & Liang, P. (2023, December 13). Considerations for Governing Open Foundation Models (pp. 4-5). Stanford University Human-Centered Artificial Intelligence. https://hai.stanford.edu/issue-brief-considerations-governing-open-foundation-models; Creative Commons, Eleuther AI, GitHub, Hugging Face, LAION, & Open Future. (2023, July 26). Supporting Open Source and Open Science in the EU AI Act (pp. 3-4). GitHub. https://github.blog/wp-content/uploads/2023/07/Supporting-Open-Source-and-Open-Science-in-the-EU-AI-Act.pdf; Anthony, A., Sharma, L., & Noor, E. (2024, April 30). Advancing a More Global Agenda for Trustworthy Artificial Intelligence. Carnegie Endowment for International Peace. https://carnegieendowment.org/research/2024/04/advancing-a-more-global-agenda-for-trustworthy-artificial-intelligence?lang=en; Adan, S. N., Trager, R., Blomquist, K., Dennis, C., Edom, G., Velasco, L., Abungu, C., Garfinkel, B., Jacobs, J., Okolo, C. T., Wu, B., & Vipra, J. (2024, October). Voice and Access in AI: Global AI Majority Participation in Artificial Intelligence Development and Governance (pp. 35-37). Oxford Martin School AI Governance Initiative White Paper. https://www.oxfordmartin.ox.ac.uk/publications/voice-and-access-in-ai-global-ai-majority-participation-in-artificial-intelligence-development-and-governance.
6. While the definition of open-access AI is highly contested, this essay adopts a broad definition to capture all varieties of “open models”. Further, the term ‘open-access AI’ is often used interchangeably with ‘open-source AI’. Gent, E. (2024, March 25). The Tech Industry Can’t Agree on What Open-source AI Means. That’s a Problem. MIT Technology Review. https://www.technologyreview.com/2024/03/25/1090111/tech-industry-open-source-ai-definition-problem/; Widder, D. G., West, S., & Whittaker, M. (2023, August 17). Open (For Business): Big Tech, Concentrated Power, and the Political Economy of Open AI. SSRN. https://ssrn.com/abstract=4543807; Solaiman, I. (2023, February 5). The Gradient of Generative AI Release: Methods and Considerations (pp. 4-6). arXiv. https://arxiv.org/pdf/2302.04844; Bommasani, R., Kapoor, S., Klyman, K., Longpre, S., Ramaswami, A., Zhang, D., Schaake, M., Ho, D. E., Narayanan, A., & Liang, P. (2023, December 13). Considerations for Governing Open Foundation Models (p. 3). Stanford University Human-Centered Artificial Intelligence. https://hai.stanford.edu/issue-brief-considerations-governing-open-foundation-models; International AI Safety Report: The International Scientific Report on the Safety of Advanced AI. (2025, January). AI Action Summit (pp. 150-152). https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf.
7. Kapoor, S., Bommasani, R., Klyman, K., Longpre, S., Ramaswami, A., Cihon, P., Hopkins, A., Bankston, K., Biderman, S., Bogen, M., Chowdhury, R., Engler, A., Henderson, P., Jernite, Y., Lazar, S., Maffulli, S., Nelson, A., Pineau, J., Skowron, A., Song, D., Storchan, V., Zhang, D., Ho, D. E., Liang, P., & Narayanan, A. (2024, February 27). On the Societal Impact of Open Foundation Models (pp. 3-5). arXiv. https://arxiv.org/abs/2403.07918; International AI Safety Report: The International Scientific Report on the Safety of Advanced AI. (2025, January). AI Action Summit (p. 20). https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf.
8. Jain, S., Lubana, E. S., Oksuz, K., Joy, T., Torr, P. H. S., Sanyal, A., & Dokania, P. K. (2024, August 21). What Makes and Breaks Safety Fine-tuning? A Mechanistic Study (pp. 1-10). arXiv. https://arxiv.org/abs/2407.10264; Zong, Y., Bohdal, O., Yu, T., Yang, Y., & Hospedales, T. (2024). Safety fine-tuning at (almost) no cost: A baseline for vision large language models. In Proceedings of the 41st International Conference on Machine Learning. The 41st International Conference on Machine Learning, Vienna, Austria. https://doi.org/10.48550/arXiv.2402.02207; Google AI for Developers. (2024, October 23). Align your models. https://ai.google.dev/responsible/docs/alignment.
9. Westgarth, T., Garson, M., Crowley-Carbery, K., Otway, A., Bradley, J., & Mökander, J. (2024, November 18). State of compute access 2024: How to navigate the new power paradox (pp. 26-28). Tony Blair Institute for Global Change. https://institute.global/insights/tech-and-digitalisation/state-of-compute-access-2024-how-to-navigate-the-new-power-paradox.
10. Keller, P., & Tarkowski, A. (n.d.). The paradox of open. Open Future. https://paradox.openfuture.eu/.
11. This information was retrieved from an interview conducted by the author with Sienka Dounia on January 27, 2025. Dounia is a Research Associate at the Initiative for Longtermism in Africa Program (ILINA) and a former fellow at AI Futures and ILINA, specialising in technical AI alignment. LinkedIn: [https://www.linkedin.com/in/sienka-dounia/?originalSubdomain=ke].
12. Jackson, S. (2024, November 2). Sam Altman explains OpenAI’s shift from open to closed AI models. Business Insider Africa. https://africa.businessinsider.com/news/sam-altman-explains-openais-shift-from-open-to-closed-ai-models/p4xjp36.
13. Tarkowski, A. (2024). Open source and the democratization of AI. In Artificial intelligence and the challenge for global governance: Nine essays on achieving responsible AI (p. 31). Digital Society Initiative. https://www.chathamhouse.org/sites/default/files/2024-06/2024-06-07-ai-challenge-global-governance-krasodomski-et-al.pdf.
14. Aaronson, S. A. (2024). The age of AI nationalism and its effects (CIGI Papers No. 306, p. 1). The Centre for International Governance Innovation (CIGI). https://www.cigionline.org/static/documents/no.306_updated.pdf.
15. Aaronson, S. A. (2024). The age of AI nationalism and its effects (CIGI Papers No. 306, p. 17). The Centre for International Governance Innovation (CIGI). https://www.cigionline.org/static/documents/no.306_updated.pdf.
16. Ryan, F., Iliadis, N., & Gor, G. (2024). Weaving a safety net: Key considerations for how the AI Safety Institute Network can advance multilateral collaboration (p. 7). The Future Society. https://thefuturesociety.org/wp-content/uploads/2024/11/Weaving-a-Safety-Net_AISI-Collaboration_November-2024.pdf.
17. Bureau of Industry and Security, U.S. Department of Commerce. (2025, January 15). Framework for Artificial Intelligence Diffusion (Interim final rule). Federal Register. https://www.federalregister.gov/documents/2025/01/15/2025-00636/framework-for-artificial-intelligence-diffusion.
18. Chege, G. (2024, September 27). The AU Continental AI Strategy: Concrete safety proposals or high-tech hype? ILINA. https://ilinaprogram.org/2024/09/27/the-au-continental-ai-strategy-concrete-safety-proposals-or-high-tech-hype/.
19. Heim, L. (2025, January). Understanding the Artificial Intelligence Diffusion Framework: Can Export Controls Create a U.S.-Led Global Artificial Intelligence Ecosystem? (p. v). RAND. https://www.rand.org/pubs/perspectives/PEA3776-1.html.
20. Adams, R. (2022, May 30). AI in Africa: Key concerns and policy considerations for the future of the continent (p. 7). Africa Policy Research Institute. https://afripoli.org/uploads/publications/AI_in_Africa.pdf. The agreement brought smart city infrastructure, 5G networks, surveillance cameras, cloud computing and e-commerce to many African cities.
21. Irwin-Hunt, A. (2023, October 3). Huawei’s $430m northern Africa push. FDI Intelligence. https://www.fdiintelligence.com/content/news/huaweis-430m-northern-africa-push-83020.
22. Lemma, A. (2024, November 28). Will China’s influence in Africa’s AI revolution undermine its sovereignty? ODI Global. https://odi.org/en/insights/opinion-will-chinas-influence-in-africas-ai-revolution-undermine-its-sovereignity/.
23. Adan, S. N., Trager, R., Blomquist, K., Dennis, C., Edom, G., Velasco, L., Abungu, C., Garfinkel, B., Jacobs, J., Okolo, C. T., Wu, B., & Vipra, J. (2024, October). Voice and Access in AI: Global AI Majority Participation in Artificial Intelligence Development and Governance (pp. 35-37). Oxford Martin School AI Governance Initiative White Paper. https://www.oxfordmartin.ox.ac.uk/publications/voice-and-access-in-ai-global-ai-majority-participation-in-artificial-intelligence-development-and-governance.
24. For instance, customising open-weight models through techniques like fine-tuning requires less technical expertise, resources and computing power than training a model from scratch – US National Telecommunications and Information Administration. (2024, July). Dual-use Foundation Models with Widely Available Model Weights (pp. 8-9). https://www.ntia.gov/sites/default/files/publications/ntia-ai-open-model-report.pdf.
25. Sharma, S. (2025, January 20). Open-source DeepSeek-R1 uses pure reinforcement learning to match OpenAI o1 — at 95% less cost. VentureBeat. https://venturebeat.com/ai/open-source-deepseek-r1-uses-pure-reinforcement-learning-to-match-openai-o1-at-95-less-cost/.
26. Seger, E., Dreksler, N., Moulange, R., Dardaman, E., Schuett, J., Wei, K., Winter, C., Arnold, M., Ó hÉigeartaigh, S., Korinek, A., Anderljung, M., Bucknall, B., Chan, A., Stafford, E., Koessler, L., Ovadya, A., Garfinkel, B., Bluemke, E., Aird, M., Levermore, P., Hazell, J., & Gupta, A. (2023). Open-sourcing highly capable foundation models: An evaluation of risks, benefits, and alternative methods for pursuing open-source objectives (pp. 27-28). Center for the Governance of AI. https://cdn.governance.ai/Open-Sourcing_Highly_Capable_Foundation_Models_2023_GovAI.pdf.
27. Tsado, A., & Lee, C. (2024, November 12). Only five percent of Africa’s AI talent has the compute power it needs. United Nations Development Programme. https://www.undp.org/digital/blog/only-five-percent-africas-ai-talent-has-compute-power-it-needs.
28. Wodecki, B. (2023, November 15). Open-source vs closed models: The true cost of running AI. AI Business. https://aibusiness.com/nlp/open-source-vs-closed-models-the-true-cost-of-running-ai; Swedish International Development Cooperation Agency, International Development Research Centre, Artificial Intelligence for Development, & Genesis. (2024). AI in Africa: The state and needs of the ecosystem (p. 11). Diagnostic and solution set for compute. https://idl-bnc-idrc.dspacedirect.org/items/87b716eb-b7b3-458c-ba69-ae96ed0bd4ae.
29. Swedish International Development Cooperation Agency, International Development Research Centre, Artificial Intelligence for Development, & Genesis. (2024). AI in Africa: The state and needs of the ecosystem (p. 11). Diagnostic and solution set for compute. https://idl-bnc-idrc.dspacedirect.org/items/87b716eb-b7b3-458c-ba69-ae96ed0bd4ae.
30. Swedish International Development Cooperation Agency, International Development Research Centre, Artificial Intelligence for Development, & Genesis. (2024). AI in Africa: The state and needs of the ecosystem (p. 11). Diagnostic and solution set for compute. https://idl-bnc-idrc.dspacedirect.org/items/87b716eb-b7b3-458c-ba69-ae96ed0bd4ae.
31. Clynch, H. (2025, January 30). DeepSeek’s cheaper AI claims raise African hopes. African Business. https://african.business/2025/01/technology-information/deepseeks-cheaper-ai-claims-raise-african-hopes.
32. International Energy Agency. (n.d.). Energy system of Africa. https://www.iea.org/regions/africa.
33. World Bank Group. (n.d.). Access to electricity (% of population) – Sub-Saharan Africa. https://data.worldbank.org/indicator/EG.ELC.ACCS.ZS?locations=ZG.
34. Westgarth, T., Garson, M., Crowley-Carbery, K., Otway, A., Bradley, J., & Mökander, J. (2024, November 18). State of compute access 2024: How to navigate the new power paradox (p. 4). Tony Blair Institute for Global Change. https://institute.global/insights/tech-and-digitalisation/state-of-compute-access-2024-how-to-navigate-the-new-power-paradox.
35. Statista. (2024). Share of internet users in Africa as of January 2024, by country. https://www.statista.com/statistics/1124283/internet-penetration-in-africa-by-country/.
36. GSMA. (2024, July). AI for Africa: Use cases delivering impact (p. 24). https://www.gsma.com/solutions-and-impact/connectivity-for-good/mobile-for-development/wp-content/uploads/2024/07/AI_for_Africa.pdf.
37. Hao, K. (2019, June 21). The future of AI research is in Africa. MIT Technology Review. https://shorturl.at/C1Tw2.
38. Emerging Technology Observatory. (2024, April 3). The state of global AI safety research. https://eto.tech/blog/state-of-global-ai-safety-research/.
39. Strathmore University Centre for Intellectual Property and Information Technology Law. (2023). The state of AI in Africa report 2023 (pp. 16-19). https://cipit.strathmore.edu/state-of-artificial-intelligence-in-africa-2023-report/.
40. Chege, G. (2024, September 27). The AU Continental AI Strategy: Concrete safety proposals or high-tech hype? ILINA. https://ilinaprogram.org/2024/09/27/the-au-continental-ai-strategy-concrete-safety-proposals-or-high-tech-hype/.
41. Chong, D., & Druckman, J. N. (2007). Framing theory. Annual Review of Political Science, 10(1), 104. https://doi.org/10.1146/annurev.polisci.10.072805.103054.
42. Gay & Lesbian Alliance Against Defamation (GLAAD), & Movement Advancement Project (MAP). (2008, January). The art and science of framing an issue (p. 5). https://www.lgbtmap.org/effective-messaging/art-and-science-of-framing-an-issue.
43. African Union. (2024, August 9). Continental Artificial Intelligence Strategy.
44. Moyer, M. W. (2022, February 1). Schoolkids are falling victim to disinformation and conspiracy fantasies. Scientific American. https://www.scientificamerican.com/article/schoolkids-are-falling-victim-to-disinformation-and-conspiracy-fantasies/.
45. European Network for AI Safety. (n.d.). Home. European Network for AI Safety. https://www.enais.co/.
46. GSMA. (2024, July). AI for Africa: Use cases delivering impact (p. 70). https://www.gsma.com/solutions-and-impact/connectivity-for-good/mobile-for-development/wp-content/uploads/2024/07/AI_for_Africa.pdf.
47. Artificial Intelligence for Development (AI4D) programme. (n.d.). About. https://www.ai4d.ai/about.
48. Dehghani, M., & Yazdanparast, Z. (2023, October 13). From distributed machine to distributed deep learning: A comprehensive survey. Journal of Big Data, 10, 158, 2. https://doi.org/10.1186/s40537-023-00769-4.
49. Bayuo, B., & Mwaya, J. (2024, May 1). The Missing Piece in Africa’s AI Blueprint: The Computing Conundrum. African Center for Economic Transformation. https://acetforafrica.org/research-and-analysis/insights-ideas/articles/the-missing-piece-in-africas-ai-blueprint-the-computing-conundrum/.
50. Anthropic. (2023, October 4). Challenges in evaluating AI systems. Anthropic. https://www.anthropic.com/research/evaluating-ai-systems.
51. Anthropic. (2023, October 4). Challenges in evaluating AI systems. Anthropic. https://www.anthropic.com/research/evaluating-ai-systems.
52. Podar, H., Baffour, J. A., Ajibona, O., Klaits, A., Alaiashy, O., Kailash, K. S., Sampath, S., Uddin, P., Haber, E., Soataliyeva, S., & Gitobu, C. (2025). Bridging the International AI Governance Divide: Key Strategies for Including the Global South (p. 20). Encode Justice Report. https://encodeai.org/wp-content/uploads/2025/01/Encode-Justice-Report_Safety-Summit_Global-North.pdf.
53. Chaudhry, H. (2024, July 24). AI testing mostly uses English right now. That’s risky. Time. https://time.com/7001812/ai-testing-english-language-risks-essay/.
54. Chaudhry, H. (2024, July 24). AI testing mostly uses English right now. That’s risky. Time. https://time.com/7001812/ai-testing-english-language-risks-essay/.
55. Dennis, C., Manning, S., Clare, S., Wu, B., Effoduh, J. O., Okolo, C. T., Heim, L., & Garfinkel, B. (n.d.). Ways forward for global AI benefit sharing (p. 3). Open Review. https://openreview.net/pdf?id=St6azqVuqs; This could mean sharing AI development resources, model components and access, scientific discoveries, AI end products and the financial proceedings from AI – Heim, L. (2024, September 28). AI benefit sharing options. XYZ. https://shorturl.at/6vy5N.
56. Bremmer, I., & Suleyman, M. (2023, August 16). The AI power paradox. Foreign Affairs. https://shorturl.at/anm9r.
The Brookings Institution is committed to quality, independence, and impact.
We are supported by a diverse array of funders. In line with our values and policies, each Brookings publication represents the sole views of its author(s).
Commentary
Is open-access AI the great safety equalizer for African countries?
February 21, 2025