Commentary

Is open-access AI the great safety equalizer for African countries?

Grace Chege, Junior Research Scholar, ILINA Program

February 21, 2025


  • Given disparities in access to resources needed to develop AI, open-access AI has been proposed as a mechanism to accelerate African efforts in AI safety.
  • However, the benefits of open-access AI may remain inaccessible to African developers, given higher-income countries’ interest in controlling access to models and other computing resources.
  • To maximize the opportunities from open-access AI and improve AI safety for African contexts, African developers should increase international collaborations, strengthen local AI safety networks, and create standardized model evaluation practices.
A worker manufactures Open G smartphones, which can speak local Ivorian languages, at a factory in Grand Bassam, Ivory Coast, July 22, 2022. REUTERS/Luc Gnago

It is no secret that developing artificial intelligence (AI) requires copious resources. Epoch AI estimates that if current trends persist, the largest training runs for frontier AI models could cost more than $1 billion by 2027. Despite limited resources, African countries are still finding ways to innovate and contribute to the global AI landscape. While African countries may not currently lead the charge in global AI development, they cannot afford to ignore AI safety: they are particularly vulnerable to AI-related harms, and that vulnerability will only deepen as AI capabilities grow.

Different instantiations of open-access AI have, therefore, been suggested as a practical means for global majority contexts to participate in AI safety. This essay defines open-access AI as any form of model-sharing, including staged releases, cloud-based access, API access, and models with widely available weights. Research contends that open-access AI fosters a more inclusive approach to defining acceptable model behavior. In particular, it advances safety research by allowing model scrutiny by external evaluators and modifications such as safety fine-tuning.

Open-access AI is gaining traction on the continent. For instance, Sub-Saharan Africa’s open-source community is growing, with countries like Rwanda and Nigeria seeing developer numbers increase by more than 45% between 2022 and 2023. However, the progress of this expanding developer community, which includes AI safety researchers, could be hindered by the constraints of open-access AI in Africa. This essay examines the limitations of open-access AI as an approach to AI safety in Africa. It explains that AI safety research that leverages open-access AI may face obstacles due to the dependency dynamics between model-sharers and African safety developers, as well as systemic developmental challenges in Africa. It then recommends how African safety researchers can strategically maximize the potential of open-access AI.

Dependency dynamics 

Open Future claims that open approaches flourish when they result from external incentives rather than voluntary decision-making. Contrary to this, AI developers unilaterally control which AI components to share and who can gain access. Sienka Dounia, an African researcher specializing in AI safety and alignment, explained that one of the challenges in his work is the skepticism he encounters when requesting access to models. Moreover, developers retain the final say on whether to maintain proprietary control over their models. For instance, although OpenAI was founded on principles of transparency, it has since retreated from its stance on openness. In 2024, the French company Mistral, a strong proponent of open-access AI in Europe, swiftly followed suit.

More concerning, leading AI countries are trending toward nationalistic AI policies in pursuit of AI supremacy. Unfortunately, developing countries stand to be the biggest losers, as nationalist strategies cut them off from pertinent AI resources. At the same time, given AI’s increasing role in national security, governments will likely become more cautious about sharing proprietary model information.

The “Framework for Artificial Intelligence Diffusion” by the U.S. Commerce Department’s Bureau of Industry and Security (BIS) is a testament to this. The rule introduces a global licensing requirement for the weights of models trained using more than 10²⁶ floating-point operations. In the interest of U.S. national security, the regulation restricts the export, re-export, or in-country transfer of model weights. However, it creates an exception for 18 key U.S. allies, none of which are African countries, that are considered low-risk destinations.

While the new export controls target only advanced models, they signal a tightening grip on model-sharing as AI progresses. Further, African governments may struggle to meet the licensing requirements, potentially limiting future access to U.S. model weights. For instance, Africa is presently ill-equipped to enforce safeguards against the risks associated with advanced AI, such as chemical, biological, radiological, and nuclear threats, which are an area of concern in the BIS rule.

China’s extensive involvement in AI development in Africa also raises questions about Africa’s future license eligibility. The BIS regulation emphasizes supply chain independence from Tier 3 countries, which include China, for U.S. AI partners. Yet China is the leading exporter of AI technologies to Africa, with the majority of African countries benefiting from deals like the Belt and Road Initiative (BRI) agreements. Highly influential Chinese companies, such as Alibaba and Huawei, are expanding their presence on the continent, with Huawei announcing in 2023 plans to invest $430 million in AI infrastructure there over the next five years. Some researchers are wary that such investments may allow China to assert influence and limit African collaboration with Western partners.

Therefore, African AI safety developers’ reliance on open-access AI further entrenches power imbalances, leaving African developers at the will of model producers and the countries in which they are based. This not only compromises African countries’ autonomy, but also undermines developers’ ability to meaningfully contribute to AI safety research.

The boundaries of open-access AI in Africa 

Open-access AI is generally credited with cutting development costs. DeepSeek-R1, for example, rivals frontier proprietary models at a fraction of the inference cost (approximately 3%). Despite this apparent progress, the resources required to utilize open-access AI remain out of reach in many African contexts. Access to critical AI infrastructure, such as graphics processing units (GPUs) and cloud computing services, remains a crucial challenge. A study of Zindi Africa data scientists revealed that only 1% have “on-premises” access to GPUs. Another 4% pay for cloud access to GPUs, but most can only afford around $1,000 per month, which translates to approximately two hours of daily usage of an older Nvidia A100 GPU. The remaining 95% rely on standard laptops without GPUs or access GPUs via free cloud-based tools, which impose cumbersome usage restrictions. Although GPU access is a global conundrum, Africa’s situation is dire. For perspective, a report highlighted the disproportionately high cost of GPUs relative to GDP per capita in different African countries: the cost of an Nvidia A100 GPU was 22% of GDP per capita in South Africa, 75% in Kenya, and 69% in Senegal.
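A back-of-the-envelope check makes the affordability figures above concrete. The sketch below simply derives the hourly rate implied by the cited numbers (a 30-day month is an assumption; the resulting rate is an inference from the article’s figures, not a quoted cloud price):

```python
# Sanity check: what hourly A100 cloud rate do the cited figures imply?
monthly_budget_usd = 1_000   # what the surveyed 4% can typically afford per month
daily_usage_hours = 2        # approximate daily A100 usage the article cites
days_per_month = 30          # assumption for this rough calculation

implied_hourly_rate = monthly_budget_usd / (daily_usage_hours * days_per_month)
print(f"Implied A100 cloud rate: ~${implied_hourly_rate:.2f}/hour")
# → Implied A100 cloud rate: ~$16.67/hour
```

At roughly $17 per GPU-hour, a single month of modest usage consumes a budget that, as the GDP-per-capita comparison shows, is far harder to sustain in most African economies than in high-income ones.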

Furthermore, Africa’s access to foundational utilities for AI, such as energy and digital infrastructure, is still highly limited. Africa accounts for only 6% of global energy use. As of 2022, only 51.5% of the population in Sub-Saharan Africa had access to electricity. Moreover, Sub-Saharan African countries experience an annual average of 87 blackouts, in stark contrast to North America’s average of one. Relatedly, internet access varies across Africa, with low-income countries like South Sudan, Burundi, and the Central African Republic having penetration rates below 13%. More optimistically, momentum for 5G is growing on the continent: Sub-Saharan Africa is expected to have 226 million 5G connections by 2030, an adoption rate of 17%.

Additionally, African developers risk exclusion from global safety research networks, where advancements in AI knowledge are collaboratively exchanged, due to factors such as financial constraints and stringent visa requirements. The global inattention to AI safety research exacerbates the problem: AI safety research constituted a meager 2% of the broader AI research landscape between 2017 and 2022. In Africa, AI funding appears to be directed primarily at AI solutions for developmental challenges such as healthcare, agriculture, and education. Continental and national policy suggests that this emphasis is likely to persist.

In sum, African AI safety developers seeking to leverage open-access AI confront a multitude of deep-rooted resource constraints. Meanwhile, despite considerable AI investment flowing to the continent, the prospects for African AI safety developers remain dim due to the region’s limited interest in AI safety.

Framing AI safety issues 

Framing theory asserts that issues can be construed from different perspectives and as having implications for multiple values. Linking an issue to people’s values fosters a sense of ownership, particularly when people perceive that they have a stake. In Africa, harnessing AI for inclusive development is highly valued, given the continent’s socio-economic state and history of subjugation by dominant economies. To address resource scarcity, African AI safety researchers could secure AI funding allocated for developmental challenges by emphasizing the risks that AI safety concerns pose to these solutions. African developers could, for example, make a case for collaboration with AI stakeholders offering education solutions by underscoring the detrimental effects that misinformation and disinformation capabilities could have on system performance, student learning experiences, and ultimately, organizational reputation. This would necessitate further research on effective framing.

African AI safety research collaboration 

In tandem, African developers should establish safety research networks. Networks like the European Network for AI Safety exemplify the power of collaboration. Similarly, in Africa, consortium-based collaboration has been proposed as a means to pool resources and build general AI capacity. Networks such as the Artificial Intelligence for Development (AI4D) program have been instrumental in supporting African AI researchers, innovators, and policymakers by providing crucial funding, resources, and collaborative opportunities to drive AI research. Such initiatives could serve as an avenue to coordinate safety efforts, for instance through distributed machine learning. This approach could allow African researchers to scale their algorithms to large datasets, share computational resources, save time, and minimize redundancy. Moreover, the African Union (AU) could play a pivotal role in fostering collaboration on AI safety. Some researchers propose that the AU and its member states establish mechanisms for open computing access. Additionally, the AU could further support AI safety by promoting shared research initiatives across its member states.

Developing context-specific African AI safety  

African safety researchers could also identify “unmet needs” in model evaluation and develop niche expertise around them to improve their chances of gaining model access. In this way, African safety researchers could not only address gaps in model evaluation but also position themselves as key contributors to the global AI ecosystem, making it more likely that external stakeholders will provide model access in exchange for valuable insights and expertise. For instance, Anthropic engages crowd workers to red-team and adversarially probe its models. One shortcoming of this approach is that evaluations can be inconsistent due to the varying characteristics of human evaluators. Building on research networks, African researchers could establish organizations with standardized policies for model evaluations to mitigate this problem.

Moreover, African countries and developers possess distinct attributes that may make them particularly well-suited for specific tasks. For example, Africa’s rich cultural, linguistic, and demographic diversity could make it ideally suited for conducting robustness testing. African safety researchers could therefore be instrumental in designing “stress tests” that simulate African scenarios for AI models, which would be useful in building more resilient models. Another opportunity arises in testing multilingual models. Despite the increase in multilingual models, testing is mostly done in English. Hamza Chaudhry warns that this poses the risk of AI deploying its dangerous capabilities, such as misinformation and disinformation, in non-English languages. It is therefore necessary that model testing also happen in non-English languages. Establishing teams of African evaluators that reflect Africa’s linguistic diversity could alleviate this concern, and such teams could serve as another area of expertise that incentivizes model producers to share access with African AI safety researchers.

Conclusion 

While African AI safety researchers can take up small-scale interpersonal interventions to bypass open-access AI obstacles, open-access AI alone cannot ensure that African countries meaningfully participate in AI safety governance. Comprehensive systemic interventions, such as global AI benefit-sharing commitments, are essential. African countries could also consider negotiating for model access in multilateral AI agreements with leading AI partners. Experts predict that AI will cause a seismic shift unlike that of any previous revolutionary technology. Given the socioeconomic power AI confers, it is paramount that the power to govern it be fairly distributed.

The Brookings Institution is committed to quality, independence, and impact.
We are supported by a diverse array of funders. In line with our values and policies, each Brookings publication represents the sole views of its author(s).