
Commentary

Smart AI regulation strategies for Latin American policymakers

May 22, 2025


  • Latin America has fewer legacy systems, growing tech ecosystems, and a strong tradition of social rights-based regulation. These conditions make the region a natural sandbox for inclusive and forward-looking AI governance.  
  • In a recent study, researchers lay out the key ingredients and essential steps for smart AI regulation from a Latin American viewpoint. 
"Inteligencia Artificial" (Artificial Intelligence) is seen on a sign at the "IT Joven" technology fair in Buenos Aires, Argentina, in April 2023.
"Inteligencia Artificial" (Artificial Intelligence) is seen on a sign at the "IT Joven" technology fair in Buenos Aires, Argentina, in April 2023. DPA Picture Alliance/Florencia Martin

Generative artificial intelligence (AI) is rapidly reshaping economies, industries, and public services. Its transformative potential is vast—but so are the risks. If left unchecked, AI can deepen inequalities, erode privacy, and widen digital divides. Importantly, regulation is not just a safeguard but a catalyst for development. Countries with clear frameworks—like the U.K. in financial technology and cybersecurity—tend to attract more investment and innovation. Regulation is also a geopolitical asset: Nations that define strong, transparent standards can shape global rulemaking and strengthen their position in tech diplomacy. 

AI regulation has evolved through three overlapping stages: 

  1. Ethical guidelines: Responsible tech principles (e.g., the OECD's AI Principles, UNESCO's Recommendation on the Ethics of Artificial Intelligence).  
  2. National legislation: Formal regulatory frameworks (e.g., the EU AI Act, NIST's AI Risk Management Framework). 
  3. Regional standards: Coordinated governance initiatives (e.g., the Global Partnership on AI, the EU-U.S. Trade and Technology Council).1  

Crucially, AI regulation does not move in a straight line. In January 2025, the United States took a decisive turn toward deregulation when President Trump signed an executive order that revokes earlier safety-focused guidance and tasks federal agencies with prioritizing innovation, national security, and economic competitiveness over precautionary oversight. This shift positions the U.S. as a counterweight to the EU’s more restrictive AI Act, creating a regulatory divergence that could deepen global fragmentation. Indeed, most multilateral AI initiatives—such as the Organization for Economic Co-operation and Development (OECD) principles or the G7 Hiroshima Process—remain voluntary, reinforcing the importance of regional frameworks that can both innovate and coordinate. 

Latin American countries are beginning to develop AI regulations inspired by global benchmarks, but against a flurry of global regulatory milestones in 2024 and early 2025, progress in the region has been slow. Despite several initiatives—including two regional AI summits in 2024, Brazil’s dispute with Meta, and bills in Brazil and Chile—the region lags behind developed countries and China in AI preparedness, according to a recent International Monetary Fund (IMF) index. This underscores the importance of early regulatory discussions—not only to address economic, privacy, and security concerns but also to prevent regulatory arbitrage through a harmonized framework. 

Yet timing is, in some respects, on Latin America’s side. Without the burden of legacy systems or entrenched regulatory regimes, the region can leapfrog into governance models that reflect local constraints while aligning with global standards. 

Smart AI regulation in Latin America and the Caribbean (LAC)

Designing smart AI regulation requires policymakers to strike a double balance—safeguarding rights without stifling innovation and crafting policies that are both ambitious and enforceable. To guide this effort, our recent study proposes a four-part taxonomy that reflects both global practices and Latin America’s unique context. These ingredients and steps are also largely valid for other developing economies. 

Table 1

In addition to the taxonomy above, policymakers in Latin America must address a set of region-specific implementation challenges. 

Effective regulation requires more than good laws—it needs capable institutions. Many countries in the region lack the technical expertise and operational resources to audit AI systems, enforce compliance, or assess algorithmic risks. Building this capacity is essential. Governments should establish national AI safety units to oversee high-risk systems, train regulators in AI auditing and risk modeling, and create public-private sandboxes to test new applications in real-world conditions.

Latin America must also address the market dynamics of AI development. Without targeted support, small and medium enterprises (SMEs) may be crowded out by dominant global tech players. Reducing compliance burdens, offering technical assistance, and fostering open-source ecosystems can help level the playing field and ensure that innovation emerges from across the region—not just from large incumbents. 

AI systems must also reflect Latin America’s socioeconomic and cultural realities. Flawed designs risk reinforcing inequalities in credit access, health care, education, and justice. Credit algorithms, for example, should incorporate alternative data—such as utility or mobile payment histories—to avoid penalizing the underbanked. In economies with high informality, AI systems must be designed not to exclude those who operate outside formal financial or employment systems, drawing on inclusive eligibility criteria and non-traditional data sources. In health care, diverse datasets can help prevent diagnostic bias in rural or underserved communities. In education, AI must not entrench privilege by relying solely on data from elite schools. 

Cultural and linguistic diversity should be embedded in AI design. Systems built primarily in English may be inaccessible across the region’s multilingual populations. Ensuring fairness requires deliberate efforts to accommodate these differences. Transparency, ongoing auditing, and public oversight are also critical to maintaining trust in AI, especially in high-stakes domains like justice and democracy. As generative AI intersects with elections and political discourse, transparent oversight mechanisms will be key to protecting democratic institutions and countering disinformation. 

Data sovereignty is another key concern. With much of the region’s data infrastructure controlled externally, governments must find ways to protect sensitive personal and public information. One promising model is the U.K.’s National Data Library, which uses synthetic data to balance privacy with research access. Latin American governments could explore similar approaches to support innovation while respecting data protection laws.

But national strategies alone are not enough. A fragmented regulatory landscape could invite jurisdictional arbitrage and undermine trust. Regional harmonization—through shared data standards, interoperable governance protocols, and collaborative oversight—can help create consistency across borders and build a stronger collective voice in global AI debates. Equally important, shared technical standards—including for auditing, explainability, and data quality—can give Latin America a greater voice in shaping how global AI is built, not just how it is regulated. 

Finally, regional cooperation will be crucial. A Latin American AI governance network—modeled on entities like the Pan American Health Organization—could foster regulatory harmonization, share technical capacity, and support joint innovation pilots. 

And given how rapidly AI is evolving—and how unpredictable its trajectory remains—regulation must be flexible, iterative, and continuously updated in response to new developments. 

A preliminary road map 

To translate these priorities into action, we propose a phased road map for Latin America that builds institutional capacity, engages citizens, and adapts over time. 

Table 2

Five priorities for smart AI regulation in LAC 

Latin America stands at a pivotal moment: With its resilience, creativity, and lack of entrenched legacy systems, it can leapfrog into a new model of AI governance. To do so, five priorities should guide regional efforts: 

  • Enable innovation through regulatory sandboxes that reduce uncertainty and support responsible experimentation. 
  • Promote inclusion by investing in AI literacy and supporting open-source tools that broaden access. As open-source models proliferate, governments must also grapple with their dual-use risks—balancing openness with appropriate safeguards to prevent misuse. Investing in youth-focused AI literacy and upskilling programs is also key in preparing the region’s next generation of workers. 
  • Institutionalize safety by establishing national or regional AI safety institutes to oversee high-risk systems. 
  • Prevent monopolization by reducing regulatory burdens for SMEs and promoting competitive neutrality. 
  • Advance regional harmonization via shared data standards and cross-border regulatory frameworks. 

Future-proofing AI governance requires flexible, adaptive standards and participatory mechanisms. Citizen consultations, co-creation workshops, interdisciplinary advisory councils, and continuous stakeholder engagement can help ensure that regulation evolves alongside the technology, aligns with local values, and retains legitimacy across political transitions. Ultimately, smart AI regulation should be viewed as a development strategy—a way to unlock inclusive growth, modernize public services, and strengthen democratic institutions. 

Regulating AI in its exponential phase is a bit like trying to board a moving train. What matters as we run alongside it isn’t where the door is now, but where it will be when we leap. Anticipating and adapting to change is a critical part of economic development—and a must in times of technological change. 

Rather than merely catching up, Latin America can lead—not by replicating external models, but by prototyping a smart, inclusive, adaptable, and development-focused approach to AI governance that can inspire the Global South and beyond. 
