Commentary

AI safety needs Southeast Asia’s expertise and engagement

February 14, 2025

In recent years, high-profile international coordination on the governance and safety of artificial intelligence (AI) has grown, but often via club-based processes, such as the G7 Hiroshima Process, the global AI summits at Bletchley Park, Seoul, and Paris, and the OECD AI principles. A handful of countries are driving decisions around the development of, access to, and safeguards for AI systems, and the effects of those decisions will be felt worldwide.
Many Southeast Asian countries have had limited opportunity to engage in international discussions on AI—while Singapore chaired a session at the Bletchley Park summit, for example, Thailand and Vietnam were not even present. AI could be transformative for Southeast Asia by accelerating scientific research, improving public services, and boosting economic development. However, these benefits come hand-in-hand with heightened safety and security risks: as AI tools proliferate to malicious actors, they could enable more sophisticated scams and offensive cyber operations.
For its part, Southeast Asia is already laying groundwork for AI safety. The Association of Southeast Asian Nations (ASEAN) has released a framework outlining principles for AI governance and an expanded guide focusing specifically on generative AI. The Singapore AI Safety Institute (AISI), part of a larger international AISI network, participated in the first pilot for international testing of foundation models last November with U.S. and U.K. counterparts. There is also growing work in the region on multicultural and multilingual safety, such as the Typhoon2-Safety classifier to safeguard text generation in Thai. This matters because prompts in less commonly used languages can bypass normal safeguards in large language models (LLMs), making a vulnerability in one language a vulnerability everywhere.
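To illustrate how a classifier like this might slot into a deployment, the minimal sketch below screens generated text before it reaches the user. It is an assumption-laden illustration, not the tool’s documented interface: the Hugging Face model identifier, label names, and threshold are all placeholders, and the publisher’s model card should be treated as authoritative.

```python
# Minimal sketch: screening generated Thai text with a safety classifier
# before returning it to the user. The model ID, label names, and
# threshold below are assumptions; consult the actual model card.
from transformers import pipeline

SAFETY_MODEL_ID = "scb10x/typhoon2-safety-preview"  # assumed identifier

classifier = pipeline("text-classification", model=SAFETY_MODEL_ID)

def generate_safely(prompt: str, generate_fn) -> str:
    """Generate a response, then withhold it if the classifier flags it."""
    response = generate_fn(prompt)
    verdict = classifier(response)[0]  # e.g., {"label": "unsafe", "score": 0.97}
    if verdict["label"].lower() == "unsafe" and verdict["score"] > 0.5:
        return "[response withheld by safety filter]"
    return response
```

The wrapper pattern is the same in any language; the point is that the filter must be evaluated in each language users actually write in, since a guardrail that only works in English leaves every other language exposed.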
Global North countries need to do more to engage Southeast Asia around AI risks, not just for the region’s benefit, but for their own as well. Transatlantic international AI safety efforts should partner with Southeast Asian stakeholders for regional expertise and to address unique threats, including organized cybercrime groups. Collaboration can localize safeguards, such as testing safety features in Southeast Asian languages, and expand capacity through compute credit sharing and talent exchange. Civic tech initiatives can also broaden public engagement in AI governance, ensuring the region’s voice in shaping global standards.
What are the consequences of Southeast Asia being left out of the global AI discussion? The phrase “If you’re not at the table, you’re on the menu” comes to mind. A lack of representation on the international stage means that the region, and others like it, will be more vulnerable to risks from frontier AI systems—highly capable, general-purpose AI systems like OpenAI’s o1 reasoning models.
First, while many issues raised around the emerging risks of AI affect the region—such as heightened cybersecurity risks—these discussions are often divorced from local contexts. For instance, a grounded estimate of the human uplift offered by new AI cyber-offensive tools should consider the operational practices and constraints of specific threat actors, such as criminal groups operating scam centers in Myanmar, Cambodia, and Laos. Likewise, assessments of enhanced attacker capabilities should factor in that many countries in the region have more limited cybersecurity resources than North America and Europe.
Second, Southeast Asia is losing out on the opportunity to develop its own technical AI capability and know-how. Participation in cutting-edge R&D and technical standards-setting, through forums like the International Network of AI Safety Institutes, may incentivize domestic AI development. However, this will depend on stable and high-bandwidth internet access, energy supply, and advanced compute. For the region to truly engage, it must build out its local AI talent and infrastructure base.
Third, as access to AI tools becomes increasingly central in governance and societal adaptation to AI risks, solutions must be fit-for-purpose for Southeast Asia. For instance, the use of defensive AI to respond to cybersecurity threats should be localized in terms of focus areas (e.g., scams), with safeguards that work across the region’s main working languages. That is, they should be culturally relevant and accessible. Also, given the likely resource constraints of many Southeast Asian nations, AI tools must be reasonably cost effective.
Beyond government representation in global dialogue around frontier AI, the broader public in Southeast Asia is deeply affected by these policies. By default, technology is something that happens to the public rather than something it helps shape, and low levels of AI literacy in many countries make it less likely that people will advocate for their own interests. Across the region, we cannot assume that existing public processes will meaningfully incorporate public views.
As the Paris AI Summit explored AI safety, the real threats facing Southeast Asia should have been top of mind for attendees. One area to which the region will be particularly sensitive is how AI facilitates online scams and other deceptive practices. Those harms will be felt most keenly in the region, and they will ripple outward to the rest of the world. Global North countries therefore have a direct stake in addressing AI misuse—specifically, when it emboldens malicious actors operating in Southeast Asia.
Across the board, one trend to watch here is the emergence of “AI agents”: systems that can autonomously pursue real-world objectives with minimal human oversight and instruction. Though current LLMs may increase some risks, malicious activities using LLMs today are human-operated first and machine-supported second. More reliable and accessible autonomous agents could allow bad actors to operate at a stepped-up scale, posing novel regional safety challenges.
But to understand the impact of AI in Southeast Asia for now, we need to understand human actors first—in particular, organized crime, terrorist networks, and nation-states.
Criminal networks
Criminal networks engaged in scams and money laundering across Southeast Asia are already a significant problem. Dismissing these networks as just “scammers” can be a disservice to the scale of the problem: One UN report estimates that hundreds of thousands of people are involved as scam center operators, with many trafficked from elsewhere. A United States Institute of Peace report has warned that this industry “could soon rival fentanyl” as a criminal threat to the U.S., with Americans losing an estimated $3.5 billion to these actors in 2023 alone.
Generative AI could worsen the criminal threat, with UN experts already warning that AI is a “powerful force multiplier for criminal activities.” AI makes it easier for scammers to impersonate legitimate actors at greater scale and more convincingly, allowing them to hold multiple simultaneous conversations with victims or to use persuasive emails, voice cloning, and video deepfakes to overcome victims’ natural skepticism. AI agents capable of executing end-to-end scams with relative autonomy could further boost criminals, though current agents are fortunately mediocre: in one proof-of-concept, they successfully executed scams end-to-end only 36% of the time.
Terror groups
Terror groups linked to the Islamic State and al-Qaeda have been a persistent problem for countries like Indonesia and the Philippines in recent decades, and frontier AI could provide operational uplift to groups like these as well. Despite limited evidence of AI’s direct use in terrorist operations, a UN report notes that terrorist groups remain interested in the technology; the Islamic State, for example, recently published a guide on securely using generative AI tools to produce propaganda. Lowering the costs of offensive cyber operations and proliferating offensive cyber capabilities would also make it more feasible for lesser-skilled actors, such as terror groups, to target critical national infrastructure.
Nation-states
ASEAN countries should also worry about AI uplifting other nation-states’ cyber capabilities. Even as Southeast Asian countries play a delicate balancing act between the U.S. and China, they are frequent targets of Chinese cyber espionage, as China is keen to gather diplomatic, military, and economic intelligence on the region. Current AI systems are of limited effectiveness in the cyber domain, but ASEAN should watch closely for improvement and, in particular, keep an eye on how future systems may increase any potential espionage threat from China.
To manage these new risks successfully, Southeast Asia should partner with the U.S. and U.K. and invest in three main thrusts of work: evaluation, localization, and adaptation. On top of this, ensuring a robust talent pipeline and fostering dialogue with the regional public will help the region stay ahead of AI’s evolving challenges.
As AISIs worldwide run more safety evaluations on frontier AI systems, they should be sure to partner with Southeast Asian actors, including local counterparts like the Singapore AISI, to integrate regional expertise. This should apply not just to cultural and linguistic diversity, but also to Southeast Asia’s knowledge of the operating environment for regional threats. For example, safety evaluations on cybersecurity or persuasion should account for the creativity and constraints of organized criminals in Southeast Asia.
Frontier AI developers and AISIs should also work with regional actors to localize safeguards. For example, companies like OpenAI may want to conduct robust testing in Southeast Asian languages to ensure that criminal groups cannot exploit these languages to bypass safety guardrails against misuse. Ultimately, this could lay the foundation for global standards. For instance, international frameworks on agent identifiers (signals of whether you are interacting with an AI agent) could help counter potential misuse for scams; a sketch of how such an identifier might work follows below.
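To make the agent-identifier idea concrete, here is one hedged sketch of what a machine-readable disclosure could look like: a token attached to each outbound request that names the agent and its accountable operator, signed so a counterparty can verify it. Everything here is an illustrative assumption (the header name, claim fields, and shared-secret HMAC scheme), since no such standard exists today.

```python
# Illustrative sketch of a signed "agent identifier" that an AI agent
# could attach to outbound requests, letting counterparties verify they
# are interacting with a disclosed, attributable agent. The header name,
# claim fields, and HMAC scheme are assumptions; no standard exists yet.
import base64
import hashlib
import hmac
import json
import time

def make_agent_identifier(agent_name: str, operator: str, secret: bytes) -> str:
    claims = {
        "agent": agent_name,      # e.g., "acme-support-agent/1.2"
        "operator": operator,     # the accountable deploying organization
        "issued_at": int(time.time()),
    }
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    signature = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + signature

# A counterparty holding the shared secret recomputes the HMAC over the
# payload to verify the identifier before trusting the interaction.
token = make_agent_identifier("demo-agent/0.1", "Example Corp", b"shared-secret")
print({"X-AI-Agent": token})  # hypothetical header name
```

A production framework would more likely rely on public-key signatures, so that anyone can verify an identifier without holding the signing secret; the shared-secret version above is simply the shortest way to show the verify-before-trust flow.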
Developers of defensive AI applications, such as LLM-based vulnerability scanning tools, should also search for ways to expand their business in Southeast Asia by adapting to the regional context. AI tools are expensive to build and deploy, which makes scaling them up in Southeast Asia difficult given challenges like limited public resources and a fragmented digital environment. But as AI brings new risks, it will also bring new opportunities—as the U.S. and U.K. seek to harden their infrastructure against new cyber-AI threats, for example, Southeast Asian governments might be keen to do the same. Western tech companies should remain attuned to opportunities for localization, lest they face disruption by Southeast Asian competitors, much as Grab ousted Uber from the region by better addressing regional needs. (Regional tech companies and governments, meanwhile, might want to assess opportunities to foster such disruptive innovation themselves.)
To support AI safety and security research efforts, Southeast Asian countries should also partner with frontier AI companies and Western countries on issues such as compute credit provision to train local models or undertake safety research. Given the staggering costs of developing frontier AI systems from scratch—training runs for frontier AI models can cost billions of dollars—this option would be economical and would allow countries to engage in this work more quickly. Additional support could be provided to develop and implement talent programs and enable talent transfer, such as exchange programs for technical doctoral degrees and senior researchers.
Finally, there is an opportunity to develop mechanisms for the Southeast Asian public to participate in AI policy discussions. Here, “civic tech” can be just as important as security and safety for protecting the public interest. One early experiment in using digital tools to enhance collective decision-making on tech policy, for example, was a pilot run by vTaiwan, Chatham House, and the AI Objectives Institute that leveraged an online deliberation platform, Polis, and LLMs to map consensus around various AI governance issues.
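For readers unfamiliar with that approach, the toy sketch below shows the core consensus-mapping step in simplified form, with invented statements, votes, and cluster counts: participants’ agree/disagree votes are clustered into opinion groups, and statements that every group leans toward agreeing with are surfaced as candidate consensus. The actual Polis pipeline is considerably richer, and the pilot additionally used LLMs to summarize themes.

```python
# Toy sketch of Polis-style consensus mapping: participants vote on
# statements (+1 agree, -1 disagree, 0 pass), voters are clustered into
# opinion groups, and statements every group agrees with are surfaced.
# The statements, votes, and cluster count are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

statements = ["Fund AI literacy programs", "Ban AI-enabled scam tools",
              "Require AI agent disclosure", "Pause frontier training"]
# Rows are participants; columns correspond to the statements above.
votes = np.array([
    [ 1,  1,  1, -1],
    [ 1,  1,  1, -1],
    [-1,  1,  1,  1],
    [-1,  1,  0,  1],
])

# Group voters into two opinion clusters based on their voting patterns.
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(votes)

# A statement is a candidate consensus point if every opinion group
# leans toward agreement on it (positive mean vote in each cluster).
for j, text in enumerate(statements):
    group_means = [votes[groups == g, j].mean() for g in set(groups)]
    if all(m > 0 for m in group_means):
        print("Consensus candidate:", text)
```

The value of this framing for policymakers is that it distinguishes genuinely broad agreement from positions that are merely loud within one camp.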
Using civic tech tools and trialing new processes can produce broader and sharper democratic inputs to guide decision-makers. Lowering the barriers to participation in AI policy conversations also means the interested public becomes more familiar with, and educated about, the issues likely to affect them most. Unlike during the rise of social media platforms, the Southeast Asian public should have a voice in shaping AI governance.
As the U.S. and U.K. dominate the conversation around AI safety, it is important to consider Southeast Asia’s role in shaping the norms, values, and practices applied to existing and emerging models. With an all-hands-on-deck strategy, Southeast Asia can help steer AI toward safer, clearer waters.