Toward international cooperation on foundational AI models

An expanded role for trade agreements and international economic policy


Executive summary

Foundational AI presents new opportunities for social and economic flourishing, but also risks of harm

The development of artificial intelligence (AI) presents significant opportunities for economic and social flourishing. The release of foundational models such as the large language model (LLM) GPT-4 in early 2023 captured the world's attention, heralding a transformation in our approach to work, communication, scientific research, and diplomacy. According to Goldman Sachs, LLMs could raise global GDP by 7 percent and lift productivity growth by 1.5 percentage points over 10 years. McKinsey found that generative AI such as GPT-4 could add $2.6 trillion to $4.4 trillion each year across more than 60 use cases spanning customer operations, marketing and sales, software engineering, and R&D. AI is also reshaping international trade in various ways, and LLMs bolster this trend. The upsides of AI are significant, and achieving them will require developing responsible and trustworthy AI. At the same time, it is critical to address the potential risk of harm not only from conventional AI but also from foundational AI models, which in many cases can either magnify existing AI risks or introduce new ones.

For example, LLMs are trained on data that encodes existing social norms, with all their biases and discrimination. LLMs also create information hazards by providing information that is true but can be used to harm others, such as instructions for building a bomb or committing fraud. A related challenge is preventing LLMs from revealing personal information about individuals, which poses risks to privacy. In other cases, LLMs will magnify existing risks of harm, such as misinformation, which is already a problem on online platforms, or increase the incidence and effectiveness of crime. LLMs may also introduce new risks, such as risks of exclusion where LLMs are unavailable in some languages.

International cooperation on AI is already happening in trade agreements and international economic forums

Many governments are either regulating AI or planning to do so, and the pace of regulation has increased since the release of GPT-4. However, regulating AI to maximize the upsides and minimize the risks of harm without stifling innovation will be challenging, particularly for a rapidly evolving technology that is still in its relative infancy. Making AI work for economies and societies will require getting AI governance right. Deeper and more extensive forms of international cooperation can support domestic AI governance efforts in a number of ways: by facilitating the exchange of AI governance experiences, which can inform approaches to domestic AI governance; by addressing the externalities and extraterritorial impacts of domestic AI governance, which can otherwise stifle innovation and reduce opportunities for the uptake and use of AI; and by finding ways to broaden global access to the computing power and data needed to develop and train AI models.

Free trade agreements (FTAs) and, more recently, digital economy agreements (DEAs) already include commitments that increase access to AI and bolster its governance. These include commitments to enable cross-border data flows, avoid data localization requirements, and not require access to source code as a condition of market access, all subject to exception provisions that give governments the policy space to pursue other legitimate regulatory goals such as consumer protection and guarding privacy. Some FTAs and DEAs, such as the New Zealand-U.K. FTA and the Digital Economy Partnership Agreement, include AI-specific commitments focused on developing cooperation and alignment, including in areas such as AI standards and mutual recognition agreements.

With AI now a focus of discussions, international economic forums are important venues for developing international cooperation on AI. These include the G7, the U.S.-EU Trade and Technology Council (TTC), and the Organization for Economic Cooperation and Development (OECD), as well as the Forum for Cooperation on Artificial Intelligence (FCAI), a Track 1.5 dialogue among government, industry, and civil society jointly led by Brookings and the Centre for European Policy Studies. Initiatives to establish international AI standards in global standards development organizations (SDOs) such as the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) are also pivotal.

But more is needed—where new trade commitments can support AI governance

These developments in FTAs, DEAs, and international economic forums, while an important foundation, need to be developed further to fully address the opportunities and risks of foundational AI models such as LLMs. International economic policy for foundational AI models can use commitments in FTAs and DEAs and outcomes from international economic forums such as the G7 and the TTC as mutually reinforcing opportunities for developing international cooperation on AI governance. This can happen as FTAs and DEAs elevate the output of AI-focused forums and standard-setting bodies into trade commitments and develop new commitments as well. FCAI is another forum for exploring cutting-edge AI issues.

The following table outlines key opportunities and risks from foundational AI models and shows how an ambitious trade policy can develop new commitments that would help expand the opportunities of foundational AI models globally and support efforts to address AI risks, including by building on developments in forums such as the G7 and in global SDOs.

Table 1. New commitments in FTAs, DEAs and for discussion in international economic forums