Market concentration implications of foundation models: The Invisible Hand of ChatGPT

Editor's note:

This is a Brookings Center on Regulation and Markets working paper.

Executive Summary

Foundation models are large artificial intelligence (AI) models that can be adapted for use in a wide range of downstream applications. As foundation models grow increasingly capable, they become useful across an expanding range of economic functions and industries. Ultimately, the potential market for foundation models may encompass the entire economy. This implies that the stakes for competition policy are tremendous.

We find that the market for cutting-edge foundation models exhibits a strong tendency towards market concentration: The fixed costs of training a foundation model are high, and the marginal cost of deploying it is very low. This means that there are large economies of scale in operating foundation models—the average cost of producing one unit of output declines the greater the scale of deployment. There are also some economies of scope—it is cheaper for one AI company to produce multiple foundation models for different uses than for multiple AI companies to cater to these uses separately. First-mover advantages in the market for foundation models are high, although they require large ongoing investments in product deployment, marketing, and distribution. Other barriers to entry, including scarce inputs such as talent, data, and computational power, as well as intellectual property protections, also create forces that point towards natural monopoly.
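To make the economies-of-scale logic concrete, consider a stylized cost illustration (the symbols below are illustrative and not taken from the paper): let F denote the fixed cost of training a model, c the small marginal cost of serving one query, and q the number of queries served. The average cost per query is then

$$ AC(q) \;=\; \frac{F + c\,q}{q} \;=\; \frac{F}{q} + c, \qquad \frac{dAC}{dq} \;=\; -\frac{F}{q^{2}} \;<\; 0, $$

so average cost falls toward the marginal cost c as the scale of deployment q grows—the textbook condition under which a single large producer can serve the market at lower cost than several smaller ones.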

One particular concern for competition policy is that the producers of foundation models could extend their market power vertically into downstream uses, where competition would otherwise ensure lower prices and better services for users. Producers may also erect barriers to entry or engage in predatory pricing, which may make the market for foundation models less contestable. The negative implications of excessive concentration and lack of contestability in the market for foundation models include the standard monopoly distortions, ranging from restricted supply and higher prices to the resulting concentration of economic power and inequality. They may also include the systemic risks and vulnerabilities that arise if a single model or a small set of models is deployed extensively throughout the economy, and they may give rise to growing regulatory capture. On the other hand, concentration in the market for foundation models may allow producers to better internalize the potential safety risks of such systems, including the risks of accidents and malicious use, since competitive pressures might otherwise induce producers to deploy AI products more quickly and invest less in safety research. The rise of open-source models mitigates concerns about market concentration but comes with its own set of downsides, including growing safety risks and the potential for abuse by malicious actors.

We conclude that regulators are well-advised to adopt a two-pronged strategy in response to these economic and safety factors:

First, as the market for the most capable foundation models has characteristics that point towards market concentration, it is important to ensure that it remains contestable and that incumbents do not engage in strategic behavior to deter innovation and the entry of new firms, e.g., via predatory pricing or strategic lobbying. Regulators must pay particular attention to risks arising from vertical integration with both upstream and downstream producers. The equilibrium market structure may be a single producer, or a small number of producers, of leading foundation models, whose models many actors across different sectors of the economy would then fine-tune or deploy in downstream applications. Attention also needs to be paid to keeping the market for downstream applications competitive.

Second, as natural monopolies or oligopolies, producers of the most advanced foundation models may need to be regulated akin to public utilities. Since market forces are blunted in the presence of market concentration, regulators need to ensure that users benefit from reasonable pricing, high quality standards (including safety, privacy, non-discrimination, reliability, and interoperability standards), and disclosure and equal access rights. Moreover, the regulators of foundation models need to recognize the growing systemic importance of these systems as they are deployed in increasingly critical roles throughout the economy.

Regulators should also ensure that AI products and services compete on a level playing field with non-AI products and services, including human-provided services. Sectoral regulations on liability, professional licensing, and professional ethics should apply equally, as appropriate, to both AI and non-AI solutions. For instance, hiring and credit decisions must be subject to the same rules against discrimination and bias, no matter whether they are made by humans or by AI. Likewise, financial advice should be subject to similar kinds of regulation regardless of whether it is provided by humans or by AI. This may reduce AI use in some sectors while avoiding the degradation of service standards through the use of AI.

Download the full working paper here.

Acknowledgements and disclosures

Microsoft, Google, Meta, and Amazon provide support to The Brookings Institution. The findings, interpretations, and conclusions in this report are not influenced by any donation. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment.

Jai Vipra was supported by a fellowship with the Centre for the Governance of AI. Other than the aforementioned, the authors did not receive financial support from any firm or person for this article or from any firm or person with a financial or political interest in this article. No author is currently an officer, director, or board member of any organization with a financial or political interest in this article.

The Brookings Institution is financed through the support of a diverse array of foundations, corporations, governments, and individuals, as well as an endowment. A list of donors can be found in our annual reports published online here. The findings, interpretations, and conclusions in this report are solely those of its author(s) and are not influenced by any donation.