Commentary

What happens when AI companies compete with their customers?

March 12, 2026


  • The big AI companies are moving into the application space, meaning they will begin competing with the AI app developers working in established software companies and in startups.
  • This poses an inherent conflict of interest, and the anticompetitive danger is that the platform operator will degrade or deny service to its rivals while giving itself full or preferential access.
  • U.S. antitrust laws are unlikely to address this—though bans on unreasonable discrimination against similarly situated customers could help promote innovation to the fullest extent possible. 
[Photo: The Google Cloud and Microsoft pavilions at the 2025 Hannover Messe industrial trade fair in Hanover, Germany, March 31, 2025.]

As the big artificial intelligence (AI) companies standardize their general-purpose models, they are rushing to the application level to capture value and recover the enormous infrastructure expenditures needed to train and operate general-purpose AI models. In doing this, the companies are beginning to compete with their own most valuable customers: the AI app developers working in established software companies and in startups. Each of the major AI model providers has an explicit policy allowing it to disconnect competing AI app developers. This creates an inherent—and familiar—conflict of interest that traditional antitrust tools, such as nondiscrimination rules, could mitigate. Congress is unlikely to adopt any protective measures in the current political environment, however, and current antitrust law is unlikely to provide remedies. This leaves AI application developers at risk of losing access to their development platform whenever they threaten the AI app business plans of the big AI model providers.

AI model companies are oligopolists producing commodity products

As internet analyst and investor Mary Meeker noted in an influential May 2025 report, the economics of general-purpose large language models (LLMs) look like commodity businesses. The different AI models, she argued, are rapidly converging in capability, complicating AI companies’ efforts to distinguish themselves. Moreover, in today’s “fast-follow” environment, she said, innovations are quickly copied by any adequately resourced competitor. Just last week, Intuit CEO Sasan Goodarzi said the same thing: “The reality is, [large language models] are commodities.”

Moreover, the market for general-purpose LLMs is an oligopoly, dominated by three AI model providers: Google, OpenAI, and Anthropic. According to an analysis by Menlo Ventures, these three AI companies controlled almost 90% of the $37 billion market for enterprise customers by the end of 2025. Anthropic commanded a 40% market share, followed by OpenAI at 27% and Google at 21%. The rankings have shifted, with Anthropic and Google growing dramatically over the past two years and displacing OpenAI, the former market leader, but the concentration has persisted. Meta and xAI have their own AI models, and there are open-source models out of China and France, but they are not significant players.

In a recent interview, Anthropic CEO Dario Amodei acknowledged the oligopolistic market structure for general-purpose AI models and attributed it to the large costs of developing AI models. And, indeed, AI infrastructure expenses are astronomical. In July 2025, market analyst Paul Kedrosky estimated that AI capital expenditures amounted to 1.2% of the entire U.S. gross domestic product. Bloomberg estimates that Microsoft, Meta, Alphabet, and Amazon will spend $610 billion in capital expenditures in 2026—about triple what they spent just two years ago. Only a few companies are entering this market. Even a company as well-resourced as Apple chose to partner with Google for its AI model needs instead of spending hundreds of billions or trillions to develop another indistinguishable foundation AI model.

Analysts wonder where the revenue to cover these capital expenses will come from, leading investors to worry about being caught up in an AI bubble. According to estimates from J.P. Morgan, $650 billion in annual revenue from AI products will be needed on top of the projected $5 trillion global investment in AI infrastructure for investors to get a reasonable 10% annual return. OpenAI alone is on track to spend $1.4 trillion in the next eight to 10 years, while its annual revenue is around $20 billion.
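The J.P. Morgan arithmetic can be sketched in a few lines. The operating margin below is an illustrative assumption of my own, chosen only to show how a $5 trillion investment and a 10% return target imply a revenue requirement on the order of $650 billion; it is not a figure from the report.

```python
# Back-of-envelope check on the revenue needed to justify AI infrastructure
# spending. The 77% operating margin is a hypothetical assumption for
# illustration, not a figure from the J.P. Morgan analysis.

infrastructure_investment = 5_000_000_000_000  # projected global AI capex, $5T
target_return = 0.10                           # 10% annual return on investment
assumed_operating_margin = 0.77                # hypothetical margin on AI revenue

# Annual profit needed to deliver the target return on the investment
required_profit = infrastructure_investment * target_return

# Revenue needed to generate that profit at the assumed margin
required_revenue = required_profit / assumed_operating_margin

print(f"Required annual profit:  ${required_profit / 1e9:.0f}B")
print(f"Required annual revenue: ${required_revenue / 1e9:.0f}B")
```

At these assumptions, the required annual profit is $500 billion, and the required revenue lands near the $650 billion J.P. Morgan cites; a lower margin assumption would push the revenue requirement higher still.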

The move to AI applications

In January, Mihir Kshirsagar, an attorney at Princeton’s Center for Information Technology Policy, made the connection between commodity pricing, AI infrastructure spending, and the move to applications in an insightful analysis for Tech Policy Press. If model provision becomes a commodity margin business, he said, then AI model developers “need application-layer revenue to justify the capital expenditure” in data centers, chips, and energy. As Kedrosky said in a recent analysis of model commoditization and the new AI stack, “applications become the margin layer.”

As if on cue, in February, Anthropic announced just such a move into the AI application layer. It released specialized tools for legal and financial work, primarily as plugins for its Claude Cowork AI platform, designed to automate document-heavy, routine tasks for professionals.

In devising these specialized tools, Anthropic doubled down on its successful Claude Code strategy. Programmers had long used Anthropic's basic AI models to generate software code, but the Claude Code application, launched in February 2025, was a much more attractive, specialized programming tool built on top of Anthropic's underlying model. It was a spectacular success. Programmers had previously used Microsoft's GitHub Copilot and Cursor, a coding app built largely on top of Anthropic's general-purpose AI models, to generate useful code, but Claude Code provided an attractive alternative and competed directly with Cursor, one of Anthropic's own best customers. According to CB Insights, these three products now hold roughly equal shares of the market for AI coding tools and collectively control over 70% of it.

The stock market reaction is a clue to the competition problem inherent in this entry by a leading AI model developer into broad swaths of the AI application industry. Specialized software service providers RELX and Thomson Reuters lost significant stock value following Anthropic's announcement. The stock sell-off was probably an overreaction since most legal firms and financial enterprises are not going to program their own mission-critical software using Anthropic's AI applications and will still want an external vendor to monitor and validate the software used for these consequential processes. But there is no question that Anthropic has ventured into several different application markets where it competes directly with companies like Thomson Reuters. In doing so, it has invaded the territory of its own business customers and the software companies that currently access general-purpose AI models to craft financial and legal services applications. As Richard Walters at the Financial Times notes, these legal and financial firms "may soon become their competitors."

Competing with their own customers

In his Tech Policy Press piece, Kshirsagar warned that by “building applications that compete directly with developers on their platforms,” AI model companies are “competing with their own customers.” He urged policymakers to be alert for conflicts of interest when the AI companies “move up the stack into applications their own customers are trying to build.” In recent commentary for Brookings, former Federal Communications Commission Chairman Tom Wheeler observed that AI model developers have unilateral control over their models that “lets them steer innovation toward their own interests, shape the marketplace in their favor, and limit competitive threats.”

In principle, there should be no issue with a company competing with its own customers. Amazon competes with its merchants, Microsoft competes with personal computer software developers that write for its operating system, and Google and Apple compete with mobile app developers that sell apps on their proprietary app stores. As long as access to the underlying platform remains open, competition can proceed fairly.

But there is an inherent conflict of interest when a company provides a key input that its customers rely on and also uses the same input to compete with them. The anticompetitive danger is that the platform operator will degrade or deny service to its rivals while giving itself full or preferential access. Indeed, each of the above digital platform companies—Amazon, Microsoft, Google, and Apple—has faced charges that it discriminated against its customers to advantage its own products.

This is not a speculative danger in the market for access to general-purpose AI models. Anthropic has an explicit policy of not allowing competitors access to its models. Its commercial terms of service say that an Anthropic customer “may not and must not attempt to…access the Services to build a competing product or service, including to train competing AI models or resell the Services except as expressly approved by Anthropic.”

This policy forbids training rival general-purpose AI models, but by these terms, it also applies to a company using Anthropic models to "build" an AI application that competes with an AI application provided by Anthropic. Anthropic could decide that the use of its models by Cursor or any other company providing a coding app, or their use to provide a legal or financial services app, is a prohibited attempt to build a competing product or service.

Anthropic has enforced this policy several times in the past year. As documented in a report by the Vanderbilt Policy Accelerator, when Windsurf, a startup that built a coding app on top of Anthropic's AI models, was on the verge of being bought by OpenAI in April 2025, Anthropic cut off access to its models. Anthropic's chief science officer said at the time, "I think it would be odd for us to be selling Claude to OpenAI." But perhaps more importantly, Anthropic was just ramping up its own coding agent, Claude Code, aiming to sell it to the very same enterprise customers and developers that Windsurf was targeting.

In August 2025, Anthropic cut off OpenAI's access to its Claude models, saying the company had used the models to test GPT-5 ahead of its launch. OpenAI had connected Claude to internal tools so the company could compare Claude's performance to its own models in coding, writing, and safety. In January, Anthropic blocked xAI's access to its Claude model. xAI was using the AI coding tool Cursor to help train and test its model, thereby accelerating its own development, and Cursor used the Claude model to provide this service to xAI.

Anthropic reserves the right to sever the connection to its models the moment usage threatens its competitive advantage or business model. In a recent interview, Amodei noted that other companies write models that are used for coding, which is one of Anthropic’s prized markets. He then said, “We’re not perfectly good at preventing some of these other companies from using our models internally.”

Google has a similar policy for its AI developers, warning that they “may not use the Services to develop models that compete with the Services (e.g., Gemini API or Google AI Studio).” OpenAI also forbids users from using the output of its AI models “to develop models that compete with OpenAI.”

This should be a wake-up call to developers—large and small, independent or enterprise-based, startup or incumbent—that Anthropic, OpenAI, or Google can just cut off access to the AI model they need to have a viable AI application. If one of the AI model developers wants to move into an application market already served by existing AI application companies, those AI app companies are at risk of having their model access degraded or eliminated entirely. If an AI app company wants to challenge an AI model developer in a market the model developer is in or has targeted, the AI app company might find itself excluded from the AI model it needs to compete.

Will ‘refusal to deal’ work?

Refusals to deal can decapitate rivals only when the rivals have nowhere to go for an alternative. Switching costs can make customers reluctant to abandon a preferred vendor, but if the advantages of going elsewhere are large enough or if they are cut off from their vendor, they will switch. This is a lesson the U.S. Department of Defense is learning as it considers uncoupling from its preferred AI vendor, Anthropic.

This ability to switch vendors suggests that an AI model company’s refusal to deal with a competitive AI app developer might not work in today’s AI marketplace. Right now, AI application developers have alternatives. Over the past few months, software programmers have noticed a step-change in the performance of AI models used for coding, but the performance leaps are present in all the latest models: Opus 4.5 from Anthropic but also GPT-5.2 (and now GPT-5.3) and Gemini 3. If Anthropic cuts off an AI app developer, the app developer can always go to one of these equally capable models for the base of a coding app.

Of course, an oligopoly means there are only a few choices. All of them might enforce their exclusionary policies, especially now that they are all headed to the same potentially lucrative enterprise app market. OpenAI is pivoting to the business world. It has developed its own coding app, called Codex, which it updated in early February. Shortly thereafter, it released Frontier, an AI platform for business that uses the GPT-5 model series to allow enterprises to build, deploy, and oversee software programs that can automate many business workflow functions. Google is also in the enterprise application business with Gemini Code Assist, Antigravity, and a suite of business applications marketed under the Gemini Enterprise brand.

But if domestic closed-model companies obstruct competitive developers, as their current policies allow, low-cost open-source models from China remain a viable alternative. While in-house enterprise developers currently do not use these models to any great extent, startups and independent developers welcome them. Martin Casado, a partner at the venture capital firm a16z, says that there is an “80% chance” that the startups seeking support from his firm are “using a Chinese open-source model.” Zhipu’s open-source GLM-5, for instance, has coding capabilities similar to those of U.S. models, and access to this cutting-edge model for app developers is dramatically less expensive than access to comparable U.S. models. These low-cost models would provide a way around an attempted AI model blockade by U.S. companies, allowing new ventures or established software firms to provide AI applications to enterprise customers.

What should be done?

An exclusionary policy might not work in today’s AI model marketplace, but this does not mean all is well and nothing should be done. AI model developers want to be both trusted business partners and deadly commercial rivals. And at least one of them has already weaponized its platform control to prioritize competition rather than cooperation. This genuine competitive risk calls for a targeted intervention rather than a blind hope that anticompetitive strategy will fail.

It makes no sense to ban AI model developers from the AI application market. Especially in areas like coding, they bring unique skills and resources that can benefit AI application customers. Rather than ban such an arrangement, the Vanderbilt report recommends a pro-competition rule that bans unreasonable discrimination against similarly situated customers. This might mitigate the inherent conflict when an AI model provider is also an AI application developer and preserve the platform neutrality that the AI application market needs to promote innovation to the fullest extent possible.

Current U.S. antitrust law provides little hope for such a policy solution. Refusals to deal with competitors are not necessarily illegal under the Supreme Court's Verizon v. Trinko decision, even for companies with monopoly power. It would take a new law, such as the American Innovation and Choice Online Act proposed in 2022, to initiate such a pro-competitive reform. That is not likely in the current political climate. And that means for the foreseeable future, it is caveat AI app developer—beware if you are competing with your model provider.

Acknowledgements and disclosures

    Amazon, Google, Meta, and Microsoft are general, unrestricted donors to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the authors and are not influenced by any donation.
