Joint statement on competition in generative AI foundation models and AI products

Published 23 July 2024

Joint statement on competition in generative AI foundation models and AI products set out by:

  • Margrethe Vestager, Executive Vice-President and Competition Commissioner, European Commission

  • Sarah Cardell, Chief Executive Officer, U.K. Competition and Markets Authority

  • Jonathan Kanter, Assistant Attorney General, U.S. Department of Justice

  • Lina M. Khan, Chair, U.S. Federal Trade Commission

Working in the interests of fair, open, and competitive markets

As competition authorities for the European Union, the United Kingdom and the United States of America, we share a commitment to the interests of our people and economies. Guided by our respective laws, we will work to ensure effective competition and the fair and honest treatment of consumers and businesses. This is grounded in the knowledge that fair, open, and competitive markets will help unlock the opportunity, growth and innovation that these technologies could provide.

Sovereign decision-making

Our legal powers and jurisdictional contexts differ, and ultimately, our decisions will always remain sovereign and independent. However, if the risks described below materialize, they will likely do so in a way that does not respect international boundaries. As a result, we are working to share an understanding of the issues as appropriate and are committed to using our respective powers where appropriate.

A technological inflection point

We have all, in a variety of documents and fora, recognized the transformational potential of artificial intelligence, including foundation models. At their best, these technologies could materially benefit our citizens, boost innovation and drive economic growth. Although there are many unknowns about the precise trajectory these tools will take, generative AI has evolved rapidly in recent years and may prove to be one of the most significant technological developments in decades. Technological inflection points can introduce new means of competing, catalyzing opportunity, innovation, and growth. Accordingly, we must work to ensure the public reaps the full benefits of these moments. This requires vigilance and safeguarding against tactics that could undermine fair competition. For example, there are risks that:

  • firms may attempt to restrict key inputs for the development of AI technologies;

  • firms with existing market power in digital markets could entrench or extend that power in adjacent AI markets or across ecosystems, taking advantage of feedback and network effects to increase barriers to entry and harm competition;

  • a lack of choice for content creators among buyers could enable the exercise of monopsony power; and

  • AI may be developed or wielded in ways that harm consumers, entrepreneurs, or other market participants.

Given the speed and dynamism of AI developments, and learning from our experience with digital markets, we are committed to using our available powers to address any such risks before they become entrenched or irreversible harms.

Risks to competition

While we recognize the great potential benefits from the new services that AI is helping bring to market, we also see risks requiring ongoing vigilance. Key to assessing these risks will be focusing on how the emerging AI business models drive incentives, and ultimately behavior.

  1. Concentrated control of key inputs. Specialized chips, substantial compute, data at scale, and specialist technical expertise are critical ingredients for developing foundation models. This could put a small number of companies in a position to exploit existing or emerging bottlenecks across the AI stack and to have outsized influence over the future development of these tools. This could limit the scope of disruptive innovation, or allow companies to shape it to their own advantage, at the expense of fair competition that benefits the public and our economies.

  2. Entrenching or extending market power in AI-related markets. Foundation models are arriving at a time when large incumbent digital firms already enjoy strong accumulated advantages. For example, platforms may have substantial market power at multiple levels related to the AI stack. This can give these firms the ability to protect against AI-driven disruption, or harness it to their particular advantage, including through control of the channels of distribution of AI or AI-enabled services to people and businesses. This may allow such firms to extend or entrench the positions that they were able to establish through the last major technological shift to the detriment of future competition.

  3. Arrangements involving key players could amplify risks. Partnerships, financial investments, and other connections between firms related to the development of generative AI have been widespread to date. In some cases, these arrangements may not harm competition, but in other cases these partnerships and investments could be used by major firms to undermine or co-opt competitive threats and steer market outcomes in their favor at the expense of the public.

Principles for protecting competition in the AI ecosystem

Our experience in related markets suggests that, while competition questions in AI will be fact-specific, several common principles will generally serve to enable competition and foster innovation:

  1. Fair dealing. When firms with market power engage in exclusionary tactics, they can deepen their moats, discourage investment and innovation by third parties, and undermine competition. The AI ecosystem will be better off the more that firms engage in fair dealing.
  2. Interoperability. Competition and innovation around AI will likely be greater the more that AI products and services and their inputs are able to interoperate with each other. Any claims that interoperability requires sacrifices to privacy and security will be closely scrutinized.
  3. Choice. Businesses and consumers in the AI ecosystem will benefit if they have choices among diverse products and business models resulting from a competitive process. This means scrutinizing ways that companies may employ mechanisms of lock-in that could prevent companies or individuals from meaningfully seeking or choosing other options. It also means scrutinizing investments and partnerships between incumbents and newcomers, to ensure that these agreements are not sidestepping merger enforcement or handing incumbents undue influence or control in ways that undermine competition. For content creators, choice among buyers could limit the exercise of monopsony power that can harm the free flow of information in the marketplace of ideas.

Other competition risks associated with AI

We are mindful of other risks that can arise where AI is deployed in markets. These include, for instance, the risk that algorithms can allow competitors to share competitively sensitive information, fix prices, or collude on other terms or business strategies in violation of our competition laws; and the risk that algorithms may enable firms to undermine competition through unfair price discrimination or exclusion. We will remain alert to these and other risks that might emerge as AI technology develops further, and we are committed to monitoring and addressing any that may arise in connection with other developments and applications of AI, beyond generative AI.

Consumer risks associated with AI

AI can turbocharge deceptive and unfair practices that harm consumers. The CMA, the DOJ and the FTC, which have consumer protection authority, will also remain alert to any consumer protection threats that may derive from the use and application of AI. Firms that deceptively or unfairly use consumer data to train their models can undermine people’s privacy, security, and autonomy. Firms that use business customers’ data to train their models could also expose competitively sensitive information. Furthermore, it is important that consumers are informed, where relevant, about when and how an AI application is employed in the products and services they purchase or use.