Research and analysis

CMA AI strategic update

Published 29 April 2024

Introduction

The CMA is an independent non-ministerial UK Government department and is the UK’s principal competition and consumer protection authority. We help people, businesses and the UK economy by promoting competitive markets and tackling unfair behaviour.

AI innovation is delivering benefits for businesses and consumers, but also presents challenges. AI could hold genuinely transformative promise for our societies and economies: massively increasing productivity, transforming many existing products and services for both businesses and consumers, and bringing to market new innovations and as yet unimagined technology developments across all sectors of the economy. However, without fair, open and effective competition and strong consumer protection, we see a real risk that the full potential of organisations or individuals to use AI to innovate and disrupt will not be realised, nor its benefits shared widely across society.

The CMA has a vital role in ensuring that consumers, businesses, and the wider economy reap the benefits of developments in AI, while harms are mitigated. AI-powered products and services are likely to become increasingly prevalent across a wide range of sectors and are therefore potentially relevant to all of the CMA’s current functions, which are not limited to specific sectors. AI is also clearly relevant to the CMA’s anticipated new digital markets functions.

This document provides a strategic update on the CMA’s approach to AI. This is set out through the following sections:

  • the CMA’s understanding of the risks posed by AI

  • how the CMA is addressing risks (including how the CMA’s competition and consumer remit applies to AI)

  • the CMA’s AI capabilities

  • forthcoming changes to the CMA’s powers

  • how the CMA is working with others on AI issues

  • the CMA’s next steps

The CMA’s understanding of the risks posed by AI 

The CMA has been considering the potential impact of AI and related matters on competition and consumer protection issues for a number of years. In 2021, we published the paper ‘Algorithms: How they can reduce competition and harm consumers’, in which we explained that algorithms and algorithmically driven AI systems were already an integral part of how many markets and firms operate, and identified risks to competition and consumer protection, which we detail further below.[footnote 1]

In 2022, our horizon scanning highlighted AI foundation models (FMs) as an important emerging technology and a possible driver of competition and consumer risks.[footnote 2] This led us to commence an extensive programme of research into, and market monitoring of, FMs and their possible impact on competition and consumer protection, alongside a significant programme of engagement with industry, civil society, academia, and UK and international regulators. Our work on FMs has allowed us to develop a rich understanding of how FM markets work, and of the current and possible future implications for competition and consumer protection. It has included an initial report published in September 2023 (in summary, short and full versions), as well as an April 2024 update paper and technical update report.[footnote 3] Figure 1, taken from our recent foundation models update and technical reports, highlights the complex foundation model value chain that AI-powered services are built upon.

Figure 1 – Overview of the FM value chain[footnote 4]

Image description can be found below under Alt Text.

Risks to competition

Taking a broader view of AI systems, firms’ misuse of AI and other algorithmic systems, whether intentional or not, can create risks to competition, often by exacerbating or taking greater advantage of existing problems and weaknesses in markets. To take 3 examples:

  • AI systems that underpin recommendations or affect what choices customers are shown and how they are presented may strongly affect market outcomes and competition. If these systems are not designed and implemented with care, they may distort competition by giving undue prominence to choices that benefit the platform at the expense of options that may be objectively better for customers and consumers (for example, anti-competitive self-preferencing). This risk is particularly acute where firms are reliant on the systems of large gateway platforms to reach customers

  • firms may use algorithms and AI systems to assist in setting prices in a way which could facilitate collusion and sustain higher prices[footnote 5]

  • firms may use AI systems to personalise offers to customers, and this could also have risks for competition. For instance, incumbent firms could analyse which customers are likely to switch, and use personalised offers, selectively targeting those customers most at risk of switching, or who are otherwise crucial to a new competitor, which could make it easier for such firms to exclude entrants[footnote 6]

On the competition risks around FMs, our strongest concerns arise from the fact that a small number of the largest incumbent technology firms, with existing power in the most important digital markets, could profoundly shape the development of AI-related markets to the detriment of fair, open and effective competition. This could ultimately harm businesses and consumers, for example by reducing choice and quality, and by raising prices. It also matters because diversity and choice underpin resilience in our economy and avoid over-dependence on a handful of major firms – a particularly critical concern considering the breadth of potential use cases of FMs.

Some of these incumbent firms have strong upstream positions in one or more critical inputs for FM development as well as control over key access points or routes to market for FM services (including downstream AI-powered applications). They may also have market power in other digital markets which may be threatened by disruption and innovation from AI. This combination could mean that these incumbents have both the ability and incentive to shape the development of FM-related markets in their own interests, which could allow them both to protect existing market power and to extend it into new areas. Figure 2 below shows how some of those large tech firms are active across different levels of the FM value chain to varying degrees.

Figure 2 – Examples of GAMMA firms across the AI FM value chain[footnote 7]

Image description can be found below under Alt Text.

In our update report on FMs, published on 11 April 2024, we identified the following 3 key interlinked risks to fair, open and effective competition:

  • firms that control critical inputs for developing FMs may restrict access to them to shield themselves from competition

  • powerful incumbents could exploit their positions in consumer or business facing markets to distort choice in FM services and restrict competition in FM deployment

  • partnerships involving key players could reinforce or extend existing positions of market power through the value chain

Risks to consumers

While AI-powered services may benefit consumers by providing higher quality, lower priced and potentially more personalised products and services, AI also has significant scope to facilitate unfair consumer practices. For example:

  • consumers could be exposed to significant levels of false and misleading information, either due to FMs and AI systems generating such information (‘hallucinations’),[footnote 8] or because AI-based technologies enable bad actors to create false or misleading information more easily, at lower cost and greater scale, or increase the effectiveness of consumer fraud.[footnote 9] Firms could do this deliberately through existing unfair practices such as subscription traps, hidden advertising, or fake reviews, or could develop new ways of acting unfairly towards consumers

  • more broadly, AI-enabled personalisation, such as personalised pricing or personalised offers, is often difficult to detect and can be harmful if it unfairly targets vulnerable consumers or has unfair distributive effects[footnote 10]

There is currently limited research into how consumers understand AI systems’ outputs and limitations, and how they relate to them. However, a number of key uncertainties may exacerbate these risks:

  • consumers may have difficulty discerning AI-generated from human-generated content. More generally, they may find it difficult to tell whether AI-generated information is accurate and factual

  • disclosure of when consumers are interacting with AI, and of what they are told, is not consistently clear. Watermarking is one technical solution being developed to label AI-generated content, but its limitations may be hard to overcome (whether technical ones, such as the difficulty of making watermarks robust, or bad actors faking watermarks to mislead consumers about the source of an output)

  • even if consumers are informed of the limitations of AI applications, they may still over- or under-rely on them, resulting in harm. For example, consumers may buy goods or services which have been inaccurately or misleadingly described as a result of unreliable AI-enabled customer interfaces, or may lose confidence in AI products or services and so miss out on their benefits

Insufficient transparency and unclear accountability in the AI value chain may prevent deployers and consumers from understanding the risks and limitations of AI or AI-enabled services, and reduce the likelihood that issues are addressed. Algorithmic systems are produced, deployed, and used within a supply chain of multiple actors, each of which contributes in different ways to the production, deployment, use, and functionality of complex systems. This can create uncertainty about who is accountable and responsible for failures at different points in the system. Upstream developers need to provide sufficient information about AI models to enable downstream deployers to make informed assessments, and deployers in turn need this information to provide the right information to users.

Continuing research

The CMA will continue to research impacts on competition and consumer protection in AI-related markets. For example, the CMA is doing further work to explore the impact of AI on choice architecture within digital markets. Our work has already highlighted several practices, such as personalisation, default settings, and framing, that large firms may use to either gain or maintain market position, which could stifle competition and distort consumer choices.[footnote 11]

We will be vigilant of new risks arising as AI technologies develop. Our Data, Technology and Analytics (DaTA) unit and our Digital Markets Unit (DMU) established a Technology Horizon Scanning Function that has to date focused on emerging technologies. In addition to the work on FMs already mentioned, in December 2023, the function published its first horizon scanning report on 10 trends in digital markets and how they may develop over the next 5 years and beyond.[footnote 12] This report highlighted the rapid and widespread deployment of FMs, technology convergence driven by the integration of AI services, the market for AI and computer chips, the increased development of open-source AI, and related impacts on competition and consumer issues. We also contribute to the horizon scanning work of the Digital Regulatory Cooperation Forum (DRCF).[footnote 13]

The CMA’s approach to AI risks

To ensure that people, businesses, and the wider economy benefit from the innovation AI can bring, businesses must comply with existing competition and consumer protection law. The CMA has a range of different functions under UK competition and consumer protection laws aimed at identifying and tackling competition and consumer protection concerns across all UK markets, including those in which AI is playing or will play a role.

  • UK competition law[footnote 14] gives the CMA certain powers to address competition concerns it becomes aware of in a number of ways, including by enforcing prohibitions against anti-competitive behaviour, reviewing mergers within the CMA’s jurisdiction and assessing their impact on competition, and investigating the operation of markets to identify any adverse effects on competition

  • the CMA also enforces legislation to protect consumers against unfair commercial practices and unfair terms, which we typically use to address systemic and market-wide failures

CMA AI principles

In our initial FM review in September 2023, we considered potential impacts on competition and consumer protection and proposed a set of principles to guide the sector towards positive outcomes in both areas. Given the range of developments across the FM ecosystem, and our deepened understanding of the sector through our own research and continued engagement with stakeholders, we have updated our CMA AI principles to guide the ongoing development and use of FMs and ensure that competition and consumer protection remain effective. These principles also apply to the downstream impact of FM-powered AI services.

Each of the 6 principles (noted below) applies across the whole FM value chain. In our update report, we have urged firms to align their business practices with the principles, and to work with us to shape positive market outcomes. In this way, fair, open and effective competition can thrive, and consumers, businesses and wider society can reap the full benefits of this transformative technology.

Figure 3 – CMA AI Principles

Image description can be found below under Alt Text.

Our principles are intended to complement the UK Government’s approach and its cross-sectoral AI principles but are focused (per the CMA’s remit) on the development of well-functioning economic markets that work well from a competition and consumer protection perspective.[footnote 15]

The CMA’s AI capabilities

Ensuring that we have access to the skills and experience necessary to keep abreast of the rapid pace of change in AI markets, and of potential AI deployment across the economy, is a priority. This requires the CMA to have significant AI-related expertise to evaluate, investigate and (if necessary) take enforcement action in relation to AI systems.

We have grown our capacity and skills. Our specialist DaTA unit now has over 80 people including data scientists and data engineers, technologists, behavioural scientists, and digital forensics specialists. A number of colleagues across DaTA have expertise relevant to understanding AI and its implications for consumers and competition, as well as for deploying it in real-world settings. Our economics and legal teams have built up significant experience working on digital market issues, and we already have around 70 people in the DMU which we have set up in shadow form to deliver our forthcoming digital markets competition regime (see details on this below). We have a phased recruitment plan to build up to a total of around 200 people working across the CMA to implement the digital markets competition regime from the point of commencement, and a learning and training programme to ensure that we continue to build our skills and capacity to prepare for the regime.

We have also appointed 9 Digital Experts as independent advisors to the CMA’s digital work. The Digital Experts have diverse areas of expertise and collectively they possess direct experience working within or alongside large technology firms, insight into building new regulatory functions as well as technical expertise relating to digital technologies, including AI. They are already providing valuable inputs to our digital casework, with in-depth project support, strategic insight and team capacity building.

The CMA is also taking steps to use AI to improve how we operate. The CMA’s Technology and Business Services (TBS) team and its DaTA unit are co-leading an ongoing Digital Transformation programme, ensuring the CMA develops a collaborative and people-centred approach to AI adoption across its internal working processes and policies. We also recently produced a framework for how the CMA itself should use AI, underpinned by 4 principles: Safety, Ethics, Transparency and Accountability. As part of this, we are piloting the use of AI-based tools to support evidence review, which is an essential activity underpinning the CMA’s casework. Different tools and techniques are being tested with the help of frontline staff to identify those that best meet the CMA’s use cases in this space.

The Digital Markets, Competition and Consumers Bill

The CMA’s ability to protect competition and consumers, and to promote growth in the UK economy, will be enhanced once the Digital Markets, Competition and Consumers (DMCC) Bill comes into force.

Notably, the DMCC Bill anticipates new powers for the CMA directly to enforce consumer protection law against infringing firms and envisages significant financial penalties for non-compliance. We are ready to use these new powers to raise standards in the market and, if necessary, to tackle firms that do not play by the rules in AI-related markets through enforcement action.

The DMCC Bill will also create a new pro-competition regime for digital markets, giving the CMA the ability to respond quickly and flexibly to the often rapid developments in these markets, including through setting targeted conduct requirements on firms found to have strategic market status (SMS) in respect of a digital activity.[footnote 16] AI and its deployment by firms will be relevant to the CMA’s selection of SMS candidates, particularly where AI is deployed in connection with other, more established activities.

The DMCC Bill will support the CMA to understand the effects of a designated firm’s algorithms on competition and consumers. As part of its investigatory tools for the digital markets regime, the Bill will enable the CMA to observe, and where appropriate, conduct tests on designated firms’ systems. This is a welcome step and will be essential to ensuring that the CMA can effectively address the risks associated with such systems.

How the CMA is working with others on AI issues

It is essential that we collaborate with other regulators to ensure consistency between our approaches. Although the CMA’s remit is competition and consumer protection, the use of AI touches other important policy areas such as data protection, security, copyright, and safety. We are acutely aware of these parallel issues and of the importance of helping businesses navigate this cross-regulatory landscape, and it is imperative that we remain connected with other regulators on policy development.

The DRCF is improving coordination and cooperation between regulators on AI related issues in digital markets. The DRCF has just launched an AI and Digital Hub to assist UK tech firms to bring new products and services to market faster, contributing to UK economic growth.[footnote 17]

The DRCF AI and Digital Hub is a new, expert, informal advice service providing help in one place, via the DRCF website, to innovators with complex, cross-regulatory questions. As part of the DRCF’s AI Programme, the CMA is undertaking joint research into consumer use, understanding and trust of generative AI, and collaborating to develop our shared understanding of algorithmic processing, AI auditing and AI governance. The DRCF also shares insights and best practice with other UK regulators, including by hosting quarterly roundtables with 11 other non-member regulators. Effective collaboration within the DRCF and with others greatly benefits UK citizens and consumers. Further detail on the DRCF’s upcoming plans can be found in its 2024/25 Workplan.[footnote 18]

We are also working directly with the ICO on a joint statement on foundation models. This will support coherence for businesses and promote behaviours that benefit consumers where our remits interact.

UK digital regulation does not happen in a vacuum. Cooperation and coordination with our international counterparts is vital for our work. We are aware that many aspects of digital markets are international, and businesses are often navigating a complex international regulatory picture. AI-related issues are and will increasingly be at the heart of digital markets so we will continue to deepen our cooperation with our international counterparts on these issues.

AI has emerged as a key discussion topic in international networks. The CMA actively participates in the International Competition Network (ICN), the OECD (the Competition Committee and separately the Committee on Consumer Policy), the G7 (AI Working Group) and the International Consumer Protection and Enforcement Network (ICPEN). This includes sharing insight from our work on FMs with international counterparts through a presentation at the OECD CCP meeting in October 2023. International fora like these offer valuable opportunities to bring together many competition and consumer authorities to share approaches, knowledge and expertise, and we have aimed to influence the approach to common problems and issues, including on AI.

More broadly, stakeholder engagement is critical to our work. Engagement helps us understand what is happening in dynamic markets, and what businesses and consumers are worried about. It can help foster strong foundations for competition and consumer protection from the start, rather than us having to intervene down the track. That is why we regularly engage with a broad range of stakeholders, at both working and senior levels, in the UK and abroad.[footnote 19] We aim to continue this collaborative approach going forward.

We will continue to work with Government as it develops its pro-innovation approach to AI regulation. In our conversations with government and our fellow regulators, we have been exploring and will continue to explore how future policy or regulatory interventions might impact fair, open and effective competition. We see a risk of chilling effects on competition if interventions are so burdensome that only larger firms can comply, raising barriers to entry for smaller firms or for those with disruptive business models. With this in mind, we fully endorse the House of Lords Communications and Digital Committee recommendation that ‘market competition’ should be an explicit policy objective of the UK Government’s work on AI.[footnote 20] In line with our AI principles set out above, any policy interventions should not come at the expense of diversity and choice, which are also critical to resilience. We will also be particularly mindful of the risks any interventions might pose to effective consumer protection.

The CMA’s next steps

The insight being developed through our work on AI is informing priority work taking place across the CMA.

As set out in our April 2024 FM update report, in light of the risks we have identified as part of our FM programme of work, we are:

  • Examining the conditions of competition in the provision of public cloud infrastructure services as part of our ongoing Cloud Market Investigation. The CMA’s independent group of panel experts appointed to conduct that market investigation is considering how the market for cloud services is operating in practice, including assessing the market positions of the main cloud service providers and the characteristics of customers. In addition to analysing a number of indicators, such as shares of supply and barriers to entry and expansion, the investigation will include a forward-looking assessment on the potential impact of FMs on how competition works in the provision of cloud services.

  • Monitoring current and emerging partnerships closely, especially where they relate to important inputs and involve firms with strong positions in their respective markets and FMs with leading capabilities. We will carefully consider current and future investments in, and partnerships with, leading FM developers and assess whether they could give rise to negative outcomes for competition and consumers.

  • Stepping up our use of merger control to examine whether such arrangements fall within the current rules and, if so, whether they give rise to competition concerns. It may be that some arrangements falling outside the rules are problematic even if not ultimately remediable through merger control. Equally some arrangements may not give rise to competition concerns. We consider it is appropriate to step up our review more generally so that we can start to identify more clearly and coherently which types of partnerships fall within the merger rules and the circumstances in which they may give rise to competition concerns. That will also be of benefit to the businesses involved.

  • As well as using our current range of legal powers, we will take account of developments in FM-related markets when considering which digital activities to prioritise for investigation under the new powers anticipated in the DMCC Bill. Areas of potential consideration could include digital activities that are critical inputs for developing FMs, such as compute, as well as digital activities that are critical access points or routes to market for FM deployment, such as mobile ecosystems, search, and productivity software. However, we have yet to take any provisional decisions on which areas to prioritise for investigation; any designation would be subject to a prior investigation, and in considering compute we would take account of the findings of our ongoing Cloud Market Investigation.

In addition to the actions set out above to mitigate the risks AI may pose for competition and consumer protection, we will continue our dedicated programme of work to consider the impact of FMs on markets throughout 2024, including:

  • a forthcoming paper on AI accelerator chips, which will consider their role in the FM value chain

  • publishing joint research with the DRCF on consumers’ understanding and use of FM services

  • publishing a joint statement with the ICO on the interaction between competition, consumer protection and data protection in FMs

We will publish a further update on our FM-related work in Autumn 2024.

We are considering issuing proactive guidance to firms on how to comply with consumer law in AI-related markets if we see uncertainty or particular issues that need clarification, such as in relation to downstream AI integration in software and the use of consumer chatbots by firms.

We will also be supporting firms to safely innovate with AI as part of the ongoing DRCF AI and Digital Hub pilot.

Further, we will continue the digital transformation of the CMA itself and explore opportunities to further advance how we work through AI. We will explore a range of different tools and techniques, and take the opportunity to trial AI tools safely and securely where they have the potential to enhance our efficiency and effectiveness.

We know that AI has the potential to be a transformative technology, driving a huge range of benefits for UK businesses and consumers, and that the CMA has a critical role to play in how AI-related markets develop. That is why we will continue to build our knowledge, act as a leader in this space, and make sure the full benefits of AI are realised for UK businesses and consumers.

Alt text

Figure 1

This figure provides an overview of the FM value chain. It shows how AI-powered services are built on underlying layers, including key inputs such as compute and data, FM development, FM partnerships and agreements, FM release, and the downstream deployment of FM-based services to businesses and consumers.

Figure 2

This figure shows which of the GAMMA firms are operating in each of the following parts of the FM value chain:

(1) Compute: Amazon, Google and Microsoft

(2) Data: Google, Meta and Microsoft

(3) FM development: Amazon, Apple, Google, Meta and Microsoft

(4) FM partnerships and agreements: Amazon, Google and Microsoft

(5) FM release: Amazon, Google and Microsoft

(6) Search: Google and Microsoft

(7) Social media: Meta

(8) Mobile ecosystems: Apple and Google

(9) PC operating systems and productivity software: Apple, Google and Microsoft.

Figure 3

This figure lists each of the CMA’s AI Principles:

(1) Access: Ongoing ready access to inputs. Access to AI data, compute, expertise and funding without undue restrictions. Continuing effective challenge to early movers from new entrants. Successful FM developers do not gain an entrenched and disproportionate advantage by being the first to develop an FM, having economies of scale or benefitting from feedback loops. Powerful partnerships and integrated firms do not reduce others’ ability to compete.

(2) Diversity: There are a variety of models available for businesses and consumers to choose from that suit their needs, whether that be for a general purpose or a highly specialised task. Open-source models can help reduce barriers to entry and expansion. Both open and closed source models push the frontier of new capabilities. The market sustains a range of business models. Powerful partnerships and integrated firms do not reduce others’ ability to compete.

(3) Choice: Sufficient choice for businesses and consumers so they can decide how to use FMs. A range of deployment options, including in-house FM development, partnerships, APIs or plug-ins. Consumers and businesses can switch and/or use multiple services and are not locked into one provider or ecosystem. Services are interoperable and consumers and businesses can easily extract and port their data between services. Powerful partnerships and integrated firms do not reduce others’ ability to compete.

(4) Fair Dealing: No anti-competitive conduct. Confidence that the best products and services will win out, and that firms are playing by the rules. No anti-competitive conduct, including anti-competitive self-preferencing, tying or bundling. Vertical integration and partnerships are not used to insulate firms from competition. Competition can counteract any data feedback or first mover effects.

(5) Transparency: Consumers and businesses have the right information about the risks and limitations of FMs. People and businesses are informed of FMs’ use and limitations. Developers give deployers the right information to allow them to manage their responsibilities to consumers. Deployers provide the right information to users of FM-based services to allow them to make informed choices, including being clear when an FM-based service is being used.

(6) Accountability: FM developers and deployers are accountable for FM outputs. All firms take responsibility for ensuring they help foster the development of a competitive market that gains the trust and confidence of consumers and businesses. Developers and deployers take responsibility for what they control in the value chain and take the positive action necessary to ensure consumer protection.

  1. Algorithms: How they can reduce competition and harm consumers - GOV.UK (www.gov.uk) 

  2. FMs are large, machine learning models trained on vast amounts of data – which have developed rapidly in recent years. 

  3. AI Foundation Models: Initial report; AI Foundation Models: Update paper - GOV.UK (www.gov.uk) 

  4. Figure 1 is from the market landscape section of our recent technical update paper on foundation models. 

  5. Pricing algorithms research, collusion and personalised pricing - GOV.UK (www.gov.uk) 

  6. These risks are more comprehensively set out in Algorithms: How they can reduce competition and harm consumers - GOV.UK (www.gov.uk)

  7. Figure 2 is from the market landscape section of our recent technical update paper on foundation models. 

  8. As an illustration, see the recent case with Air Canada’s AI chatbot giving incorrect information that resulted in consumer harm. Airline held liable for its chatbot giving passenger bad advice - what this means for travellers - BBC Travel 

  9. See section 5 of our September 2023 report, AI Foundation Models: Initial report - GOV.UK (www.gov.uk), as well as p. 21 of our April 2024 update paper. The CMA has also considered the effectiveness (or otherwise) of firms’ AI systems to filter and remove content that may be harmful to consumers, for instance, in relation to fake online reviews (see also section 2.4 of Algorithms: how they can reduce competition and harm consumers). 

  10. For more on personalisation harms, see sections 2.1.1 to 2.1.3 of Algorithms: how they can reduce competition and harm consumers

  11. The CMA’s Behavioural Hub (BH), positioned within the DaTA unit, published in 2021 a discussion paper (Online Choice Architecture - How digital design can harm competition and consumers - discussion paper) and an evidence review paper (Evidence Review of Online Choice Architecture and Consumer and Competition Harm) on how online choice architecture can harm competition and consumers. In 2023, the CMA published a joint position paper with the Information Commissioner’s Office (ICO) on how choice architecture practices can undermine consumer choice and control over personal information. 

  12. Trends in Digital Markets: a CMA horizon scanning report - GOV.UK (www.gov.uk) 

  13. Members include the CMA, the Financial Conduct Authority (FCA), the ICO and the Office of Communications (Ofcom). 

  14. Principally the Competition Act 1998 and the Enterprise Act 2002. 

  15. The Government’s cross-sectoral principles published in its AI White Paper are: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress. 

  16. To designate a firm with SMS, the DMCC Bill requires the CMA to establish that a firm has substantial and entrenched market power and a strategic position in relation to a digital activity in the UK. 

  17. AI and Digital Hub - DRCF 

  18. The DRCF publishes its 2024/25 Workplan - DRCF 

  19. In our AI Foundation Models review we have engaged stakeholders including: consumer groups and civil society representatives; FM developers and major deployers of FMs; innovators, challengers and new entrants; academics and other experts; Government; and fellow regulators, both in the UK (including via the Digital Regulation Cooperation Forum) and further afield with our international counterparts. 

  20. Large language models and generative AI (parliament.uk)