Agentic AI and consumers
Published 9 March 2026
Executive summary
Current state
AI is now embedded in many aspects of everyday life. Consumers already experience and interact with AI through search, recommendations, fraud detection, customer service and decision‑support tools that can save time and improve access to information. The rapid spread of generative AI – enabling natural language interaction – has accelerated this trend, bringing AI into direct, large‑scale engagement with consumers.
To date, however, AI adoption and its impact have been uneven and most consumer‑facing AI has operated as a tool: it supports decisions, while coordination, monitoring and action remain with the user.
Potential future state
Agentic AI could drive a step change in how people use AI and its impact on their lives
Definitions vary, but they typically describe AI agents that can be instructed in natural language to achieve a goal autonomously – navigating some complexity in the environment, planning, coordinating and taking actions, potentially across multiple services.
AI agents do not merely assist: they sense (perceive their environment), decide and act[footnote 1]. They go beyond generating responses to user queries and may:
- assess goals, break them into subtasks, and plan end-to-end workflows
- retrieve real-time data (that may include personal data) from other agents, databases and other services
- execute actions autonomously, such as making payments on behalf of the user
- store memory of past interactions to improve over time[footnote 2]
For businesses, this could unlock substantial productivity gains. For consumers, today’s chatbots may prove only a first step towards more capable personal agents – systems that anticipate needs and execute transactions on the user’s behalf.
If realised reliably at scale, this shift – from using tools to delegating outcomes – could materially change how people engage with markets and how value is created. The potential benefits for consumers are significant if the technology achieves reliability and is deployed responsibly. Agentic AI could reduce friction, improve personalisation and support better outcomes – potentially including lower prices and tailored deals – even in complex markets.
By automating optimisation and follow‑through, AI agents could save people time, reduce cognitive load and potentially help consumers who face high engagement costs (including vulnerable consumers) participate in markets more effectively.
If all this drives stronger confidence and demand in consumer markets, there may be new opportunities for innovative businesses to enter and grow, including new avenues for UK businesses to bring agentic apps and services to market.
At the same time, there are material risks. Greater autonomy for agents increases the consequences of errors, may heighten risks of manipulation and loss of consumer agency, and could lead to worse overall outcomes for consumers. People may be steered towards products and services that are more profitable but less suited to their needs, potentially paying higher prices. AI agents raise new questions about transparency, incentives and accountability and whether the current tools and frameworks that protect consumers are fit for purpose.
Without appropriate safeguards, agentic systems could undermine trust in AI and consumer markets rather than strengthen it, and this loss of trust and confidence in turn could inhibit positive innovation, investment and growth.
Direction of travel
The technology and its deployment are at an early stage. Most implementations are relatively bounded and cautious, particularly in consumer‑facing contexts. Even so, interest and investment have risen sharply, driven by advances in models, falling deployment costs and early evidence of efficiency gains. Progress will depend on real‑world performance and on whether businesses and consumers develop sustained confidence in agentic systems.
Application of consumer law
UK consumer law applies whether decisions are made by people or by AI. The CMA’s foundation model principles – particularly transparency and accountability – remain directly relevant, and the CMA has published guidance to help businesses using agentic AI to comply with consumer law. Businesses exploring the technology should focus on robust training of systems, monitoring, and refinement, supported by appropriate human oversight.
Realising the full potential of agentic AI will also depend on wider enablers such as smart data schemes, secure digital identity and strong interoperability standards – enabling consumers to adopt with confidence, switch between systems and exercise choice. The UK has an opportunity to position itself at the forefront of trusted agentic innovation, fostering a dynamic, competitive ecosystem that drives household prosperity, innovation, and growth.
From tools to agents
AI has increasingly become part of everyday life over the past decade, shaping how people communicate, access information, consume services and make decisions. The recent emergence of powerful foundation models and generative AI has accelerated this shift, bringing AI decisively into the foreground. Tools such as ChatGPT were adopted rapidly, and millions of people now interact with AI systems directly – conversing in natural language, synthesising information and recommendations to support a wide range of decisions, and generating text, images and other content at scale.
For most consumers, however, AI has so far been experienced largely as a support and input into decisions they may take rather than as an actor. Search engines retrieve information, generative search provides direct answers, recommendation systems suggest options, and chat‑based assistants respond to prompts. These technologies can be highly effective, and may have become a valued source of insight and counsel for some, but they remain essentially reactive: they assist consumers, while leaving the burden of decision‑making, coordination and follow‑through with the user.
Interest has recently spiked in agentic AI, driven by advances in foundation and generative models, falling deployment and experimentation costs, and early evidence that AI systems can now plan and execute multi‑step tasks, at least in bounded settings – such as customer operations and commerce workflows – rather than merely respond to prompts. This has reinforced expectations that agent‑based systems could unlock significant productivity gains – by reducing friction, automating coordination, and enabling more continuous optimisation across complex activities. If realised reliably at scale, this would represent a qualitative shift in how consumers interact with AI and the businesses that use it, and how value is created.
Instead of responding to individual queries, the concept is that agentic systems are given higher‑level objectives and allowed to pursue them over time. They are designed to break goals into steps, adapt to changing circumstances, and act across multiple services on a user’s behalf – monitoring options, initiating actions and, in some cases, executing transactions. In effect, consumers could potentially move from using tools to delegating outcomes.
What makes AI ‘agentic’ is not a single technical feature, but a combination of capabilities that together raise both the potential value and the stakes. These include a degree of autonomy from continuous human supervision; goal‑orientation, where systems pursue outcomes rather than isolated tasks; multi‑step reasoning across complex decisions; and potentially the ability to act across systems, platforms and data sources. AI agents sense (perceive their environment), decide and act[footnote 1], and this distinguishes them from traditional automation, which follows predefined rules, and from today’s chatbots, which primarily generate responses rather than decide what to do next and get it done.
These characteristics may allow agents to operate continuously and adaptively, rather than episodically. Unlike traditional automation, which executes predefined processes, agentic AI may make context‑dependent judgements about when to act, when to seek confirmation, and how to balance competing objectives. This flexibility underpins the potential for meaningful consumer and productivity benefits if agentic systems advance further and come onstream widely – it also reinforces the need for transparency around performance and clear accountability as autonomy increases.
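As a purely illustrative sketch – not drawn from any real product, with all names and thresholds assumed for the example – the ‘when to act versus when to seek confirmation’ judgement described above can be expressed as a simple gated sense-decide-act step:

```python
from dataclasses import dataclass

# Illustrative threshold only - in practice the boundary between autonomous
# action and user confirmation would be set by user-defined constraints.
CONFIRMATION_THRESHOLD = 50.0

@dataclass
class Action:
    description: str
    estimated_impact: float  # e.g. annual monetary value of the change

def step(observations: dict, propose_action) -> str:
    """One pass of a sense-decide-act loop with a confirmation gate."""
    action = propose_action(observations)                 # decide
    if action is None:
        return "no action"
    if action.estimated_impact > CONFIRMATION_THRESHOLD:  # seek confirmation
        return f"awaiting user confirmation: {action.description}"
    return f"executed: {action.description}"              # act autonomously

# Example policy: propose a tariff switch whenever a saving is found.
def propose(obs):
    saving = obs.get("annual_saving", 0)
    if saving > 0:
        return Action(f"switch tariff, saving £{saving}", estimated_impact=saving)
    return None

print(step({"annual_saving": 120}, propose))  # high impact -> confirmation
print(step({"annual_saving": 20}, propose))   # low impact -> executed
```

The key design point is that autonomy is bounded: the agent acts only below an impact threshold the user controls, and escalates to the user above it.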
Agentic AI today and potential future opportunities
What we are seeing today
AI agents that can plan and act with a degree of autonomy are already being deployed by businesses, but primarily in bounded and controlled ways. Deployment is concentrated in domains where scope and oversight can be tightly managed, including customer operations and service, commerce and sales workflows, software and IT operations and internal business process automation. In these settings, agents are used to progress multi-step tasks – such as handling customer service requests, processing refunds or coordinating transactions – rather than simply providing information. Consumer-facing authority remains limited and escalation to humans is common. Deployment in high-stakes contexts, or where agents act fully autonomously on consumers’ behalf, remains limited[footnote 3].
Where consumers encounter agentic behaviour today, it is typically through narrowly scoped applications, such as AI-assisted customer service agents or early shopping agents that can search, compare and initiate simple actions with user confirmation. Overall, businesses are actively experimenting with AI agents, but some evidence suggests deployments to date focus more on specific workflows than on offering end-to-end autonomous decision making on consumers’ behalf.
How might things develop: longer-term promise with considerable uncertainty
In the near term, agentic AI may become more visible to consumers as systems move beyond single tasks towards goal-oriented assistance. Plausible developments include forms of agentic commerce – where agents monitor prices, availability or contract terms over time and initiate defined actions – and more integrated personal assistants that coordinate activity across multiple services on a consumer’s behalf within clear boundaries.
Looking further into the future, there is the possibility that agentic AI could enable more fully autonomous personal agents capable of long-term goal management, continuous learning of user context and preferences, and action across entire ecosystems. These may serve as persistent personal agents that manage longer-term consumer objectives – such as optimising household services or managing finances – and act across multiple platforms and markets. If realised, this could represent a significant shift in how consumers engage with businesses, with agents increasingly acting as intermediaries rather than tools.
These longer-term possibilities remain highly uncertain and some industry commentary has emphasised that a shift to fully autonomous consumer agents will depend on advances in reliability, coordination and real world performance, and that many current agentic initiatives may be delayed, re-scoped or abandoned as organisations test what works in practice.
Taken together, the evidence points to a staged transition. AI agents are already being deployed in bounded business settings, while businesses are making significant investments in agent-based technologies in anticipation of future productivity gains and competitive advantage.
How agentic AI could impact consumer lives
Significant time savings, reduced cognitive load, potentially better deals
Many everyday consumer activities and decisions – identifying relevant offers and good deals, switching providers, understanding tariffs, managing subscriptions, resolving complaints – are time consuming and cognitively demanding. These frictions disproportionately affect households with less time, confidence or capacity to engage, contributing to poorer outcomes (especially for vulnerable consumers) and potentially reduced trust in markets.
Agentic AI could help address these frictions by continuously monitoring options for consumers, identifying opportunities, and acting within clearly defined consumer preferences and constraints to secure good deals and potentially better prices.
Rather than requiring repeated manual engagement and decision making, optimisation could become an ongoing background process.
At scale, subject to effective interoperability and consumer uptake, these time and effort savings could support increased household prosperity and more active consumer engagement across the economy. There may be wider benefits from freeing up time for work, care, and leisure.
Hyper‑personalisation and proactive support
Agentic systems may also enable more persistent and context‑aware personalisation, remembering preferences, past behaviour and constraints to provide proactive support – for example by helping consumers match to desired products and services, flagging unused subscriptions, alerting them before prices rise, or prompting action before a problem escalates.
Taken together, the potential promise is greater consumer empowerment and better consumer outcomes, including more tailored offers and better pricing and deals.
Changing consumer behaviour and engagement
If and when agentic systems become more capable, consumers may increasingly delegate tasks, trust AI with sensitive data or financial authority, and move from ‘using apps’ to managing outcomes.
Risks, challenges and trust
While the potential benefits of agentic AI are significant, the technology also raises important risks, many of which build on existing evidence about AI‑enabled systems but may become more acute as autonomy increases. Managing these will be crucial to adoption at scale.
Risk that the agent is not a ‘faithful servant’
People will need to be able to trust that AI agents will act in accordance with their interests and that they are not being steered or manipulated in ways that lead to worse personal outcomes. Hyper-personalisation and adaptive behaviour within agents may heighten the risk of manipulative design practices (harmful choice architecture or ‘dark patterns’), especially where agents optimise for engagement, conversion, or other commercial objectives. More granular and persistent personalisation could make any steering less visible to consumers, weaken informed choice, and increase the risk of dark patterns being deployed at scale.
Errors and reliability issues
Agents (even faithful ones) may be susceptible to errors. For instance, large language models (LLMs) may ‘hallucinate’ and fabricate incorrect information. When agents act autonomously, any errors in performance could have costly real‑world consequences, particularly where actions involve financial decisions, contractual changes, or service disruption.
Industry analysis highlights that current agentic systems can still face limitations in robustness, coordination, and real‑world performance, reinforcing the need for careful scoping, testing, human oversight and accountability as autonomy increases.
Bias and discriminatory outcomes
As with other AI‑enabled systems, agentic AI may amplify existing biases in data or decision‑making processes, particularly where outcomes emerge from complex, multi‑step reasoning that is difficult to observe or explain. Opaque decision making can make it harder for consumers to understand, challenge or seek redress for unfair outcomes, increasing risks under consumer protection and equality frameworks.
Loss of agency and over‑reliance
As consumers increasingly delegate tasks to AI agents, there is a risk of over‑reliance, where users defer too readily to automated decisions and become less able to scrutinise or intervene over time. Sustained delegation may weaken consumers’ ability to detect errors or misalignment with their preferences unless systems are designed with clear boundaries, prompts and override mechanisms.
AI pricing and the risk of ‘agentic collusion’
The use of algorithms and AI in pricing is already widespread and can deliver significant benefits, including faster responses to demand, lower costs, and more efficient matching of supply and demand.
However, the CMA has previously highlighted that algorithmic pricing can also increase the risk of more coordinated market outcomes, even in the absence of explicit communication between businesses, particularly where algorithms learn from and react to each other in concentrated markets.
Agentic AI could intensify these risks. Where multiple businesses deploy autonomous agents that optimise pricing or commercial strategies, there is a risk that interaction between these systems could dampen competitive pressure. We have discussed this topic and considerations for businesses elsewhere, including in our latest blog post on AI and collusion.
Businesses remain responsible for the outcomes of pricing and commercial decisions shaped by AI systems, and must take proactive steps to understand, test and govern the technologies they deploy. As AI systems become more autonomous, ensuring that they are designed, monitored and constrained in ways that preserve effective competition will be critical to protecting consumers and supporting dynamic, innovative markets.
Lock‑in and reduced choice
Where agentic systems operate within closed ecosystems, consumers may find it difficult to switch providers or move their data, preferences or agent ‘memory’. Limited interoperability and data mobility could weaken competitive pressure over time and entrench incumbency, reducing consumer choice and market dynamism.
Addressing these risks is critical not only for consumer protection but also for sustained adoption, investment and growth. Trust is a competitive asset: markets where consumers feel confident engaging are more likely to support innovation and productivity over the long term.
Data protection, privacy and security
Agentic systems can also rely on access to personal data and delegated authority to act on a user’s behalf, increasing the importance of strong safeguards around lawful processing, consent, security and authentication. The UK Information Commissioner’s Office has highlighted that agentic AI raises specific challenges around accountability, risk management and privacy-by-design, reinforcing the need for careful governance as these systems develop.
UK consumer protection law and technology: existing precedent
UK consumer protection law has long applied to businesses’ use of technology and digital design, and recent enforcement shows that it is the effects on consumer decision‑making, not the novelty of the technology, that matter most. Under the UK’s consumer protection framework – now strengthened by the Digital Markets, Competition and Consumers Act 2024 – businesses must not mislead, manipulate or exert undue pressure on consumers, regardless of whether those outcomes are driven by human decisions, algorithms, or interface design.
This approach has been clearly illustrated in the CMA’s work on online choice architecture, where digital interfaces that deploy misleading features such as false urgency claims have been tackled. Recent investigations demonstrate that the law already reaches beyond what businesses say to how systems are designed to influence behaviour.
As consumer interactions increasingly involve AI driven and agentic systems – capable not just of presenting options but potentially of acting on a consumer’s behalf – the same consumer protection principles apply. For instance, if an AI agent steers, pressures or misleads consumers in ways that harm their economic interests this is likely to be unlawful.
Businesses that embed consumer protection principles into the design of agentic systems will be best placed to build trust, scale responsibly, and compete on the quality and reliability of outcomes delivered to consumers.
Steps businesses exploring agentic AI should take to protect consumers and build trust
Businesses exploring AI approaches, including agentic AI, should ensure they comply with consumer law and competition law.
A central principle remains unchanged: businesses are responsible for how they engage with consumers, regardless of whether that is through people or AI systems. And consumer law requires you to treat your customers fairly.
Main considerations include:
- training agentic systems to reflect the requirements of consumer law (and competition law), for example to respect consumers’ statutory and contractual rights
- monitoring real‑world performance, including errors, bias, complaints and unintended outcomes – with regular human oversight
- refining systems quickly when issues are identified – particularly where there might be significant impacts
The CMA’s initial consumer protection guidance for businesses covers these main areas in more detail, and our case study below unpacks an illustrative example.
Read also the CMA’s blog posts on Pricing algorithms and competition law and AI and collusion for additional considerations for businesses using AI, including agentic approaches, to set prices.
Taken together, these considerations mean treating agentic AI deployment not as a one-off technical exercise but as an ongoing operational and governance priority. Businesses that invest early in robust oversight and consumer-centred design are more likely to realise long-term benefits, avoid costly remediation, and turn responsible innovation into lasting competitive advantage.
Our insight and guidance for businesses in this space builds on the CMA’s earlier work on foundation models, which outlined a set of principles for positive development of the AI ecosystem – access, diversity, choice, fair dealing, transparency, and accountability. These principles remain directly relevant as agentic approaches emerge – including ensuring sufficient transparency for consumers about the performance of systems, including their strengths and limitations, and remaining accountable.
Wider enablers for positive market development
Realising the consumer benefits of agentic AI will not depend solely on the design choices of individual businesses. It will also hinge on a set of wider, cross‑economy enablers that shape how agents operate across markets, how easily consumers can switch or contest outcomes, and whether trust can be sustained at scale.
One such enabler is data mobility. Agentic systems are only as effective as the information they can access, yet today consumer data remains fragmented across businesses, sectors and formats. Smart data initiatives have the potential to reduce this friction, allowing agents to act on up-to-date, user-authorised information and to operate across providers without repeated manual intervention. From a competition perspective, portability is not simply a convenience: it is a structural condition for dynamic markets. If consumers cannot give their agents direct access to their data (including preferences and broader context), or if their agents cannot move easily across ecosystems, there is a risk of ‘lock-in’ to incumbent agentic ecosystems rather than the freedom to switch, undermining competition and the potential for agentic innovation to drive household prosperity and growth.
Closely linked is the development of secure digital identity and authentication infrastructure. As agents begin to transact, contract and make changes on behalf of users, reliable mechanisms for verifying identity and authority to act will become essential. Weak or fragmented identity systems would raise risks of fraud, error and dispute, undermining confidence in delegation. By contrast, robust and interoperable identity frameworks can support safer automation, clearer accountability and more effective redress when things go wrong. The UK’s Digital Identity Scheme could become critical infrastructure in this context and, together with digital wallet initiatives, could help provide a powerful foundation for trusted agentic AI in the UK.
Interoperability standards will also play a decisive role. Agentic AI derives much of its value from operating across platforms, services and sectors. If agents are constrained within closed ecosystems, consumers may find that optimisation works well only inside a single business’s environment, limiting real world benefits and dampening competition. Open technical standards, shared protocols and common approaches to permissions and logging can help ensure that markets remain contestable and that innovation occurs at the level of services, rather than control over the agent layer itself.
The CMA’s work and capability building in AI and agentic systems
The CMA is taking a forward-looking, pro-innovation approach to AI and agentic systems, focused on understanding how the technology is developing, how it may affect consumers and competition, and how to support the positive evolution of a trusted, competitive ecosystem. Our work combines horizon scanning, engagement with businesses, investors, consumer groups, and external experts, and analysis of emerging use cases to assess both the potential benefits and the risks as autonomy increases, as well as wider enablers for positive development of the ecosystem.
As businesses adopt data-rich, algorithmic, AI-driven and agentic systems, we must understand these technologies and unpack the implications for consumers, competition, investment and productivity, infusing this insight across our work to help ensure the opportunities they present are captured for UK household prosperity and growth.
At the same time, we must harness these same capabilities internally at the CMA to upgrade our own operating model – they represent a significant opportunity to increase our own productivity, agility and pace, and this work is well underway.
Both require in-house expertise, and we have invested in deep multidisciplinary capabilities – including data science and AI, data engineering, behavioural insight, technology insight, and strategic business analysis. This gives us the capability to analyse algorithmic systems and design proportionate, pro-growth interventions at pace, where needed.
We published research on pricing algorithms in 2018 and a broader piece on algorithms in 2021, noting that algorithmic systems were already integral to many businesses and a growing element of how they operate. It was our tech horizon scanning function that identified AI foundation models in 2022 as a particularly striking emerging technology – and this allowed us, early on, to start investing in the expertise and capabilities to help the UK capitalise on the opportunity when it arrived. We engaged extensively across the ecosystem in the UK and abroad, speaking to businesses developing and deploying models, small and large, as well as investors, consumer groups, academic experts and others.
This is a fast-paced, highly innovative market, and we don’t believe that heavy-handed, sweeping regulation is right for the UK – it could stymie innovation and stunt growth. Instead, our focus has been on understanding developments, potential benefits and risks, identifying important uncertainties and drivers, and we published 2 in-depth reports and a set of AI principles to guide the market towards positive outcomes. We continue to monitor and unpack developments, injecting this insight and understanding across the breadth of our work.
And using our technical capability to drive the CMA’s digital transformation and capture new opportunities to be digital and data-driven remains a major priority. To mention just one further example: we’re actively exploring how we can use our data and AI expertise alongside our deep cartels expertise to evolve new approaches to screening for cartels, including helping the UK government detect bid rigging in public procurement – an enormous opportunity to drive efficiency and savings across the wider public sector.
Finally, we do not operate in a vacuum – engaging and sharing knowledge with others is vital especially in complex areas like digital and AI. We benefit from strong links with leading external experts ensuring our insight and capability remains cutting edge. We believe in strong regulatory join-up and support for positive innovation. In the UK, we are active in the Digital Regulation Cooperation Forum (DRCF) – a forum to promote regulatory cooperation and coherence. We have looked together at Online Choice Architecture (OCA) issues and have an active programme of work around AI, including comprehensive joint work and a forthcoming DRCF report on agentic AI. International engagement is also vital, including with our agency counterparts globally. We are active in the International Competition Network (ICN) and currently chair the ICN’s Technologist Group which brings together Chief Technologists and their teams from across the world’s competition agencies. We also participate in the International Consumer Protection and Enforcement Network (ICPEN), exchanging insights in the digital and technology space, and are active in the OECD, the G7 and other important global fora.
And, of course, strong engagement with all stakeholders including businesses and investors is critical to our work. We have invested in building out our strategic business analysis capabilities – ensuring the CMA understands the real-world business models and strategies of businesses and investors – and engage proactively to support innovators and help advance UK growth and household prosperity, aligned to the UK’s AI Opportunities Action Plan and Modern Industrial Strategy.
Conclusion: shaping an agentic future consumers can trust
Agentic AI has the potential to reshape how consumers interact with markets, delivering meaningful benefits for households and supporting productivity, innovation and growth across the economy. By moving from tools that assist decisions to systems that can act on consumers’ behalf, agentic AI could significantly reduce friction, lower cognitive burdens, and help consumers achieve better outcomes in complex markets.
But greater autonomy also brings greater responsibility. As with previous technological transitions, the long term impact of agentic AI will depend not only on how the technology develops, but on how it is designed, governed and integrated into markets and everyday life. Reliability, transparency and accountability will be critical as delegation increases and the consequences of error, manipulation or misalignment become more significant.
The UK’s consumer protection regime, sitting alongside its competition laws, together form a framework designed to ensure fair play, innovation, and consumer protection and already provide a strong foundation for addressing these challenges. Existing principles – grounded in fairness, transparency and accountability – apply regardless of whether consumers are interacting with people or AI systems. Businesses that embed these principles into the design and deployment of agentic AI will be best placed to build trust, scale responsibly and compete on the quality of outcomes delivered to consumers.
Trust is critical infrastructure for adoption, investment and growth. By ensuring that agentic systems are developed and deployed in ways consumers can understand and rely on – supported by effective oversight, redress and potentially wider enablers such as data mobility and digital identity – the UK has an opportunity to position itself at the forefront of trusted agentic innovation, delivering lasting benefits for households, businesses, and the wider economy.
Illustrative case study: the personal shopping and finance agent
A consumer signs up to a service to manage shopping, subscriptions, and everyday finances across multiple platforms. The business providing the service uses an AI agent to monitor prices, negotiate offers, switch providers, and execute purchases within user-defined constraints.
The agent:
- tracks spending patterns and preferences
- monitors subscriptions and flags unused services
- automatically searches for better deals on utilities, insurance, and broadband
- executes approved actions (for example switching providers, or cancelling subscriptions)
- alerts the user before high-impact or irreversible decisions
How the consumer benefits:
- reduces financial ‘leakage’ and cognitive load
- saves time on comparison and negotiation – and potentially does both more effectively
- benefits from accessible ‘on tap’ optimisation across shopping and finances
Consumer law risks and challenges (illustrative):
- lack of transparency about any limitations (for example, the agent only has access to certain data, or can only carry out limited searches in relation to certain types of service)
- erroneous switching decisions leading to poor outcomes (for example, choices which don’t reflect the consumer’s preferences)
- hidden incentives or biased recommendations
Safeguards and enablers (illustrative):
- transparency about the use of AI agents as part of the service
- training the agent to clearly disclose important information (for example, limitations, incentives and affiliations)
- ensuring the agent gets mandatory confirmation for high-risk actions
- monitoring performance, feedback and complaints, with human oversight and clear processes to escalate and resolve issues
- strong audit logs and risk assessments – demonstrating swift action to address problems
- clear accountability, including liability if the agent acts outside customer instructions
Agentic AI will deliver greatest consumer value and be trusted when autonomy is bounded clearly by user intent and backed by strong transparency and accountability.
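To make the safeguards in the case study concrete, the sketch below is a hypothetical illustration only – the action categories, storage, and function names are assumptions for the example, not any real system’s API. It shows how mandatory confirmation for high-risk actions and an audit log might fit together:

```python
import json
from datetime import datetime, timezone

# Illustrative in-memory audit log; a real deployment would use durable,
# tamper-evident storage to support monitoring and redress.
audit_log = []

# Assumed set of high-risk action categories requiring user confirmation.
HIGH_RISK_ACTIONS = {"switch_provider", "cancel_subscription"}

def record(action: str, details: dict, user_confirmed: bool) -> bool:
    """Log every attempted action; block high-risk ones lacking confirmation."""
    allowed = action not in HIGH_RISK_ACTIONS or user_confirmed
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "details": details,
        "user_confirmed": user_confirmed,
        "executed": allowed,
    })
    return allowed

# A routine price check proceeds; an unconfirmed provider switch is blocked.
assert record("price_check", {"service": "broadband"}, user_confirmed=False)
assert not record("switch_provider", {"to": "ExampleCo"}, user_confirmed=False)
print(json.dumps([entry["executed"] for entry in audit_log]))  # [true, false]
```

The design choice illustrated is that every attempted action is logged whether or not it executes, so errors, blocked actions and disputed decisions all leave a trail supporting oversight and redress.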
1. Russell, S., & Norvig, P. (1995). Artificial Intelligence: A Modern Approach. Prentice-Hall. ↩ ↩2
2. CMS LawNow, Agentic AI and the EU AI Act, April 2025. ↩
3. For example, IBM 2026 Consumer Research Study – Own the Agentic Commerce Experience. ↩