Consultation outcome

Online Advertising Programme consultation

Updated 25 July 2023

This consultation document was published on 9 March 2022.

Ministerial foreword

Julia Lopez MP

Advertising is at the heart of the digital economy. It is a key revenue source for many online businesses, underpinning the provision of key services that are positively transforming people’s lives and helping consumers discover valuable new products and services at remarkable speed. The UK advertising sector is leading the way, creating more accessible and low-cost routes for businesses to engage with customers, and continues to be the largest market in Europe for advertising.

As digital technologies continue to underpin our economy, society and daily lives, we have committed to ensuring that the rules governing them keep pace, while driving sustainable growth and unlocking innovation. These technologies have served as an incredible catalyst for change - not least during the COVID-19 pandemic where the public has relied on them more than ever before - but also in the way that they boost prosperity and productivity across the economy.

The advertising landscape is dynamic and increasingly focused on digital channels. Regulating online advertising, and the internet, is challenging because of the speed with which it changes. Online advertising remains the fastest growing area of advertising globally, having grown exponentially over the last decade. While this brings significant benefits to businesses and consumers, it also carries risks and potentially damaging consequences, including illegal fraudulent adverts and legal but harmful adverts such as those which mislead and target vulnerable groups.

The Online Advertising Programme will review the regulatory framework of paid-for online advertising to tackle the evident lack of transparency and accountability across the whole supply chain. It will consider how we can build on the existing self-regulatory framework, by strengthening the mechanisms currently in place and those being developed, to equip our regulators to meet the challenges of the online sphere, whilst maintaining this government’s pro-innovation and proportionate approach to digital regulation. We want to ensure that regulators have good sight of what is happening across the vast, complex, often opaque and automated supply chain, where highly personalised adverts are being delivered at speed and scale.

This review will work in conjunction with the measures being introduced through the forthcoming Online Safety Bill, as well as those this government is developing to address competition and data protection issues across the online landscape. Seeking to complement the online safety legislation being implemented to regulate user-generated content, the Online Advertising Programme will look specifically at paid-for online advertising to ensure holistic cover across the online content that can create harm for consumers and businesses alike. While the Online Safety Bill will introduce a standalone measure for in-scope services to tackle the urgent issue of fraudulent advertising, this Programme will ensure that other organisations across the supply chain play a role in reducing the prevalence and impact of fraud, in addition to the spectrum of wider illegal and legal harms created by online advertising.

Market participants across the online advertising ecosystem, from advertisers to publishers and all those in between, have a collective responsibility to tackle the harms caused to our society, particularly to young and vulnerable people. The government wants to move away from a model which focuses on holding advertisers accountable, to deliver a holistic, cross-sector approach which looks at the roles of each actor in the ecosystem and how they can help minimise harm. Our goal is to unpick the intricacies in the market to develop a regime that enables each actor to contribute to this objective, and ultimately create a more sustainable system which people can trust.

This consultation presents an opportunity to build on the UK’s leading position in the international digital market and make our creative industries as globally competitive as possible.

Julia Lopez MP
Minister of State for Media, Data and Digital Infrastructure

Executive summary

In 2019, the Department for Digital, Culture, Media and Sport announced that it would consider how online advertising is regulated in the UK. In 2020, we ran a call for evidence focusing on online content and placement standards, before carrying out a formal consultation. The purpose of this consultation on the government’s Online Advertising Programme (OAP) is to set out the government’s understanding of the online advertising ecosystem and highlight some of the priority areas of concern. The document also sets out a number of options for reform that the government is considering, and on which we would be keen to hear your views.

This government is pro-innovation and recognises that digital technologies play a significant role in our economy and society. However, as they underpin more of our daily lives, we need to ensure that appropriate governance is put in place that keeps pace with the speed of growth across digital markets. The digital landscape has grown relatively organically over the past decade or so, in which time online advertising has transformed the way in which most adverts are disseminated. Through the OAP, the government seeks to review the overall regulatory architecture of online advertising to ensure that regulators have the right powers and tools at their disposal to regulate and minimise existing and arising harms. This is consistent with the approach that the government is taking in a number of other related areas such as online safety, cyber security, data policy and digital competition.

This Online Advertising Programme intends to complement the government’s work to establish a pro-competition regime for digital markets. There will also be significant interactions between the Online Advertising Programme and the government’s Online Safety Bill in relation to tackling fraudulent paid-for advertising. The OAP plans to build on this work by seeking to provide a holistic review of the whole ecosystem for online advertising, examining the role of actors across the supply chain in creating a transparent and accountable market. We will continually examine the interdependencies and overlaps between this review and other regulatory initiatives across government and industry to ensure consistency and coherence in our approach, in line with the government’s Plan for Digital Regulation published in July 2021.

Online advertising has emerged rapidly and has come to dominate advertising globally, creating opportunities not only for businesses but also for consumers. Through this evolution, online advertising has also come to play a critical role in the monetisation of the internet and the ability of consumers to access myriad online services - from search to social media to online news provision - for free. It has fundamentally redrawn how businesses and consumers interact and is a key driver in the digital economy, generating vast flows of revenue. In this role, it provides affordable, scalable and targeted advertising for relatively little investment, enabling small and independent businesses throughout the UK to find audiences cheaply and efficiently.

Rapid growth has, however, presented challenges for the regulation that governs online advertising. As a result, a spectrum of both legal and illegal harms have arisen which are impacting on the trust in and sustainability of the market. These current, new and emerging harms have wide-reaching impacts, which have in turn reduced consumer confidence in online advertising. A recent study has shown that trust in advertising has fallen dramatically over the last few decades, with 70% of people in the UK stating that they do not trust a lot of what they see on social platforms, including posts from brands. This lack of trust in turn threatens to undermine the overall sustainability of the market.

The harms that we have identified can broadly be divided into harmful content of adverts and harmful placement or targeting of adverts - and this harm can be further exacerbated when harmful content is targeted at vulnerable groups. Although targeting can bring benefits for both businesses and consumers as more relevant advertising is served, it also carries risks and highlights concerns regarding how data is being used. Responses to the 2020 call for evidence showed that consumers are finding some advertising practices intrusive, with some feeling that they are ‘stalked’ around the internet. Consumer groups such as Which? and Money and Mental Health have also highlighted that targeting can home in on vulnerable individuals based on algorithmic technologies.

The prevalence of harmful content and targeting is driven by the complexity of online advertising market dynamics, including the presence of two core factors: a lack of transparency and a lack of accountability. Transparency issues are driven by the opaque, complex and automated nature of the online supply chain, which, alongside the scale, speed and targeted nature of online advertising dissemination, makes it difficult for businesses, the public or regulators to understand who is viewing which type of advert, when, and with what impact. This is compounded by the issue of accountability in the online supply chain, where many of the actors involved in distributing advertising content are not consistently held to account under existing regulatory frameworks.

Platforms and intermediaries generally have their own governing principles, terms of service and community guidelines. While they do have certain obligations under consumer law - for example, under the Consumer Protection from Unfair Trading Regulations 2008 (CPRs) - there are often limited obligations on them to share information relating to monitoring, performance and propriety, and these are not standardised practice across industry. Whilst recognising the advancements made by those actors across the ecosystem who strive to achieve better industry standards, community accreditations and guidelines often do not have full industry participation and are not independently verified or enforced by an external regulatory body. Many larger platforms also offer ‘self-service’ advertising buying services, which can require little vetting for advertisers and create a low bar to entry for new players in the market. As a result, bad actors often operate with relative impunity, using online advertising as a means to perpetrate fraud or advertise other illegal or legal but harmful products and services, with limited oversight.

At present, online advertising is not subject to the same level of regulation as other media such as TV and radio. Whereas in broadcast advertising licences can be revoked where there are serious breaches, there are no equivalent sanctions for those that host harmful content online. The vast majority of TV and radio adverts are also pre-cleared before they are broadcast, whereas for online advertising the absence of a broadly equivalent body means that harmful adverts may be served before they have a chance of being rejected.

The Advertising Standards Authority (ASA) has taken on an important role in developing innovative approaches to the regulation of advertising, which for the majority of advertising online holds advertisers primarily responsible for the creative content, media placement and audience targeting of their ads. The ASA also places responsibilities on others involved in preparing or publishing marketing communications, such as agencies, publishers and other service suppliers, to comply with their UK Code of Non-Broadcast Advertising and Direct & Promotional Marketing (the ‘CAP Code’). This secondary responsibility recognises that whilst parties involved in preparing or publishing adverts have a role to play in tackling irresponsible adverts, there are limited circumstances in which online service providers are held by the ASA to exercise primary control over the creative content and audience targeting of adverts. It is clear, however, that transparency and accountability need to be spread across the supply chain, so that intermediaries, platforms, and publishers in the open display market can also play a greater role in the regulation of online advertising, which helps monetise their services.

We have outlined a number of options for the level of regulatory oversight that could be applied across the supply chain in order to bind in other actors and ensure that we improve transparency and accountability in the ecosystem. The level of regulatory oversight that could be implemented ranges from a continuation of the self-regulatory framework through to full statutory regulation. A self-regulatory approach would involve relying on the ASA’s existing framework enforced through their codes, including the new Online Platforms and Networks Standards (OPNS) proposal that they are developing to bring consistency to the way in which actors in the supply chain are held accountable.

Should the evidence gathered through the course of this consultation demonstrate that the current self-regulatory approach is not sufficient and the regulator requires stronger powers of enforcement, we could seek to strengthen the statutory backstop for advertising regulation. In addition to the formal backstop arrangements the ASA already has in place with a range of statutory regulators and, separately, the powers already exercised by statutory regulators in relation to online advertising, this would introduce a statutory regulator to backstop additional areas of the ASA’s codes. This would involve a co-regulation arrangement in which the statutory regulator could delegate the writing and maintenance of codes to the industry self-regulator, with sign off required for any code changes. In the case of a statutory regulator backstopping the self-regulatory code, stronger regulatory powers (such as bans and fines) would likely only be applied in certain circumstances, for example to repeat offenders or those involved in disseminating illegal harms such as fraud.

We also ask for views from respondents on a full statutory approach. This would involve appointing a statutory regulator to introduce measures designed to increase transparency and accountability across some or all actors in the ecosystem. The statutory regulator would be empowered either to build on the existing codes in use or to design new codes for the actors in scope, and it would be responsible for regulating all aspects of the codes (rather than just repeat offenders or more serious breaches). Chapter 6 sets out these three options for levels of regulatory oversight, which would dictate how prescriptive a regulator could be in holding different players to account.

In order to build on the measures already in place for advertisers, we suggest further proposals that could be implemented to complement the existing CAP Code. We also seek to consult on new measures to hold intermediaries, platforms, and open display publishers to account, which may be implemented through a code like the OPNS, or through either the backstopped or full statutory approaches detailed in chapter 6. These measures are designed to improve transparency and accountability throughout the system and we would welcome views from respondents on their expected impact, both in terms of benefits and costs.

The Online Advertising Programme aims to maintain the UK advertising sector’s world-leading position by building a robust, coherent and agile regulatory framework that is equipped with the right tools to increase transparency and accountability across the supply chain. By looking at this issue, the government intends to lower the risk of both existing and novel online advertising harms that may arise in future. We see developing effective regulation as a means of improving trust in this critical market, which will in turn protect its overall sustainability. This will have benefits for business as well as consumers.

1. The scope of the programme and context for online advertising reform

1.1 Wider digital regulation landscape and upcoming reform

This government is pro-innovation and recognises the value that digital and creative economies add both socially and economically. However, as digital technologies underpin more and more of our economy, society and daily lives, we need to ensure that the rules governing them keep pace. We are cognisant of the complexities and interdependencies in considering the regulation of online entities, and that no single intervention will address all issues.

In the Plan for Digital Regulation, the government set out its vision to drive prosperity through a coherent, pro-innovation approach to the regulation of digital technologies, while minimising harms to the economy, security and society. Within this, the government specifically committed to help keep the UK safe and secure online. This means that citizens are empowered to be safe online, and can trust they are protected from online harms beyond their control, and that consumers can trust that they are treated fairly, and have choice over the services they access.

The approach we are taking in the Online Advertising Programme (OAP) reflects the vision set out in the government’s Plan for Digital Regulation, and will ensure that proposals for reform encourage the use of tech as an engine for growth in order to drive prosperity and create competitive and dynamic digital markets.

Throughout the development of the OAP, we will continually examine the interdependencies and overlaps between online advertising and other digital regulatory initiatives. We are committed to ensuring that we do not duplicate these efforts, but instead achieve forward-looking and coherent regulatory outcomes. We aim to create a regulatory framework that can deal with the problems we face today, with the flexibility to adapt to new threats and opportunities as they develop in the future. We also recognise this is a global marketplace and issues related to online advertising are not unique to the UK. The UK wants to lead the way in creating a new rulebook for digital technologies which is effective, agile and pro-innovation.

In addition to existing digital regulation that is relevant to online advertising, there are other regulatory interventions in the pipeline that will likely have an impact on the online advertising ecosystem. These include reforms to tackle harmful content online, promote competition in digital markets, deliver a new data protection regime and ensure regulators working across the digital landscape deliver a coherent, innovation-friendly digital regulation approach.

1.1.1 The Online Safety Bill

The forthcoming Online Safety Bill will introduce statutory requirements on services that enable users to share content and interact with each other, as well as for search services to protect their users from harm, while protecting freedom of expression. Companies will have duties to take action to minimise the proliferation of illegal content and activity online and ensure that children who use their services are not exposed to harmful or inappropriate content. The biggest tech companies will also have duties on legal content that may be harmful to adults. This regulation will be enforced by the Office of Communications (Ofcom).

Both the Online Safety Bill and the OAP seek to build on the government’s Plan for Digital Regulation by working to ensure that individuals are protected from illegal and legal but harmful online content. Paid-for advertising is largely out of scope of the Online Safety Bill, which has been designed to regulate user-generated content on user-to-user services and search services. The online advertising ecosystem involves companies beyond those the government is regulating via the Online Safety Bill, and paid-for online advertising is disseminated via channels distinct from those used for user-generated content. Moreover, many of the harms associated with advertising differ from those being regulated within the Online Safety Bill. The Online Safety Bill is therefore not the right vehicle by which to regulate the many and varied services involved in the online advertising ecosystem, including ad tech[footnote 1] and intermediaries.

However, in light of the need for urgent action to address the issue of fraudulent advertising, the government has decided to introduce a standalone duty to tackle fraudulent advertising in the Online Safety Bill. The duty will increase protections for consumers by requiring high-reach and high-risk services that are already in scope of the Online Safety Bill, as well as the largest search engines, to put in place proportionate systems and processes to prevent (or “minimise”, in the case of search services) individuals from encountering fraudulent advertising on their service. Ofcom will set out further details on how services can meet their duties in codes of practice. Additionally, all companies in scope of the Online Safety Bill will need to take action to tackle fraud, where it is facilitated through user-generated content or via search results.

This decision also responds to recommendations from the Joint Committee scrutinising the draft Online Safety Bill to tackle fraudulent advertising, and the duty will go a long way toward protecting consumers online and preventing criminals from accessing paid-for advertising services. The duty is designed as a standalone measure, and paves the way for additional, complementary work through the OAP to look at the role of the entire ecosystem in relation to fraud, as well as other harms caused by online advertising.

In relation to fraud, the OAP will build on the duty in the Online Safety Bill, seeking to address whether other actors in the supply chain, such as intermediaries, have the power and capability to do more. The OAP will focus on the role of ad tech intermediaries in onboarding criminal advertisers and facilitating the dissemination of fraudulent content through use of the targeting tools available in the open display market. This will ensure that we close down vulnerabilities and add defences across the supply chain, leaving no space for criminals to profit.

The launch of this consultation coincides with the introduction of the Online Safety Bill to Parliament, following a period of pre-legislative scrutiny. This is to demonstrate that the government views these two programmes as complementary, and acknowledges the significant interactions between them. While the focus of the Online Safety Bill is largely the harm created by user-generated content, the OAP will examine and seek to address the harm created by online advertising. In this way, there will be a coherent legislative framework that meets the aims of the Plan for Digital Regulation regarding keeping the UK safe and secure online, driving competition and innovation in the online advertising market, and promoting plurality of online content, which is a crucial element in a flourishing democratic society.

1.1.2 The new pro-competition regime for digital markets

Work on the OAP will also complement the government’s work to establish a new pro-competition regime for digital markets. This follows recommendations set out in the Competition and Markets Authority’s (CMA) 2020 online platforms and digital advertising market study. The new regime will promote competition and competitive outcomes, and as such competition issues are out of scope for the OAP.

In July 2021, the government set out its proposals for a new pro-competition regime for digital markets in a public consultation. The regime will drive a more vibrant and innovative economy across the UK, overseen by a new Digital Markets Unit (DMU) within the CMA. This unit will have the bespoke regulatory toolkit required to address the unique issues arising from digital markets.

The regime will apply to firms designated with ‘Strategic Market Status’ in a given activity, via an evidence-based assessment of their market power. Its core objective will be to promote competition in digital markets for the benefit of consumers and it will be empowered to tackle both the underlying sources of market power and its consequences. Firms who are subject to the regime will be required to comply with new conduct requirements, and the DMU will have the ability to design iterative pro-competitive interventions to tackle the sources of harm in these markets.

The DMU is currently operating on a non-statutory basis, ahead of the government placing the regime on a statutory footing as soon as parliamentary time allows. It will then be for the CMA, as an independent regulator, to determine how to utilise its powers. The new rules will benefit businesses who rely on powerful tech firms including, in some circumstances, advertisers.

1.1.3 Data protection reform

As the government sets out in its National Data Strategy, published in 2020, data is a huge strategic asset and is the driving force of modern economies. The government recently set out its plans for a new pro-growth, innovation-friendly data protection regime in the ‘Data: A New Direction’ public consultation (2021). The proposals look to build on key elements of the current UK General Data Protection Regulation (UK GDPR), such as principles around data processing, people’s data rights and mechanisms for supervision and enforcement, but will aim to reduce the burdens on organisations and businesses and ensure better data sharing between public bodies. The government is currently considering the responses to the consultation and will respond in spring 2022. Given the importance of data in aiding the targeting practices used in online advertising, data protection policy developments will be important to consider as we develop the OAP.

1.1.4 The Digital Regulation Cooperation Forum

The Digital Regulation Cooperation Forum (‘the Forum’) is a forum for regulators working across the digital landscape and is currently made up of the CMA, the Information Commissioner’s Office (ICO), Ofcom, and the Financial Conduct Authority (FCA). The Forum’s inaugural work plan for 2021/22 outlined three core priority areas:

  • developing strategic projects on industry and technological developments, including digital advertising technologies, algorithmic processing, and end-to-end encryption
  • identifying opportunities for join-up in regulatory approaches across priority areas, e.g. data protection and competition including the Age Appropriate Design Code, Video-Sharing Platform (VSP) regulations and Online Harms
  • building the skills and capabilities of the Forum members

The Forum intends to work closely with the Advertising Standards Authority (ASA), Prudential Regulation Authority (PRA), Payment Systems Regulator (PSR), the Intellectual Property Office (IPO), the Gambling Commission and other agencies as appropriate. As outlined in the Plan for Digital Regulation, the government sees the Forum as an important step forward in our ability to deliver a more coherent and innovation-friendly approach to digital regulation. We will engage the Digital Regulation Cooperation Forum in ensuring regulatory reforms relating to the online advertising market are coherent and effective.

1.2 Scope and purpose of the Online Advertising Programme

Within the wider context of digital regulation set out above, the OAP will examine the full spectrum of consumer and industry harms associated with all forms of paid-for advertising online. Online advertising is the use of online services to deliver paid-for marketing content. Advertising is paid-for when the placement is in part determined by systems or processes (human or automated) that are agreed between the parties entering into the contract relating to the advertising, and/or when the provider of the service receives any monetary or non-monetary considerations for the advertisement.[footnote 2] [footnote 3]

The harms associated with paid-for online advertising, which span both legal but harmful and illegal harms, across content, placement and targeting, are set out in the taxonomy of harms in chapter 3.

The OAP’s overall objective is to determine whether the current regulatory regime is sufficiently equipped to tackle the challenges posed by the rapid technological developments in online advertising. The OAP will ensure that the regulatory framework for online advertising builds trust and tackles the underlying drivers of harm in online advertising. The OAP will also consider the role and responsibilities of all actors involved in the supply chain of online advertising. The desired outcome is that regulators are given the appropriate powers and tools to effectively address issues in the online advertising ecosystem holistically and to take action on specific issues without the need for isolated intervention from the government.

The government will seek to develop a coherent, comprehensive advertising regulatory framework for all actors across the advertising supply chain. This focus will complement the government’s wider reforms on competition, data protection and user-generated content, ensuring that the online advertising market which is at the heart of our digital economy can continue to thrive. To fulfil this aim, our overarching principles for the OAP are to:

  1. Support a thriving online advertising sector by increasing trust and reducing harms to UK internet users.

  2. Tackle and improve the underlying drivers of harm in online advertising, including a lack of transparency and accountability, by considering the role of all actors in the supply chain.

  3. Develop coherent and proportionate regulation that takes the complexity of the online advertising ecosystem into account and enables effective regulation for current as well as new and emerging harms.

  4. Address the online advertising ecosystem holistically, only taking further action in specific types of advertising where evidence suggests the framework will not deliver effective protection.

  5. Complement related regulatory action already underway, including the CMA’s new Digital Markets Unit (DMU) which will oversee the new pro-competition regime, the FCA’s work in relation to financial promotions, the ICO’s work on data protection and privacy, and the Gambling Act review.

  6. Build on existing industry initiatives designed to address issues in online advertising, where they are delivering effective protections.

1.2.1 Market categories of online advertising in scope

We have set out below the market categories of online advertising that we are aware of which advertisers use to reach consumers, and that are within scope of this consultation. We have built on the CMA’s market study into online platforms and digital advertising to inform our understanding of these market categories. Broadly, the online advertising market can be broken into the five key market categories outlined in the table below, with additional explanatory descriptions provided in annex A.

Figure 1: Market categories of online advertising

Search
Supply chain: Usually purchased directly from search providers through owned and operated environments.
Types of advert:
● Paid-for listings in search results, such as sponsored links or promotional listings.
Example hosts: Google, Bing

Social Display
Supply chain: Usually purchased directly from social media providers through owned and operated environments.
Types of advert:
● Range of advertising formats on social media platforms.
● Paid-for online video advertising, such as video adverts served before, during or after films, programmes or other non-advertising content on videos or video sharing sites.
● Paid-for online social media advertising, such as in-feed advertising on social media.
Example hosts: Facebook, Instagram, Twitter, LinkedIn, TikTok, Snapchat, YouTube

Open Display
Supply chain: Purchased through open display, or bought directly from publishers. Some platforms also offer services to place advertising on third party publishing sites.
Types of advert:
● Paid-for banner ads on news websites and apps, swipe to buy.
● Paid-for in-game advertising, such as banner ads in games apps.
● Paid-for newsletter advertising, such as banner ads in a cookery newsletter.
● Paid-for advertising on internet served video-on-demand services.
● Banner and video ads provided as part of the electronic programme guides and home screens of internet enabled TV sets.
Example hosts: The Guardian, Reach, Mail Online, ITV (including ITV Hub), Sky, Buzzfeed

Classified
Supply chain: Purchased directly from platform or publisher.
Types of advert:
● Paid-for listings on price comparison or aggregator services, such as sponsored listings on food delivery, recruitment, property, cars and services.
Example hosts: Gumtree, AutoTrader, Zoopla, Monster

Content marketing, sponsorship and influencer marketing
Supply chain: Procured through contracts with influencers, or arranged directly with the publisher.
Types of advert:
● Sponsored content on publisher or platform services, including paid promotion on creators’ social media posts.
● Paid-for influencer marketing, such as influencer posts paid for/sponsored by an advertiser.
● Paid-for product specific sponsorship.
● Paid-for advertorials.
● Paid-for advergames.
Example hosts: Social media influencers, publishers or platforms who host sponsored editorial content

These market categories do not include unpaid advertising. This excludes from scope so-called ‘owned media’: any online property, usually owned and controlled by a brand, over which the brand exerts full editorial control and ownership of content, such as a blog, website or social media channels. It should be noted that the ASA regulates advertisements and other marketing communications by or from companies, organisations or sole traders on their own websites, or in other non-paid-for space online under their control, where these are directly connected with the supply or transfer of goods, services, opportunities and gifts, or consist of direct solicitations of donations as part of their own fund-raising activities.

However, we do intend to include influencer advertising in scope of the OAP where payment has been made, either directly or in kind, to an influencer in order to advertise a brand’s products and services. Recognising that, as user-generated content, influencer posts will also be subject to provisions in the forthcoming Online Safety Bill, the OAP will seek to identify any important gaps in coverage left by that Bill or by other existing regulatory requirements, such as in relation to advertiser disclosure.

Consultation question 1

Do you agree with the categories of online advertising we have included in scope for the purposes of this consultation?

a) Yes
b) No
c) Don’t know

Do you think the scope should be expanded or reduced? Please explain.

Consultation question 2

Do you agree with the market categories of online advertising that we have identified in this consultation?

a) Yes
b) No
c) Don’t know

Do you think the market categories should be expanded or reduced? Please explain.

1.2.2 Actors in scope

The OAP will look at the whole online advertising ecosystem in order to capture all the players in the supply chain that have the power and capability to do more to combat harmful advertising. We will consider the role played by each of the actors in the various supply chains, which includes:

  • Advertisers (brands):[footnote 4] Individuals, businesses, organisations which direct the content of a message within an online advertisement, directly or indirectly, in order to influence choice, opinion, or behaviour. Typically advertisers work alongside media buying agencies or creative agencies to develop and shape their message in order to produce the intended outcome (e.g. greater engagement/sales). Some advertising agencies are vertically integrated and have their own proprietary ad tech.
  • Intermediaries:[footnote 5] Businesses and/or services which connect buyers and sellers (e.g. through programmatic trading), facilitate transactions, and leverage data to provide buyers with targeting options for online advertising.[footnote 6]
    • Ad tech:[footnote 7] Used to refer to all ad tech intermediaries, including DSPs, SSPs and ad servers (see below).
    • Ad servers:[footnote 8] Publisher ad servers manage the publisher’s inventory and provide the decision logic underlying the final choice of which ad to serve. Advertiser ad servers are used by advertisers and media agencies to store the ads, deliver them to publishers, keep track of this activity and assess the impact of their campaigns by tracking conversions.
    • Demand-side platforms (DSPs):[footnote 9] Provide a platform that allows advertisers and media agencies to buy advertising inventory from many sources. DSPs bid on impressions based on the buyer’s objectives and on data about the final user.
    • Supply-side platforms (SSPs):[footnote 10] Provide the technology to automate the sale of digital inventory. They allow real-time auctions by connecting to multiple DSPs, collecting bids from them and performing the function of exchanges. They can also facilitate more direct deals between publishers and advertisers.
  • Online platforms:[footnote 11] Ad-funded platforms seeking to attract consumers by offering their core services for free. To promote their ad services they combine the attention of their consumers with contextual or personal information/data they have collected to serve advertising. Such platforms include (but are not limited to) search engines and social media sites.[footnote 12] These platforms may also serve advertising to other publishers. We include within this definition ad-funded Video-Sharing Platforms (VSPs).
  • Open display publishers: Attract audiences through providing content and opportunities for advertising placement.
  • Internet served Video-on-Demand (VOD) services: VOD services or On-Demand Programme Services (ODPS) which serve adverts to audiences via the internet.

By ensuring that all actors in the supply chain take an active role in minimising harms, we expect a coordinated approach to have a greater impact than one focused on a single part of the supply chain, while keeping regulatory burdens minimal and fairly shared across the ecosystem. In doing so, we aim to create more space for trust in advertising to grow, with ultimate benefits for the sustainability of the market.

To clarify the relationship between the Online Safety Bill and the OAP on fraudulent advertising, we have set out in the table below the actors in the supply chain who will be held accountable through the forthcoming Online Safety Bill, alongside those who are in scope for the OAP. As with the actors in scope listed above, we take a broad view of intermediaries; the actors listed in the table below illustrate the groups of intermediary organisations in scope and are not exhaustive.

Actor In scope of OSB In scope of OAP
Advertisers (including agencies) No Yes
Ad servers (Intermediary) No Yes
Demand-side platforms (Intermediary) No Yes
Supply-side platforms (Intermediary) No Yes
Platforms Yes Yes
Publishers (other hosts of online ads) No Yes

Consultation question 3

Do you agree with the range of actors that we have included in the scope of this consultation?

a) Yes
b) No
c) Don’t know

Do you think the range should be expanded or reduced? Please explain.

1.2.3 Harms in scope

The OAP will focus on the harms directed towards or experienced by consumers, whether caused intentionally by bad actors in the system (such as fraudulent adverts, adverts for illegal activities and adverts that cause serious or widespread offence) or unintentionally. We will also be considering harms to advertisers and industry, including brand safety concerns driven by the placement of advertising next to inappropriate or harmful content, as well as the potential for advertising to fund harmful content (such as sites carrying misinformation or disinformation). Chapter 3, which includes a full taxonomy of harms, provides further detail on the various harms we are seeking to cover as part of this work. We welcome responses that provide evidence and suggest remedies for the full spectrum of harms associated with online advertising that are not already adequately addressed by existing regulatory mechanisms.

1.2.4 Issues out of scope

The following issues are out of scope of the OAP:

  • Privacy issues - the ICO is examining the use of ad tech in the targeting of adverts to consumers through programmatic advertising. Whilst the use of data is relevant for targeting methods, data policy is out of scope for this consultation and is being considered through the separate consultation: Data: a new direction.
  • User-generated content (except where it is also paid-for advertising) - this will be covered by the forthcoming Online Safety Bill.
  • Political advertising - this has not been subject to advertising Codes since 1999. The government believes that having political advertising vetted or censored would have a chilling effect on free speech.
  • Competition issues - this will be dealt with by the new pro-competition regime for digital markets.

2. The online advertising market

2.1 The value of advertising

Advertising contributes to the economic, social and cultural life of the UK. The UK has a thriving advertising market, with a total industry turnover of £40 billion in 2019 (for more information see the Annual data on size and growth within the UK non-financial business sectors as measured by the Annual Business Survey). It generated £17 billion in Gross Value Added and exported £4 billion in services in 2019. Whilst initially hit by the COVID-19 pandemic, the industry has made a remarkable comeback and profits are on track to be greater than initially predicted in 2021. An Advertising Association expenditure report forecast that the UK ad spend would total £29.3 billion in 2021, a year-on-year increase of 24.8%. Of this, online advertising spending in the UK was £16.47 billion in 2020, including £8.37 billion in search advertising, £6.31 billion in display advertising, and £0.98 billion in classified advertising (see section 1.2.1 for a definition of the different market categories).

Aside from the economic contribution of the industry, the sector makes a significant contribution to our social and cultural life, drawing in a wealth of creative talent. Employment in the sector in the UK was estimated at around 201,000 in 2020, with the sector bringing in a host of international workers, and many global agencies headquartered in the UK.

Advertising spend across all channels in the UK was £23.9 billion in 2019, up 31% from 2012 (figure 2). Online advertising has come to sit at the heart of the digital economy, and spending grew steadily over the period to reach £14.3 billion in 2019, an increase of 144% since 2012. The advertising market is dynamic and businesses have moved towards the channels that give them access to large audiences and demonstrate positive returns on investment. This has meant advertising has moved away from some channels, with TV, print and direct mail seeing decreased spending over the period.

Figure 2: UK Advertising expenditure by channel, 2012-2019

Advertising Expenditure by channel (£m)

Year Cinema Radio Out of Home Direct mail Print TV Online Total
2012 £232m £595m £1,044m £2,022m £3,787m £4,708m £5,862m £18,250m
2013 £194m £564m £1,039m £2,007m £3,359m £4,741m £6,569m £18,437m
2014 £203m £595m £1,054m £1,950m £3,063m £4,928m £7,584m £19,377m
2015 £246m £612m £1,094m £1,977m £2,729m £5,264m £8,913m £20,835m
2016 £258m £635m £1,158m £1,785m £2,348m £5,216m £10,382m £21,782m
2017 £260m £644m £1,144m £1,753m £1,938m £4,897m £11,553m £22,189m
2018 £254m £668m £1,209m £2,022m £1,720m £4,941m £11,987m £22,801m
2019 £313m £653m £1,301m £1,385m £1,536m £4,478m £14,285m £23,951m
Source: AA/WARC, Expenditure report, April 2012-2019

Online advertising is the fastest growing area of advertising globally. Global investment in ad spend is predicted to be in the region of $700 billion by the end of 2022, with digital advertising contributing to around half of that figure. The UK is the largest market in Europe for advertising and fourth largest in the world by ad spend. A significant boost to the UK’s advertising industry has been its strong position in the development of advertising technology, which has attracted over £1 billion in investment since 2013.[footnote 13]

The rise of online advertising to become the dominant modern advertising medium reflects the significant shift seen in consumer consumption habits (and therefore eyeballs) over the last decade from traditional media, such as newspapers and TV, to online formats. Over four fifths of the adult population used the internet in 2021[footnote 14], with the average time spent online per week increasing from 14 hours to just under 25 hours from 2010 to 2020.

With the growth in internet consumption, advertising has become the primary source of revenue for many online businesses and underpins the provision of key online services such as search and social media. These services are positively transforming people’s lives, many of which we increasingly find hard to imagine living without. However, access to these services for consumers generally means the acceptance of advertising that is targeted at them, sitting alongside the content they enjoy. For businesses, online advertising offers high levels of personalisation and efficiency in reaching audiences, which is cost effective as well as convenient to both advertisers and consumers. It can help consumers to discover valuable new goods, interests and services, creating more accessible and low cost routes for businesses to engage with their audiences.

While online advertising offers many benefits for businesses and consumers, it is worth noting that the CMA’s study of platforms funded by digital advertising found that the largest platforms, Google and Facebook, and the incumbency advantages that these firms enjoy, caused a number of serious issues for other firms and consumers. The CMA found that ‘weak competition in search and social media leads to reduced innovation and choice and to consumers giving up more data than they would like’. The summary box in the market study final report states that ‘weak competition in digital advertising increases the prices of goods and services across the economy and undermines the ability of newspapers and others to produce valuable content, to the detriment of broader society’. These dynamics may therefore be eroding cost efficiency that could be enjoyed by a wide range of businesses in the marketplace.

This increased share in the advertising market enjoyed by online advertising, coupled with the related revolution in business practices that has accompanied it, has placed increasing pressure on the regulatory framework for advertising - originally designed in the offline era - to remain effective and relevant.

2.2 The online advertising ecosystem and market dynamics

The online advertising market is a continually evolving, dynamic market, with fast paced innovation in business practices. We recognise that an accurate and up-to-date view of the market dynamics within online advertising is critical to effectively assessing the appropriateness, proportionality, relevance and impact of potential interventions that could be taken to reform the regulatory framework. We use a range of terminology throughout this document, with a glossary set out at annex B.

In its most simplified form, the online advertising system involves businesses that want to advertise first designing a campaign, often working with advertising creative agencies and media buying agencies.

  • Advertisers, working with their buyers, form the ‘demand’ side of the market and will then purchase advertising space in order to reach their desired audiences.
  • Advertising space is provided by platforms and publishers, who form the ‘supply’ side of the market and can sell access to their audiences in order to fund their businesses.
  • Between these sides of the market, advertising intermediaries may operate to facilitate transactions, leverage data or provide other services. The range of intermediaries between advertisers and publishers are sometimes referred to as ‘ad tech’ or the ‘ad tech stack’. Online advertising can be highly targeted to specific consumer interests using audience data and real-time automated systems provided by these intermediaries or platforms.

Online advertising provides a number of different ways for reaching consumers, with advertisers often using a combination of these supply chains to reach audiences. Each market category (set out in chapter 1) provides advertisers with a vehicle to advertise their services or products with consumers.

There are two primary channels through which advertisers can purchase advertising space online:

(i) buying space directly from ad-funded platforms that offer an integrated ad buying service - these systems are sometimes referred to as ‘owned and operated’ systems or ‘walled gardens’; or

(ii) through the open display market, whose supply chain matches advertisers’ buying specifications at one end, with advertising space across a range of open display publishers at the other.

2.2.1 Programmatic advertising

Programmatic advertising is the use of automated systems and processes to buy and sell inventory. This technology is in widespread use across the online advertising ecosystem. Selling advertising programmatically means that the selection, pricing and delivery of adverts to selected audiences are organised through automated computerised algorithms. The selection and targeting of audiences in programmatic systems are heavily reliant on the use of data (this can be a combination of personal and contextual data) by platforms and intermediaries, in order to personalise advertising.

Although some businesses and advertisers still choose to make direct deals for display advertising, programmatic technology is used to support a wide variety of online transactions and remains popular given the level of sophistication it can offer. Advertisers want to maximise their return on investment, and targeting specific niches of potential consumers through programmatic technology is one way to ensure that their spend is efficient.

Practices surrounding the use of data in programmatic advertising have been changing in recent years following new rules governing the use of cookies, which have influenced trends towards an increasing use of contextual data over personal data. Other innovations in the industry are likely to have further implications, such as those designed to offer an alternative to cookie-based approaches. Though data driven programmatic systems sit at the heart of most transactions today, the online advertising ecosystem is not static and market practices change over time.

Programmatic technology has opened up advertising markets to SME advertisers in a way that was not previously available to them, enabling them to target advertising cheaply and efficiently and offering an alternative to advertising bought through platforms. At the other end of the supply chain, publishers of any size can also sell inventory programmatically, enabling them to reach a greater range of advertisers.
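The selection and pricing logic described in this section can be illustrated with a minimal sketch of a programmatic auction. This is an assumption-laden simplification (hypothetical `Bid` structure, keyword-based targeting, a second-price rule commonly used in ad auctions); real systems involve many more signals, participants and pricing models.

```python
# Illustrative sketch only: hypothetical data structures, not a real ad system.
from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    amount: float            # price offered for this impression
    target_segments: set     # audience/context segments the advertiser targets

def run_auction(impression_segments: set, bids: list[Bid]):
    """Select a winning ad for one impression via a second-price auction.

    Only bids whose targeting overlaps the impression's contextual/personal
    signals are eligible; the winner pays the second-highest eligible bid.
    """
    eligible = [b for b in bids if b.target_segments & impression_segments]
    if not eligible:
        return None, 0.0
    eligible.sort(key=lambda b: b.amount, reverse=True)
    winner = eligible[0]
    # Second-price rule: pay the runner-up's bid (or your own if unopposed).
    price = eligible[1].amount if len(eligible) > 1 else winner.amount
    return winner.advertiser, price

bids = [
    Bid("BrandA", 2.50, {"cookery", "food"}),
    Bid("BrandB", 1.80, {"cookery"}),
    Bid("BrandC", 3.00, {"motoring"}),   # ineligible: no matching segment
]
winner, price = run_auction({"cookery", "recipes"}, bids)
# BrandA wins the impression but pays BrandB's price of 1.80
```

The second-price design is one reason programmatic buying is attractive to smaller advertisers: bidders can bid their true valuation without overpaying relative to the competition.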

2.2.2 Online advertising supply chains

This section sets out the supply chains that form the primary channels advertisers use to reach consumers.

Platforms and the ‘Owned and Operated’ or ‘Walled Garden’ model

Most of the larger, ad-funded platforms, including Google and Facebook/Meta (which, according to the CMA’s Online platforms and digital advertising market study, together accounted for around 80% of the online advertising market in 2019), operate closed supply chains. These services deliver a variety of marketing channels and targeting services in-house. In such a system, the platform owns the relationship with both the audience and the advertiser, with no other party (other than potentially a media agency) involved in the buying and selling of its advertising inventory. Such a business model is also sometimes referred to as a ‘walled garden’ or ‘owned and operated’ system.

Some platforms, including both Google and Facebook/Meta, offer advertiser ‘self-service’ options, which means smaller operators can directly purchase advertising with little friction, creating a low bar to entry. High market share, ownership of key technologies in-house, and strong user data assets, lead to larger platforms such as Google and Facebook/Meta having more bargaining power. The CMA’s market study found that typically publishers are unable to negotiate the terms of their relationship with Google and Facebook.[footnote 15]

The below diagram (figure 3) offers a simplified view of the process used by platforms to serve advertising to users in a ‘walled garden’ or owned and operated system.

Figure 3: Owned and operated platform supply chain (simplified)

Notes on figure 3:

1. DCO = dynamic content optimisation.
2. This diagram simplifies the supply chain. In some cases, advertisers interface directly with platforms, without using a media agency. In some cases, advertisers or media agencies use tools such as Smartly.io to manage campaigns on platforms. In some cases, publishers distribute content on platforms in return for a share of advertising revenue and/or the right to sell advertising on their content.
3. Buying channels and platforms, and tools and services differ between platforms.

Open display markets

The main alternative to owned and operated systems for buying online advertising inventory is the open display market. Open display is a sector of the market that uses programmatic selling and buying through an intermediated model, and is often used to connect advertisers to a range of online publishers, including news publishers. Advertising intermediaries, or the ‘ad tech’ industry, have evolved to meet the needs of the two main groups of actors: advertisers and publishers. Advertisers sit on what is called the demand side: they are interested in reaching online audiences with their message. Publishers sit on the supply side, operating websites or apps and seeking to monetise their services by selling advertising inventory.

The open display market is made up of a complex chain of businesses providing specific functions within the advertising supply chain.

On the demand side, the main participants include media agencies, advertiser ad servers, and demand-side platforms (DSPs). Advertiser ad servers are used by advertisers and media agencies to store information about ads and advertising campaigns, as well as to deliver, track and analyse campaigns. A DSP is a platform that allows advertisers and agencies to purchase targeted ad impressions from many different sources through a single interface. Real time bidding is typically used to execute this process.

On the supply side, the main participants include supply-side platforms (SSPs) and publisher ad servers. SSPs provide the technology for publishers to sell ad impressions through a range of external platforms (for example DSPs), allowing them to sell all of their inventory through a single interface. They can also be used for more direct deals between publishers and advertisers. Recently, SSPs and ad exchanges have largely merged to the point where these terms are often used interchangeably. Based on the bids received from different SSPs, and the direct deals agreed between publishers and advertisers, publisher ad servers manage publishers’ inventory and the final decision on displaying online advertising content to users.

The industry also includes further market participants involved in the provision and management of data, targeting practices and analytics in online advertising including data suppliers, data management platforms (DMPs), and measurement and verification providers.
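The interaction between these demand-side and supply-side components during a single real-time bidding transaction can be sketched as follows. The interfaces below are hypothetical and purely illustrative; real exchanges (such as those following the IAB's OpenRTB protocol) carry far richer request and response payloads.

```python
# Minimal sketch of one open display transaction (hypothetical interfaces).

class DSP:
    """Demand-side platform: bids on impressions on behalf of advertisers."""
    def __init__(self, name: str, bid_amount: float):
        self.name, self.bid_amount = name, bid_amount

    def bid(self, bid_request: dict) -> float:
        # A real DSP would evaluate the targeting data in the request here.
        return self.bid_amount

class SSP:
    """Supply-side platform: fans a bid request out to DSPs, runs the auction."""
    def __init__(self, dsps: list):
        self.dsps = dsps

    def auction(self, bid_request: dict):
        bids = [(d.bid(bid_request), d.name) for d in self.dsps]
        return max(bids)  # highest bid wins (first-price, for simplicity)

class PublisherAdServer:
    """Final decision logic: compares the programmatic bid with direct deals."""
    def __init__(self, ssp: SSP, direct_deal_price: float):
        self.ssp, self.direct_deal_price = ssp, direct_deal_price

    def serve(self, bid_request: dict) -> str:
        price, dsp_name = self.ssp.auction(bid_request)
        if price > self.direct_deal_price:
            return f"programmatic ad from {dsp_name} at {price}"
        return "directly sold ad"

server = PublisherAdServer(
    SSP([DSP("DSP-1", 1.20), DSP("DSP-2", 2.10)]),
    direct_deal_price=1.50,
)
result = server.serve({"url": "news.example", "segments": ["sport"]})
```

The key point the sketch captures is that the publisher ad server holds the final decision logic: the SSP's programmatic auction is only one input, weighed against any direct deals the publisher has agreed.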

Options for purchasing advertising space range from use of the full programmatic supply chain through to direct contracts between advertisers and some publishers. Some publishers are also expanding their own capability to offer more tailored direct services to advertisers, including more advanced audience targeting. This may in turn be leading, in some cases, to increased use of direct contracts, which can bypass some of the open display supply chain. The CMA market study concluded that open display comprises around 32% of display expenditure online.

The below diagram (figure 4) provides a simplified illustration of the online open display supply chain, including the intermediaries involved in dissemination through this method.

Figure 4: Programmatic open display supply chain (simplified)

Notes on figure 4:

1. DSP: demand-side platform.
2. SSP: supply-side platform.
3. Some ad tech competitors operate at multiple levels of the supply chain (e.g. Google, Amazon, Yahoo!).
4. This diagram simplifies the supply chain. Not all categories of ad tech vendors are shown, such as header bidding solutions and ad networks. In some cases, advertisers interface directly with ad tech vendors, without using a media agency. Some publishers sell ad inventory directly, involving manual orders or the use of an automated buying platform, without DSP and SSP involvement.
5. The ad tech ecosystem and roles are rapidly evolving.

Publishers in the open display market

We seek to draw a distinction between publishers in the open display market, such as online news publishers, and platforms operating owned and operated systems. Platforms tend to oversee the process of matching adverts to advertising space, whereas publishers are at the receiving end of the open display supply chain, and thus may have less control over what is advertised in their inventory. There is also a key difference in the scale, reach and resources available to these players which means we would expect any regulation in relation to their role to be proportionate. Open display publishers often rely on advertising for their financial sustainability and declining ad revenue has led some publishers to develop alternative business models. Many publishers apply controls over the quality and content of advertising on their sites due to reputational damage risks.

The CMA market study identified that some publishers of online content rely on large digital platforms, such as Google and Facebook’s user-facing services, to host content or for referrals of traffic to their online properties, which they can then monetise by displaying advertising to these visitors. They concluded that those publishers face an imbalance of bargaining power with Google and Facebook, which disadvantages their businesses in a number of ways including restrictions on their ability to control their own content and data, to manage traffic to their websites and to target advertising.

Online advertising buying services, such as media agencies

Most larger brands secure the services of media agencies to help them develop and deliver advertising strategies for the products and services that they wish to advertise. Advertising campaigns are generally complex and will involve buying advertising content over a range of channels both on and offline. Whilst it is possible for brands and advertisers to buy advertising space direct from both ad funded platforms and/or publishers, larger brands may use the services of a media buying agency whose job it is to procure relevant advertising space from across the online advertising ecosystem, in line with the agreed buying strategy for the brand’s products and/or services.

In addition to this, some agencies have developed their own proprietary tools and can therefore offer a range of services to their clients, such as data aggregation, offering clients some services which were previously provided by intermediaries. In this way, some agencies’ roles in delivering advertising have become more complex and vertically integrated.

Whilst media agencies play a key role online, in recent years it has also become common for brands to establish direct relationships, particularly with the ad funded platforms, meaning that media buying agencies do not always intermediate all transactions online. We understand this has had a consequential effect for the Advertising Standards Authority (ASA) funding levy, which is collected by the Advertising Standards Board of Finance (Asbof), primarily from media agencies. The increase in brands and advertisers buying advertising space direct from platforms, and thereby escaping the levy, has only been partly offset by financial contributions by some platforms, including Google and Meta. It is also the case that small and medium-sized enterprises (SMEs) who may not have complex, multi-channel buying strategies, are less likely to use media agencies.

New developments

In recent years, we have seen changes surrounding the use of data online, such as the introduction of new policies by big tech companies to limit the targeting of users. For example, in place of third-party cookies (which are being phased out on Chrome and are already blocked by default on Safari and Firefox), Google aims to implement replacement technologies, which it is testing through its ‘Privacy Sandbox’ initiative. These would still allow websites to show targeted adverts while reducing the amount of information users share.

In the same vein, Apple’s new privacy measures include turning off by default the “identifier for advertisers” (IDFA) used for tracking. This means Apple users must grant apps explicit permission to use it, which is likely to significantly reduce the data available to third parties. In addition, the Meta group has recently announced that it will remove some of its more controversial targeting options (such as those based on political affiliation, religion and sexual orientation).

Taken together, these changes may have a significant impact on the way online advertising is targeted, and are not without considerable competition concerns.[footnote 16] We are also aware of growth in new online media and advertising formats, including in-game, live influencer and voice, amongst others. As these formats grow, the advertising supply chain is expected to evolve further, and we will continue to consider these market changes as we develop the OAP.

Implications for reform

Understanding the way in which advertising space is typically purchased and disseminated online, and the different services that can be used within the online advertising ecosystem to reach audiences, is key to identifying appropriate regulatory solutions. Those solutions will need to reflect the main market dynamics, such as the size, role, reach and resources of the different players, the technologies they use and the activities in operation across all routes to market. We also recognise that such solutions will need to be proportionate for market participants of all sizes and flexible enough to respond to market changes.

Consultation question 4

Do you agree that we have captured the main market dynamics and described the main supply chains to consider?

a) Yes
b) No
c) Don’t know

Please explain your answer.

Consultation question 5

Do you agree that we have described the main recent technological developments in online advertising in this section (section 2.2.2)?

a) Yes
b) No
c) Don’t know

Please explain your answer.

3. Harms caused by online advertising

3.1 Summary of the problem

As set out in the previous chapters, the online advertising industry has experienced rapid growth as online media consumption has increased. However, the size and growth of the sector has led to concerns about potential harms to consumers, firms and wider society.

Building on the call for evidence and research commissioned by DCMS over the past few years, we consider that there is a range of harms attributable to online advertising which require urgent action.

We propose these harms can largely be divided into two key areas:

a) the content of adverts; and

b) the targeting or placement of adverts.

We also recognise that harm can be exacerbated through the combination of the content of an advert and its targeting or placement, and that the risk of harm, particularly where content of an advert is combined with how it is targeted, can be greater for some groups than it is for others.

3.2 Call for evidence and previous commissioned research

3.2.1 Commissioned research

DCMS commissioned Plum Consulting to produce two reports on the online advertising landscape. A summary of these reports is set out below.

Online advertising in the UK (2019)

The study explored the structure of the online advertising sector and the movement of data, content and money through the online advertising supply chain. The report took stock of the UK’s online ad market and drew implications for consumers, society and the economy. Key findings from the report included:

  1. Census: Given the current programmatic advertising market structure and practices, it is not possible to develop robust, independently verified, census-level data for the share of advertiser investment received by publishers or intermediaries.
  2. Potential harms: Harms could be broken down into three broad categories:
    - Individual harms - potential impacts on individual firms and consumers which could include brand risk and inappropriate advertising.
    - Societal harms - practices which may be detrimental to society as a whole, such as discriminatory advertising through targeting.
    - Economic harms - potential harms that may arise from lack of competition or inefficiencies within the sector.
  3. Social media placement: Placing advertising within content, social or product feeds accounted for 92% of UK ‘native advertising’ (advertising which is integrated into the surrounding content in a non-interruptive manner) expenditure in 2017.
  4. Targeting of online advertising: Increasingly, advertisers are creating profiles of their customers encompassing multiple data points, such as demographics and behaviour. They then seek to target “lookalike” audiences online.
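Purely as an illustration of the “lookalike” technique mentioned in point 4, audience expansion can be thought of as finding users whose attribute vectors most resemble a seed set of existing customers. This sketch, and every name and number in it, is hypothetical; real systems use far more data points and proprietary models.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length attribute vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def lookalikes(seed_profiles, candidates, threshold=0.9):
    """Return candidate user ids whose profile resembles the average seed profile."""
    n = len(seed_profiles)
    centroid = [sum(col) / n for col in zip(*seed_profiles)]
    return [uid for uid, vec in candidates.items()
            if cosine(vec, centroid) >= threshold]

# Hypothetical attribute vectors (e.g. age band, browsing-category scores)
seed = [[1.0, 0.9, 0.1], [0.9, 1.0, 0.0]]
pool = {"u1": [0.95, 0.9, 0.05], "u2": [0.0, 0.1, 1.0]}
# lookalikes(seed, pool) → ["u1"]: only u1 resembles the seed customers
```

The point is only that similarity to a seed audience, rather than any declared consumer choice, drives who is targeted.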

Mapping online advertising issues, and industry and regulatory initiatives (2020)

The overarching finding of the 2020 report was that, without an adequate regulatory system, consumers will not be sufficiently protected from the harms associated with online advertising. In addition, the absence of regulation across the industry leaves a gap in which unregulated individuals and firms can operate, a significant area for improvement.

In order to tackle these issues effectively, Plum cited the following key areas as requiring intervention:

  1. A lack of a coherent consumer protection framework for online advertising issues. Intermediaries and online platforms do have certain obligations under consumer law – for example, the Consumer Protection from Unfair Trading Regulations 2008 (CPRs) apply to platform operators where they act as traders and are engaged in commercial practices (as defined by that legislation). However, in general, there is room for a more coordinated and clearly signposted mechanism for consumers to report inappropriate ads and seek redress. The issue is compounded by overlaps in regulatory structure and responsibilities, which makes enforcement potentially difficult and time consuming. Also, the nature and causes of some of these harms often go beyond online advertising, underlining the need for closer coordination to improve regulatory effectiveness.
  2. Better data for monitoring purposes. Outside of the ASA, there is no independent measure for the effectiveness of the self-regulatory system. Without a requirement for organisations to share data there will continue to be an issue around transparency and accountability. The ASA does not have information gathering powers (as there is no underpinning legislation for this), meaning any cooperation from industry is based on goodwill.
  3. Limited regulatory oversight of online platforms. Currently, the ASA’s self-regulatory system for non-broadcast advertising primarily applies to advertisers with limited application to other actors in the supply chain. Whilst others - e.g. online platforms - have their own standards and codes to adhere to, there is no single regulatory body or organisation that is specifically responsible for ensuring these issues are enforced effectively or reporting on whether they are sufficiently tackling the issues they seek to address.
  4. Limitations of the incentive-based system. Incentives and sanctions are capable of keeping ‘good’ actors, such as legitimate companies and sole traders, in line, but they do not act as a strong enough deterrent for the small minority of ‘bad’ actors who operate with criminal intent, such as those looking to commit fraud and other illegal activity.
  5. Underdeveloped guidance on the potential issues associated with targeting. Current guidance is predominantly focused on children - the report suggests that we need to consider this further including the development of guidance for other vulnerable groups.
  6. Limited scope and reach of consumer awareness and public education initiatives. Whilst there is work being done in this space (by government and industry), there is still much to do in this area to ensure consumers are equipped with the tools to be safe online.

3.2.2 Call for evidence on online advertising

As the second Plum report was being developed, DCMS also held a call for evidence on online advertising in 2020. Many of the responses received supported the conclusions of the Plum report, demonstrating a consistency in the issues requiring intervention. We asked ten questions as part of the call for evidence, which can be broadly categorised under the themes listed below. This summary outlines the views provided by the 50 respondents.

The extent to which consumers are exposed to harm by the content and placement of online advertising

The biggest concern raised by respondents to the call for evidence related to fraudulent advertising (including scams), with others citing concerns around legal but harmful themes in advertising, such as body image. These were particularly highlighted by consumer and civil society groups who raised concerns about, for instance, the exposure of young audiences to adverts for products and services like diet pills and plastic surgery. Misleading adverts were also listed as being of concern, but to a lesser extent. Lastly, respondents raised concerns that there was evidence of advertising being used to fund misinformation, with one respondent noting that fake news sites in Europe could be earning in the region of $75 million in advertising revenue per year.

Respondents also commented on the use of data and targeting in online advertising, stating that some practices around the personalisation of adverts were causing harm. One trade association highlighted that 86% of consumers wanted to have greater control over their data and 88% wanted to have greater transparency on how their data is used to ‘follow them around the internet’, and child safety groups raised concerns that influencers directly target audiences with posts which aren’t fully disclosed as ads. Many cited experiences of consumers who felt that the ads being targeted at them were very personal and invasive, for example those struggling with mental health issues.

The discrepancies between broadcast and online advertising regulation including the lack of a 9pm watershed equivalent for online advertising were noted by respondents. Although the 9pm watershed was viewed as imperfect, respondents did feel it was effective in providing a degree of insulation from adult TV content and advertising which was lacking in an online sphere.

The effectiveness of the current governance and regulatory framework for online advertising

The majority of stakeholders (59%) called for significant regulatory reform, with many saying the current regulatory system was insufficient. In particular they highlighted a lack of compliance with regulations owing to inadequate funding and a lack of effective enforcement powers. Respondents also reported a lack of transparency in the online advertising ecosystem and a lack of accountability for actors besides advertisers, including platforms and intermediaries. To combat this, respondents encouraged further government intervention, whether through further commissioned research or the introduction of more stringent regulation.

Respondents overall did not feel that the industry’s current initiatives went far enough, pointing to a fundamental lack of trust in online advertising. Concerns regarding unethical targeting were highlighted by respondents who felt that personalised adverts followed them around the internet and that categorisation and targeting in online advertising felt overly personal and invasive. This caused several stakeholders to flag the increased risk of invasive targeting of vulnerable people, such as those suffering from mental health conditions.

Respondents also flagged concerns about the damage which online advertising could cause to other forms of media, principally through the regulatory discrepancy with broadcast and, to a lesser extent, damage to the press sector.

The regulatory reform respondents want to see

A significant proportion of respondents said they favoured ‘beefing up’ the regulatory system and providing regulators with better funding and greater powers for enforcement. Responses suggested that the current standards and regulations in place were not doing enough to protect individuals. Whilst respondents pointed to the potential risks of harm associated with the use of data and targeting, they also pointed to how such technology could help form part of the solution - for example, by targeting harmful content away from vulnerable individuals.

The call for evidence responses overall indicate that the primary concern for the majority of respondents was a fundamental lack of trust in online advertising. There were several driving factors behind this broken trust, including that of a regulatory framework which needs to be strengthened as well as a lack of accountability and transparency across the board. Due to the lack of transparency in the system, there were substantive evidence gaps and it was not straightforward to unpick the dynamics at play in this market.

The chart below (figure 5) represents harms reported by respondents to the call for evidence. The most reported harms were offensive / harmful ads, malicious ads and ads for fraudulent goods and services.

Figure 5: Breakdown of harms reported by respondents to the 2020 call for evidence

Call for Evidence: harms reported (% of respondents)

Harms No. of respondents % of respondents
Malicious Ads / Scams 16 31%
Offensive / harmful ads 14 27%
Ads for illegal, restricted, counterfeit or fraudulent goods/services 11 22%
Brand Safety 9 18%
Ad fraud 8 16%
Mis/disinformation 6 12%
Misleading ads 5 10%
Fake endorsement 3 6%
Non-identified ads 3 6%
Ad blocking 3 6%
Bombardment 3 6%
Fragmented consumer reporting 3 6%
Keyword blocklisting 1 2%

The chart below (figure 6) sets out how different stakeholder groups viewed the current regulatory framework at the time of the 2020 call for evidence. Given the number of respondents, and that regulatory and market developments since then are not reflected, we are seeking to update this assessment in this consultation.

Figure 6: Assessment of the self-regulatory system by stakeholder group

Organisation Type Poor Neutral Good Total
Broadcasters 66.6% 33.3% 0 100%
Consumers Groups 83.3% 16.6% 0 100%
Online platforms 16.6% 16.6% 66.6% 100%
Publishers 33.3% 33.3% 33.3% 100%
Advertisers 60% 0 40% 100%
Regulators/industry 60% 20% 20% 100%
Child Safety Groups 25% 75% 0 100%
Research bodies/academics 25% 75% 0 100%
Rightsholders 0 100% 0 100%
Individuals/businesses 100% 0 0 100%

3.3 Taxonomy of harms caused by online advertising

The proposed taxonomy of harms below includes a spectrum of harmful online content and placement which we understand to be caused by or exacerbated through online advertising. These range from high-profile illegal content to harms which are legal but nonetheless damaging. The harms apply both to consumers and to industry.

We consider the full taxonomy of harms to fall into scope for consideration and potential action under the OAP. This is intended to be a comprehensive taxonomy and the government welcomes feedback on whether there are any categories of harm that are not included and should be considered as part of the OAP.

Figure 7: Taxonomy of harms in scope of the consultation

Legality/ Illegality Category of Harm Description
Consumer harms Harmful advertising content Illegal content Adverts for illegal activities, products or services Adverts for illegal products such as drugs and weapons. Adverts that are prohibited in law (e.g. prescription medicines). Adverts that facilitate human trafficking and slavery, servitude and forced or compulsory labour.

- For example, adverts on adult services websites (ASWs) that facilitate the sexual exploitation of individuals (both children and adults), who are trafficked (from overseas and within the UK) and advertised by individuals or organised criminal groups.
Consumer harms Harmful advertising content Illegal content Malicious advertising Adverts that contain JavaScript to force redirects or download payloads, enabling scams, cryptojacking or botnets, and/or adverts that cloak landing pages. In some cases these are operated by sophisticated bad actors at large scale.
Consumer harms Harmful advertising content Illegal content Fraudulent advertising and counterfeiting Adverts seeking to defraud, such as breaches of the Financial Promotions regime, including investment frauds and fraudulent products and services such as fake ticketing through scam adverts.

Counterfeit fashion, cosmetic and pharmaceutical products are covered by separate legislation.
Consumer harms Harmful advertising content Illegal content Fake endorsements Use of celebrity images to promote products or scams without their consent.
Consumer harms Harmful advertising content Illegal content Misleading adverts Adverts that include false claims or otherwise mislead.
Consumer harms Harmful advertising content Illegal content Non-identified adverts Paid-for influencer marketing that is not clearly identified as such.
Consumer harms Harmful advertising content Legal content Offensive adverts Adverts that involve harm or create offence but are not illegal.
Consumer harms Harmful advertising content Legal content Adverts for products or services deemed to be harmful, but not illegal Adverts which have specific restrictions, e.g. adverts for HFSS products, gambling or alcohol, or adverts involving harmful depictions such as glamorising knives.
Consumer harms Harmful advertising content Legal content Adverts that are seen to contribute to body image concerns Adverts that may portray or present body types, cosmetic interventions or certain behaviours/lifestyles in an unhealthy way or way that creates undue pressure. Depending on the nature of the underlying issue, the ASA may categorise these as misleading, harmful or otherwise socially irresponsible adverts.
Consumer harms Harmful advertising targeting and placement   Mis-targeting Age-restricted adverts (such as for alcohol or gambling) delivered to media disproportionately popular with children.

Placement of advertising next to inappropriate or harmful content such as hate speech or digital piracy.
Consumer harms Harmful advertising targeting and placement   Discriminatory targeting Discrimination on the basis of age, ethnicity, gender, race or sexual orientation where this discrimination causes harm.
Consumer harms Harmful advertising targeting and placement   Targeting vulnerable people Targeting vulnerable audiences directly or by proxy, such as gambling adverts targeting individuals through marketing emails who have taken steps to self-exclude from receiving targeted gambling marketing communications.
Industry Harms Ad Fraud     Cyber criminals create fake traffic (such as using botnets to mimic real consumers), audience data, context or actions to syphon revenue from the display advertising ecosystem.
Industry Harms Brand safety including mis-targeting     Placement of advertising next to illegal, inappropriate or harmful content such as hate speech or digital piracy, or placement of inappropriate or harmful advertising next to legitimate content – damages advertiser and publisher brands and funds harmful content.

Adverts served to unintended or inappropriate audiences in a way that harms the advertiser/brand
Industry Harms Inaccurate Audience Measurement     There is currently no gold standard of audience measurement, meaning advertisers and publishers lack full awareness of how and where adverts are being served and whether they are achieving their campaign objectives (such as higher product sales or improved brand perception).

With limited independent verification due to walled garden operating models, advertisers have limited means beyond the data shared with them to understand how successful their campaigns have been, meaning tech companies may oversell figures in order to retain investment.

Our aim through the OAP is to design a regulatory framework for online advertising that will be robust enough to respond to the spectrum of harms presented above. We are keen to explore measures, enforcement tools and options for regulatory reform that would address all advertising harms by addressing the underlying drivers of harm, with a specific focus on addressing the highest risk consumer harms. We envisage a system-wide approach that will be sufficiently flexible to future-proof against developments, and which reflects the amount of control the different actor(s) may have over content - rather than bring forward individual measures for each type of content. In addition to this, the framework needs to enable regulators to act (either proactively or reactively) at pace, with sanctions that incentivise compliance.

Consultation question 6

Do you agree that our taxonomy of harms covers the main types of harm found in online advertising, both in terms of the categories of harm as well as the main actors impacted by those harms?

a) Yes
b) No
c) Don’t know

Please explain your answer, indicating any types of harm, or actors impacted by the harm that we have not captured, as well as any evidence to support your answer.

3.3.1 Harms caused by the content of adverts

Illegal content

Illegal content, by its very nature, is underpinned by criminal intent. As outlined in our taxonomy of harms, this content may include advertising for fraudulent products or services, for illegal products such as weapons or drugs, or for illegal activities such as adverts that facilitate immigration crime, modern slavery or sexual exploitation.

Sexual exploitation online can involve individuals (both children and adults) trafficked (from overseas and within the UK) and then advertised on adult services websites (ASWs) by individuals and organised crime groups. ASWs are online directories of user-generated, in most cases paid-for, advertising that provide a platform on which sex workers can legally advertise their services, including live-streaming. The majority of adverts for sexual services have moved online in recent years and are openly accessible through ASWs, which are popular websites in the UK.

Fraud has far reaching consequences, carrying not just economic but also psychological impacts. Consumers can suffer anxiety, distress and distrust when victimised by a fraudulent advert. Research published by the Money and Mental Health Policy Institute suggests that those who have experienced mental health problems are three times more likely to have fallen victim to an online scam than the wider population. In 2020, 4.6 million people with mental health problems across the UK were found to have been the victim of an online scam over the course of their lifetimes, and advertising is a popular way to reach them. When consumers are affected by adverts that are harmful to them, they may be unaware of how to raise a complaint, either with the relevant authority, or with the advertiser and supply chain. An illustrative example in relation to fraud is included at annex C.

It is clear that some bad actors in this space are incredibly sophisticated, especially in relation to fraudulent adverts, and are able to use the gaps in the regulatory system to spread their message at pace and on a vast scale. In addition to this, technological developments mean that they are able to adapt their illegal adverts in order to present new challenges to the market. For example, bad actors are able to disguise their illegal activity as legitimate in order to mislead their victims. The adverts consumers are shown may also not be easily identified by some users as adverts, where the advert is made to blend in with other content. This is prevalent in native advertising, the use of social media to promote products, search engines, advertorial content and other forms of online advertising. Not knowing what is a paid-for promotion increases the potential for consumers to be misled and could result in actions including purchase decisions they would otherwise not have made.

As it stands, there are strict rules around placing fraudulent advertising or advertising products such as drugs or weapons. However, much of advertising is self-regulated, and many of the tools available to the frontline regulator do not provide strong incentives for bad actors to comply. The layers of complexity and lack of transparency in the supply chain, as well as the low barriers to entry in the market, can be exploited by those intent on causing harm, enabling illegal advertising content to continue to be hosted online.

Misleading advertising

According to the ASA, over 70% of its work involves responding to and tackling concerns related to misleading advertising. The ASA is regarded as the “established means” for the investigation and resolution of complaints about misleading, aggressive or otherwise unfair business to consumer commercial practices in marketing contrary to the Consumer Protection from Unfair Trading Regulations 2008 (CPRs), and impermissible comparative advertising and misleading business to business marketing contrary to the Business Protection from Misleading Marketing Regulations 2008 (BPRs).

In the case of non-broadcast advertising, including online advertising, the ASA is able to refer cases of persistent or serious misleading advertising to its Trading Standards legal backstop. It has done so in a relatively small number of cases involving companies’ claims on their own websites or on their own social media accounts i.e. non-paid-for advertising online. The ASA has not had cause to refer cases involving paid-for adverts, because it has been able to secure amendments to or withdrawals of misleading online adverts through its own regulatory processes, which may involve recourse to sanctions.

The ASA does not process paid-for adverts that form part of fraudulent practices through its normal regulatory processes, in recognition that the criminal intent underlying these adverts is most immediately dealt with by companies in the advertising supply chain and, more generally, by UK and international law enforcement bodies. However, it is possible to report such ads via the ASA’s website, with the ASA using the intelligence gathered to alert a wide range of companies in the online advertising supply chain which participate in its Scam Ad Alert scheme.

We know that misleading advertising is an issue which is important across the online advertising market. ASA insight suggests that misleading adverts created by or on behalf of legitimate companies, sole traders, and other organisations have been decreasing during the COVID-19 pandemic, whilst there is evidence that paid-for adverts that form part of fraudulent practices have been increasing over this period. Adverts with false claims, such as those about masks used to protect against COVID-19 or those spreading misinformation about vaccinations, are two examples. Such false information can often be placed alongside news articles.

Whilst there are no laws on misinformation, and the government does not wish to undermine freedom of speech, there are harms that arise from spreading misinformation. The World Health Organisation (WHO) has previously explained how misinformation jeopardises measures to control the pandemic, ultimately costing lives.

Non-identified adverts

All advertising content must be obviously identifiable as such, and must be actively disclosed as advertising in scenarios where it is unclear to the audience that they are being advertised to (sold to, in effect). This principle operates across all advertising media and is grounded in consumer law, where Trading Standards and the CMA also have powers to act accordingly. Everyone involved in the supply chain has responsibility for clear ad disclosure. The ASA has a series of public rulings and public guidance, drawing out lessons from those rulings, that sets out what effective disclosure should look like.

The DCMS Select Committee inquiry into influencer culture has been examining the role of influencers and has raised concerns in relation to the disclosure of influencer advertising. The inquiry is exploring if regulation of advertising is sufficient to ensure it is clear what paid-for content is, and will consider the ability of current regulation to deal with this issue. As with other advertising content, influencers are required to disclose when they are advertising in scenarios where that fact is unclear to consumers.

Regulatory bodies such as the ASA have raised concerns. In a limited 2020 study focused on influencers who had previously been contacted regarding non-disclosure of advertising, the ASA found that only 35% of subsequent Instagram posts from 122 influencers included the appropriate disclosures.

Failure to disclose this information can be harmful to users: it can give a false impression of the effectiveness of the services or products being advertised, and it discourages users from thinking critically about what they are seeing.

Legal but harmful content

This category is more complex given the wide range of content it can encompass, ranging from content associated with body image to adverts which may be considered offensive.

In line with the UK Code of Non-Broadcast Advertising and Direct & Promotional Marketing (the ‘CAP Code’), the ASA administers an overarching rule that marketing communications ‘must be prepared with a sense of responsibility to consumers and to society’. This and numerous other rules in the CAP Code are designed to mitigate the potential for harm to arise from a wide range of marketing practices, with the objective of preventing harmful adverts from being published and, if they come to market, providing the basis for ASA enforcement action.

The ASA uses a range of intelligence-led, proactive and tech-based monitoring activities to identify non-compliant adverts online, in addition to acting on complaints raised by members of the public, companies and other organisations. The CAP Code also includes rules designed to mitigate the potential for harm to arise from adverts for particular categories of products, including alcohol; health treatments and cosmetic surgeries; body image; gambling; e-cigarettes; foods and soft drinks high in fat, salt or sugar; and motoring. An illustrative example in relation to body image is included at annex C.

Legal but harmful content is spread across the internet, with each group of people responding differently to the types of content presented. Individuals may see adverts because they are targeted in line with their personal data or browsing history, or because the advert is placed next to content that is relevant to the advert and therefore likely to be of interest to the audience.

A key focus has been on protecting children to ensure that adverts do not result in mental or physical harm, including in high-profile areas such as gambling advertising, which – under the CAP Code – attracts specific content and targeting restrictions to prevent ads from exploiting vulnerabilities. However, we know that there are other vulnerable groups of people, such as elderly people, or those suffering with addiction or mental health issues, where further action may be warranted beyond the rules in the CAP Code and specific legal powers available to statutory regulators which maintain formal working arrangements with the ASA.

Recent steps have been taken to improve consumer controls, such as privacy measures to turn off trackers. However, as is the case generally, online users have had limited ability to determine the adverts they are shown. Consumer action to filter which adverts are seen takes effort that is often outweighed by the benefits of accessing content immediately. The exception may be those that have experienced harm previously and have taken steps to report it. This overall lack of transparency and consumer ability to influence the market results in reduced incentives to address harmful adverts.

Consultation question 7

Do you agree that our above description of the harms faced by consumers or society cover the main harms that can be caused or exacerbated by the content of online advertising?

a) Yes
b) No
c) Don’t know

Please explain your answer, including any harms that are not covered in our description. This may include any evidence you can provide on the frequency and severity of the harms, trend data, and/or impacts on protected groups.

3.3.2 Harms caused by the placement and targeting of adverts

Placement of advertising

As well as rules for the content of advertising, the ASA applies rules on the media placement and audience targeting of advertising. These rules assign primary responsibility for compliance to advertisers, on the basis that advertisers exercise primary control over the content, media placement and audience targeting of their ads.

The ASA also places responsibility on others involved in preparing or publishing marketing communications, such as agencies, publishers and other service suppliers to abide by the CAP Code. This secondary responsibility recognises that whilst parties involved in preparing or publishing ads have a role to play in tackling irresponsible ads, there are limited circumstances in which online service providers are held by the ASA to exercise primary control over the creative content and audience targeting of adverts.

For broadcast media, where responsibility for compliance ultimately falls to the broadcaster, advertisers and broadcasters need to ensure age-restricted ads do not appear adjacent to programmes commissioned for, principally directed at or likely to appeal particularly to audiences below the age of 18.

For non-broadcast media, where a range of audience measurement metrics apply, the ASA applies rules that prohibit age-restricted ads from being directed at people under 18 through the selection of media or the context in which they appear, with clarification that no medium should be used to display these ads where more than 25% of the audience is measured to be under 18. This also extends to influencers online where their following may be made up of a high percentage of children. Concerns have been raised about the appropriate use and effectiveness of age-gating technologies online and the ability of existing audience measurement tools to accurately ensure age-restricted ads are not placed against online content that is likely to attract a disproportionately high child audience. Concerns have also been raised about exposure: adverts for restricted products may reach large numbers of children, even where 25% or more of the audience is not under 18, or where ads are not intentionally targeted at children.
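The 25% audience test described above amounts to a simple threshold check. The following is an illustrative sketch only, under the stated CAP Code rule; the function name and inputs are hypothetical and do not represent any regulator’s or measurement provider’s actual tooling.

```python
# CAP Code rule of thumb: age-restricted ads must not appear in media where
# more than 25% of the measured audience is under 18.
CAP_UNDER_18_SHARE_LIMIT = 0.25

def may_carry_age_restricted_ads(under_18_audience: int, total_audience: int) -> bool:
    """Return True if the measured under-18 share is at or below the 25% limit."""
    if total_audience <= 0:
        raise ValueError("total audience must be positive")
    return (under_18_audience / total_audience) <= CAP_UNDER_18_SHARE_LIMIT

# Hypothetical figures: a site measured at 300,000 under-18s out of a
# 1,000,000 audience (30%) fails the test; 200,000 out of 1,000,000 (20%) passes.
```

The concerns quoted above about exposure remain even where this check passes: a medium with a 20% under-18 share can still reach a very large absolute number of children.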

From an advertiser’s perspective, responsible brands want to associate with content that embodies their values, and to ensure their adverts are not served to an unintended audience or placed against distasteful content that jeopardises their brand safety. The Conscious Advertising Network (CAN) has a mission to protect brand safety by stopping advertising abuse and ensuring the supply chain uses good practice.[footnote 17] CAN states that advertisers have a role in defunding disinformation or misinformation, and that they can use their market power to make platforms and intermediaries more accountable for the non-advertising content they host.[footnote 18]

Targeting

Online targeting can involve the use of data to personalise advertising to individuals who are more likely to be interested in the products and services being marketed. This can be achieved through use of an individual’s personal data, contextual data relating to their browsing preferences, or both. Online advertising that is targeted at groups that may be vulnerable because of certain characteristics or circumstances needs particular care so as not to exacerbate the degree of harm that may be experienced by those groups. For example, a gambling advert is more likely to be harmful when targeted towards individuals whose data indicates a history of addiction.

Technological innovation enables advertisers to target consumers with a high degree of specificity, for example, by perceived interests, age, salary or gender. Some of the inferred attributes used to target or segment consumers have historically been based on detailed and potentially intrusive perceived characteristics, and it is unlikely that this would always have been apparent or well understood by consumers. For example, there are well-known examples of retailers having targeted consumers based on an assumption of pregnancy inferred from their browsing behaviour.

In addition, where adverts are being directed at vulnerable groups, it follows that those outside those groups may not be exposed to the same advertising. Online advertising is also transitory in nature: the same advert may not appear against the same content each time an individual visits the same site or piece of content. This makes it more difficult for regulators (and wider society) to maintain oversight of who is seeing what advertising: some harmful adverts may not be spotted or reported because of their transitory nature, while others may have been seen but cannot be found again. For example, whilst the ASA is able to deal with the content, media placement and audience targeting of personalised adverts, it will likely defer to the ICO on the more fundamental issue of the legal bases on which data has been collated and processed to inform the targeting of adverts.

A key issue is educating consumers around targeting practices and how they can effectively manage these services. Consumer fatigue with consent requirements, in addition to a general lack of transparency on the use of data, can create an environment for the harmful misuse of targeting. In addition, the use of automated and complex technologies contributes to a lack of transparency and accountability in relation to the harms associated with targeted advertising.

The taxonomy sets out three categories of harm in relation to ad targeting:

  • Mis-targeting: age-restricted ads served to inappropriate audiences, such as those for alcohol or gambling delivered to media disproportionately popular with children.
  • Discriminatory targeting: discrimination on the basis of protected characteristics, where the discrimination causes harm.
  • Targeting of vulnerable people: targeting vulnerable audiences directly or by proxy, such as gambling ads targeting individuals who have taken steps to stop receiving targeted gambling marketing communications, or fraudulent ads targeting those with poor financial management records.

There is evidence to suggest that the targeting of certain adverts, including for specific types of products and services, can be harmful to certain audiences. The charity Alcohol Change found that targeting (either purposefully or inadvertently) online advertising for alcoholic drinks towards those considered vulnerable can increase harm. Its research suggests that alcohol advertising is linked to children drinking at an earlier age and in a riskier way. It also surveyed a number of young people and found that 82% recalled seeing an advert for alcohol in the last month, with 13% of 11-19 year olds engaging with these adverts.

The Centre for Data Ethics and Innovation (CDEI) published a report in 2020 on targeting which cited concerns around the impact of targeting on mental health and claimed that it may be a factor in ‘internet addiction’ as well as contributing to societal issues such as polarised political views or radicalisation.

Targeting can be a double-edged sword. It brings the benefit of more relevant advertising and is a valuable tool for protecting children and vulnerable audiences from inappropriate adverts, by screening out certain groups so that such adverts are targeted away from the most vulnerable. But it also carries risks, particularly for children and vulnerable groups. Consumers have reported that they sometimes feel they are being followed around the internet and are unsure how their data is being used.

The increased accessibility of data use and targeting technologies also means criminals have an easy and anonymous tool to target specific sections of society, including vulnerable members, in order to perpetuate fraud. Criminal advertisers can potentially use data collected on users, such as demographic information, buying history and digital behaviours, to determine the ideal target audience. Data targeting means criminals can remotely carve out exactly which sections of society to expose to fraudulent advertising. Those in vulnerable positions may be hand-picked by scammers wishing to prey on their vulnerabilities.

There are regulations around targeting in certain areas. For example, the ASA applies rules that relate to a wide number of products and services linked to a variety of public health issues. Those rules operate with a focus on audience and placement restrictions, in order to minimise under-18s’ exposure to such ads; and creative content restrictions, to ensure that where under 18s do see these ads, the content does not appeal to them particularly. The ASA provides training and advice to target industries to help ensure ads that come to market comply with the rules.[footnote 19] The ASA is also currently reviewing guidance around age-restricted ads online to see whether it can be strengthened.

In relation to gambling adverts, the Gambling Commission operates concurrent rules requiring that gambling adverts must not be targeted at children or vulnerable people. There have been some developments in this area by the Gambling Commission (see case study below).

Case study

In 2020, the Gambling Commission issued an ‘AdTech Challenge’ to the industry to develop new measures to reduce the amount of advertising seen by children, young people and vulnerable adults online. The outcome of this challenge has led to a number of important commitments in the most recent version of the industry’s Code for Socially Responsible Advertising, including:

  • a requirement to target paid-for social media advertising to a 25+ audience only;
  • a requirement that operators will exclude individuals identified as being at risk of harm, or who have self-excluded from their paid-for social media campaigns; and
  • the adoption of a ‘negative keyword’ list to prevent adverts from appearing on keyword searches which indicate vulnerability (e.g. ‘how to stop gambling’).
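As an illustration only, the ‘negative keyword’ commitment in the case study above could be sketched as a simple filter. The keywords and function names here are hypothetical; real implementations sit inside ad platforms’ keyword-matching systems and are far more sophisticated.

```python
# Hypothetical 'negative keyword' list, as described in the industry Code
# for Socially Responsible Advertising: suppress gambling ads on search
# queries that indicate vulnerability. Phrases are illustrative only.
NEGATIVE_KEYWORDS = {
    "how to stop gambling",
    "gambling addiction help",
    "self exclusion",
}

def should_serve_gambling_ad(search_query: str) -> bool:
    """Return False when the query contains a phrase indicating vulnerability."""
    query = search_query.lower()
    return not any(phrase in query for phrase in NEGATIVE_KEYWORDS)

print(should_serve_gambling_ad("How to stop gambling"))   # suppressed
print(should_serve_gambling_ad("premier league odds"))    # served
```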

The gambling industry trade and standards body, the Betting and Gaming Council, has also launched an ‘Ad Tech Forum’ bringing together gambling operators, advertising trade bodies and major tech platforms to drive future improvements. The government is also looking at the rules around - and impacts of - gambling advertising, wherever it appears, as part of its Review of the Gambling Act. The government remains alive to the disproportionate effects that exposure to advertising can have on those at risk of gambling-related harms and is committed to ruling out aggressive practices.

Industry has also been making some changes. For example, as mentioned previously, the Meta group has recently announced that it will be removing some of its targeting services (such as political affiliation, religion and sexual orientation). These sit alongside changes already outlined, such as the phasing out of cookies on Chrome and Apple’s new privacy default.

There are calls from consumer groups to strengthen existing rules to deal with the targeting and placement of alcohol adverts, and for advertisers of alcohol to be more transparent and to be held accountable when it comes to their campaign planning.

The CDEI’s review into online targeting suggested that platforms be required to host publicly accessible archives for online “opportunity” advertising (jobs, credit and housing) and adverts for age-restricted products, to enable greater scrutiny about how these adverts are targeted. They also recommended that the government consider how best to empower consumers and give them control over their data and how they are targeted. CDEI explained that this would need to be underpinned with an empowered regulatory framework which can deliver on their recommendations.

Consultation question 8

Do you agree that the above description of the harms faced by consumers or society covers the main harms that can be caused or exacerbated by the placement or targeting of online advertising?

a) Yes
b) No
c) Don’t know

Please explain your answer, including any harms that are not covered in our description. This may include any evidence you can provide on the frequency and severity of the harms, trend data, and/or impacts on protected groups.

3.3.3 Industry harms

The advertising industry is also exposed to forms of harm. Many of the harms outlined here also have manifestations for consumers. For instance, mis-targeting can have negative consequences for advertisers’ or publishers’ brand safety, but also for the consumers who were exposed to the mis-targeted adverts. It is important to recognise that in many cases the advertiser will employ agencies who have media buying power. This can often mean that the advertiser does not directly interact with the wider supply chain and does not always have full sight or control of where their adverts are placed. In this respect, agencies have a key role to play in protecting their clients (advertisers) from these harms. This section outlines some of the core harms to industry.

Brand safety

The Internet Advertising Bureau (IAB UK) defines brand safety as ‘keeping a brand’s reputation safe when advertising online’. Brands want to avoid being placed next to inappropriate content. The content and context of the advert is key to the success of the brand’s campaign. Advertisers worry that consumers will not want to associate with a company seen as ethically questionable, and therefore need to protect their reputation and trust that their brand will be safe in terms of where it appears online. It is important to note that where advertising is placed alongside harmful editorial content, as well as harming brands it can also monetise content that is directly harmful to consumers and society as a whole - for example, where that content disseminates misinformation or disinformation. Conversely, the brand safety of legitimate publishers is at risk from the placement of harmful or inappropriate advertising.

Platforms and intermediaries, in turn, have a responsibility to ensure that their customers (advertisers) are being kept safe. It is important to note that each brand will have its own definition of what being ‘safe’ means to them. For example, an alcohol or tobacco manufacturer will have very different preferences from a children’s retailer. Advertisers need to seek information from intermediaries and platforms in order to understand who has seen their adverts, and it is unclear whether the level of transparency provided allows brands to use this information in a meaningful way.

Publishers and platforms with a reputation for keeping brands safe will attract reputable brands and continue to invest in doing so, though their reputation for brand safety may also to some extent be reliant on the transparency of the supply chain. By verifying the content, including other adverts on their platforms, these actors can go some way to protect current and future advertising customers. Other advertising intermediaries may attract a clientele who are less concerned about brand safety, or who may have an active interest in their advertising being placed against certain, less safe, content. Advertisers have argued there is not enough transparency across the supply chain to enable them to keep a clear sight of how, and to whom, their advertising is being disseminated.

Ad fraud

Generally the term ‘ad fraud’ refers to fraudulently representing advertising services that are either not delivered or delivered incorrectly. Essentially, it means that the advert is not properly delivered to the intended audience or location.

The most common type of ad fraud is the use of illegal bots, installed on consumers’ computers without their knowledge, which then work in the background to generate artificial traffic in the form of false impressions, clicks and so on. In other cases, intermediaries pass off low ‘cost per mille (thousand)’ (CPM) impressions (e.g. mobile display) as high CPM impressions (e.g. video). As a result, key performance indicators reported to advertisers may overstate the number of human impressions or misrepresent the value of impressions. Advertisers have asked for more visibility and transparency regarding the impressions received for their campaigns, and for measurement of ad fraud, but to date this has not been widely introduced.
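The format-misrepresentation problem above can be illustrated with a minimal reconciliation sketch: comparing the CPM an advertiser was billed against a rate card for the format actually delivered. All names, rates and thresholds here are hypothetical; real ad-fraud detection relies on verified delivery logs and far richer signals.

```python
# Illustrative rate card: typical CPM by delivered format (made-up figures)
RATE_CARD_CPM = {"mobile_display": 1.50, "video": 12.00}

def flag_suspect_line_items(line_items: list) -> list:
    """Flag line items whose billed CPM greatly exceeds the rate for the
    format that was actually delivered - e.g. mobile display billed at
    video prices. The 2x threshold is an arbitrary illustration."""
    suspect = []
    for item in line_items:
        expected = RATE_CARD_CPM.get(item["delivered_format"])
        if expected is not None and item["billed_cpm"] > 2 * expected:
            suspect.append(item)
    return suspect

campaign = [
    {"delivered_format": "mobile_display", "billed_cpm": 12.0},  # billed as video
    {"delivered_format": "video", "billed_cpm": 12.0},           # consistent
]
print(flag_suspect_line_items(campaign))
```

This kind of reconciliation depends on exactly the supply-chain transparency that, as the section notes, advertisers say they currently lack.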

Audience measurement

Understanding how well a campaign is reaching specific audiences is integral to the success of advertising. Ensuring that a campaign’s content is reaching the intended consumers (and not different consumers or illegal bots), and that this data is being captured and fed back, is a key aspect for any marketing strategy.

Limited transparency between the advertiser and the other parties in the supply chain is contributing to concerns over the harm that can be caused through inaccurate audience measurement. Where data is available, it is not always available in a standardised manner which is directly comparable across different intermediaries and platforms. If the advert reaches an unintended audience, it can cause harm to the advertiser, despite them not controlling the decision-making process that dictates which individuals are served adverts.

Advertisers need to trust that the technologies responsible for serving the adverts are accurate so that they do not become liable for a breach. More reliable and trustworthy audience measurement also brings benefits to publishers seeking to optimise their advertising revenue.

Inaccurate or unintended targeting

Advertisers have stated that harms related to the placement of advertising can be both intentional and unintentional, arising from market effects, machine learning, AI, algorithms or a combination of these effects and technologies. The targeted nature of online adverts sets this medium apart from advertising across legacy media. Mis-placement or mis-targeting, whether deliberate or inadvertent, can be a key driver of consumer harm, as well as advertiser harm, particularly when ads for potentially harmful products or services are targeted at vulnerable consumers. For example, people addicted to gambling may be served gambling adverts because their online profiles are linked to an interest in gambling products or services.

Platforms offer options for removing specific adverts and for updating user preferences based on individual circumstances. For example, Google has an advert personalisation service that users can use to limit the personalisation of advertising to them. Apple has introduced a similar initiative, increasing transparency about why particular ads are served to individuals and how it respects individual privacy. Part of Apple’s work is explaining to consumers how to turn these services off, and that doing so will not reduce the overall number of ads a consumer views, but will reduce the number of targeted ads they are served. However, the extent to which these services are known and understood by consumers is unknown, and consumers must actively take steps to control the level of targeting they are exposed to.

Consultation question 9

Do you agree with our description of the range of industry harms that can be caused by online advertising?

a) Yes
b) No
c) Don’t know

Please explain your answer, including any harms that are not covered in our description. This may include any evidence you can provide on the frequency and severity of the harms, or trend data.

4. The current self-regulatory framework

The current self-regulatory framework for online advertising (covering advertising content, media placement and audience targeting) primarily applies to advertisers, with secondary responsibility assigned to publishers, agencies, platforms and other intermediaries. This framework has a number of layers, with legal requirements on advertising applying both to specific product types and to marketing disciplines in general. These are incorporated into the UK Code of Non-Broadcast Advertising and Direct & Promotional Marketing (the ‘CAP Code’), which is administered by the independent Advertising Standards Authority (ASA). Alongside this framework for advertisers, there is a suite of industry standards that address other issues in online advertising and apply to other actors in the advertising supply chain.

It is important to recognise that advertising specific regulation and standards sit alongside a wider set of legislation and regulation. The most relevant to the online advertising market include consumer, competition and data protection rules.

4.1 The Advertising Standards Authority (ASA)

Since its inception in the 1960s, the content, media placement and audience targeting of adverts in the UK has been regulated by the independent ASA. The code-writing and arm’s-length financing elements of the ASA system are undertaken by self-regulatory bodies: the Committee of Advertising Practice (CAP) and the Advertising Standards Board of Finance (Asbof) respectively, working in co-regulation with Ofcom in relation to advertising on broadcast services, video-sharing platforms and on-demand media service providers.

The ASA maintains formal working arrangements with a range of statutory backstop regulators operating in certain discrete areas, such as the Gambling Commission for gambling services/products adverts, and Trading Standards for misleading advertising.

The ASA is responsible for the day-to-day enforcement of its two advertising codes: the CAP Code for non-broadcast advertising and the BCAP Code for broadcast advertising.

Adverts that appear on TV and radio services are pre-cleared before broadcast by two pre-clearance centres: Clearcast for television commercials and Radiocentre for radio adverts. Because this frontloads the system and ensures effective due diligence before an advert is visible to the public, broadcasters rarely face incidents of harmful advertising airing. Compliance with both of the ASA’s Codes is mandatory: advertisers, agencies and others that are not members of the trade associations and professional bodies that make up CAP and BCAP are still regulated by the ASA.

The Committee of Advertising Practice (CAP) is the industry committee responsible for writing and maintaining the CAP Code. CAP’s members include organisations that represent the advertising, sales promotion, direct marketing and media businesses, including online platforms. The CAP Code primarily applies to advertisers because they exercise primary control over the content, media placement and audience targeting of their adverts through their advertising and buying strategies (with help from the services of creative and media buying agencies).

Under Rule 1.8 of the CAP Code, an obligation is also placed on others involved in preparing or publishing marketing communications, such as agencies, publishers and other service suppliers, to abide by the Code. This secondary responsibility recognises that whilst parties involved in preparing or publishing adverts have a role to play in tackling irresponsible adverts, there are limited circumstances in which online service providers are held by the ASA to exercise primary control over the creative content and audience targeting of adverts.

Platforms and other businesses involved in the advertising supply chain work with the ASA in a variety of ways to uphold compliance with the CAP Code and to support the wider self-regulatory system. In practice, their working arrangements with the ASA vary considerably. The relatively longer-established, larger and more popular platforms generally have in place more sophisticated and comprehensive working arrangements with the ASA system, including more human resources dedicated to interacting with bodies exercising a public function, like the ASA.

The ASA’s working relationships with other platforms and networks are generally good, but interactions are typically more piecemeal and limited to tackling incidences of non-compliance that come to light. This inconsistency in working arrangements arises as much through circumstance as through the absence, to date, of a concerted and collective effort to standardise ASA expectations on these businesses.

Examples of how platforms and networks cooperate directly with the ASA and the wider self-regulatory system include (but are not limited to):

  • Advertising credits: providing advertising credits to fund ASA advertising campaigns on the platform’s network to improve awareness of the ASA, and to highlight persistent non-compliance of a website in circumstances where other enforcement is not available to the ASA.
  • Tackling persistent offenders: removing ads by, and sometimes the accounts of, advertisers that persistently refuse to comply with an ASA direction to amend or withdraw advertising found to be in breach.
  • Responding to information requests: providing information such as anonymised data relating to advertisers’ selection of audience targeting options.

You can find more examples of these types of cooperation on the ASA’s website.

4.1.1 Legal framework for online advertising

The ASA, and the self-regulatory system that writes the Codes and finances the UK’s frontline advertising regulation, aim to provide a simpler, more agile way of resolving breaches of restrictions on advertising, rather than pursuing civil litigation or criminal prosecution. The CAP Code has no statutory underpinning, and neither CAP nor the ASA directly interpret or enforce the law. The self-regulatory system therefore operates within an overarching legal framework. The Code complements the law, in some places specifically referring to legislation, and in other places directly reflecting the law. To this end, the CAP Code serves to support marketers’ need to comply with the law. A non-exhaustive list of key legislation affecting marketing communications is available on the ASA’s website.[footnote 21] Like the CAP Code itself, this list does not cover all the legal obligations that marketers should consider, and marketers are currently encouraged to seek legal advice where they are unsure of the obligations placed on them by the law.

The Code reflects provisions in legislation for particular goods and services (such as for medicines or tobacco), as well as being responsive to changes in society that have an advertising regulatory dimension. Areas of the Code are backstopped by other regulators. For example, rules that apply to on-demand providers (appendix 2 of the Code) and to VSP providers (appendix 3 of the Code) are backstopped by Ofcom.

Additionally, Trading Standards are the backstop regulator in the case of misleading, aggressive or otherwise unfair business to consumer non-broadcast advertising. The CMA, along with Trading Standards Services and other designated UK regulators, are able to enforce consumer protection legislation in relation to advertising, including misleading advertisements and other unfair practices by traders involved in online advertising – in particular, under the Consumer Protection from Unfair Trading Regulations 2008 (CPRs).

The CPRs will apply to any person - including advertisers, intermediaries and online platforms - where they act as a trader and are engaged in a commercial practice (as defined broadly by that legislation). Where they engage in a practice that is misleading, or where they otherwise fail to act in a professionally diligent manner, and this materially distorts or is likely to materially distort the economic behaviour of the average consumer with regards to a product, the trader will infringe the CPRs. For example, the CMA has previously taken action to:

  • address concerns that social media influencers were not declaring when they have been paid, or rewarded, to endorse goods or services online (resulting in undertakings from 16 influencers in January 2019).
  • require Facebook to do more to prevent hidden advertising being posted on its Instagram platforms, resulting in undertakings being given to the CMA in October 2020.

Furthermore, the Consumer Protection (Amendment) Regulations 2014 cover the additional private rights granted to consumers to enforce against some breaches of CPRs, especially in relation to misleading actions.

Other areas where a regulatory underpinning exists include (but are not limited to):

  • Gambling advertising As a condition of their licences, gambling operators and their affiliates in the GB market must abide by the advertising codes issued by BCAP and CAP, overseen by the ASA. If there are ads which are found to breach these codes, the ASA can – in cases of serious or repeated offence – refer them to the Gambling Commission or Ofcom (in the case of broadcasters).
    In addition, the gambling industry has its own gambling advertising code – the Gambling Industry Code for Socially Responsible Advertising - which includes additional requirements. The recent Gambling Act Review called for evidence on the potential benefits or harms of allowing licensed gambling operators to advertise, and the effectiveness of existing rules around advertising and marketing, wherever it appears. DCMS is considering the evidence carefully and will publish a White Paper outlining conclusions and next steps in the coming months.
  • Restrictions on advertising of food high in fat, sugar or salt (HFSS) Obesity is one of the largest public health challenges the UK faces. In conjunction with the Department of Health and Social Care, DCMS is introducing restrictions across TV, on-demand programme services and online for the advertising of less-healthy food and drink products. These restrictions are expected to come into force on 1 January 2023. The regulation of these restrictions will fall to Ofcom, which can then appoint a frontline regulator. The intention of these restrictions is to significantly reduce the number of adverts children are exposed to for less-healthy food and drink products.
  • Video-on-demand (VOD) regulation As with other digital platforms, advertising on UK-based on-demand streaming services must conform to the CAP Code, administered by the ASA. In addition, all UK-based On Demand Programme Services (ODPS), more generally referred to as VOD services, have additional rules which are enforceable by Ofcom, set out in Appendix 2 of the CAP Code. The ASA sets out that the marketer, not the media service provider, bears the primary responsibility for ensuring compliance with the CAP Code. However, providers of on-demand services, such as the media intermediary of a marketing communication, also accept an obligation to abide by the Code. CAP has produced dedicated guidance (Advertising in video-on-demand services) on the scope of the application of Appendix 2.
    As part of the new restrictions on food high in fat, sugar and salt currently being introduced by the Health and Care Bill, VOD services under the jurisdiction of the UK, and therefore regulated by Ofcom, are included in the 9pm TV watershed. Other VOD services are subject to the online restriction because they are not regulated by Ofcom. As part of the OAP, we will consider whether wider advertising rules for VOD should align with this model or follow the same standards as those that apply to online adverts.
  • Video-Sharing Platforms (VSPs) In addition to the CAP Code, new regulations that apply to Video-Sharing Platform (VSP) providers came into force in November 2020. Ofcom is responsible for the regulation of UK-established VSPs, including ensuring VSPs meet certain standards around advertising. In December 2021, Ofcom published its Regulatory framework for VSP advertising, including its designation of the ASA as the frontline regulator for VSP advertising and how Ofcom will work with the ASA to ensure these standards are met. It is the government’s intention that the requirements on UK-established VSPs under the VSP regulations are to be superseded by the forthcoming Online Safety Bill (OSB). The OSB will not include new provisions that apply to advertising for VSPs, but VSPs (as a subset of online platforms) will be considered within scope of the OAP.

4.1.2 The ASA’s approach to industry-led regulation

The UK’s advertising self-regulatory system is largely enabled through advertising industry businesses joining CAP member trade bodies, or operating through contractual agreements with media publishers and carriers, where businesses agree to comply with the Code so that marketing communications are legal, decent, honest and truthful.

In relation to its complaints and investigation function, the ASA typically receives complaints via its website, which it is then able to investigate, but it can also launch proactive challenges. It reviews every complaint brought to its attention and will take further action if it considers a likely breach of the Codes has occurred. The ASA is regarded as the “established means” for the investigation and resolution of complaints about misleading, aggressive or otherwise unfair business-to-consumer commercial practices in marketing contrary to the Consumer Protection from Unfair Trading Regulations 2008 (CPRs), and impermissible comparative advertising and misleading business-to-business marketing contrary to the Business Protection from Misleading Marketing Regulations 2008 (BPRs). In addition, under contract with Ofcom, it accepts a formal role as the day-to-day regulator of ads on broadcast services, video-sharing platforms and on-demand media service providers.

More broadly, the ASA does not have a statutory power of investigation, and so is reliant on its own research as well as the voluntary cooperation of other players to provide information. In practice, a significant majority of advertisers do cooperate with the ASA and provide information in defence of a contested breach of the Codes. In addition, the ASA’s approach contrasts with the enforcement of many legal provisions: the ASA reverses the burden of proof when it administers the CAP Code and BCAP Code, with the effect that it is for the advertiser to prove how it has complied with the Code, rather than for the ASA to prosecute a breach. The ASA can and does take action against advertisers who have not voluntarily provided information to it, or even responded at all.

The ASA has specific complaints handling procedures for ads that promote fraudulent activity and, more recently, quick-reporting functions related to COVID-19 and misleading environmental claims. The ASA also increasingly identifies potential breaches of the CAP Code through intelligence-led, proactive tech-based monitoring and its development of data science applications. It has developed a number of such approaches in order to address the challenges presented by the scale and addressable nature of most online paid-for ads.[footnote 22]

The ASA advises that the vast majority of legitimate advertisers adhere to its Codes, and, where assurances are sought from advertisers to amend or withdraw ads found to be in breach, those assurances are generally kept and the sanctions available under the self-regulatory system are avoided. As a result, the ASA reports that it rarely needs to refer a breach to a backstop law enforcement regulator; and of the relatively small number of legitimate advertisers the ASA has referred to Trading Standards, none have related to paid-for online advertising.

The ASA is also committed to working within the boundaries of better regulation principles. This includes seeking to resolve issues informally with advertisers if strict criteria are met, to help meet the public interest of quickly amending or withdrawing ads that are very likely to be a breach of the CAP Code. This is often a preferred alternative to time- and resource-consuming investigations, although those remain an option in the case of repeated or serious misdemeanour.

In the case of non-broadcast advertising, the ASA can require the amendment or withdrawal of an advert that breaks the rules in the CAP Code, and the majority of sanctions available are coordinated through the CAP.[footnote 23]

These sanctions can include:

  • ’Naming and shaming’ of advertisers who have ignored the rules or caused widespread harm. In the case of legitimate operators, this deterrent can be effective in preventing both advertisers and agencies from deviating from the rules administered by the ASA.
  • Denial of media space - through membership of CAP and/or contractual arrangements, publishers and platforms agree to deny media space to an advert that has been found to have breached the CAP Code, and in circumstances where the advertiser refuses to provide an assurance to amend or withdraw the non-compliant ad.
  • Disqualification from awards - advertisers can be disqualified from industry awards so that they, and the agencies creating or placing the advert, are unable to be recognised for their work.
  • Preventing a paid-for advert from appearing in search engine results - by agreement with the search engine.
  • Ad Alerts - CAP can issue Ad Alerts to its members, including the media, advising them to withhold services such as access to advertising space. These can be general (about ads in particular product categories) or specific (about ads placed by specific advertisers).
  • Withdrawal of trading privileges - CAP members can revoke, withdraw or temporarily withhold recognition and trading privileges. For example, the Royal Mail can withdraw its bulk mail discount, which can make running direct marketing campaigns prohibitively expensive.
  • Pre-vetting - persistent or serious offenders can be required to have their marketing material vetted before publication. For example, CAP’s poster industry members can invoke mandatory pre-vetting for advertisers who have broken the CAP Code on grounds of taste and decency or social responsibility. Pre-vetting can last for two years.
  • On-Platform Targeted Ads - ASA adverts that identify parties found to be in breach of the CAP Code for misleading their audience. These are usually focused on would-be customers of businesses who make misleading claims on their own websites, or followers of influencers who are failing to disclose the advertising nature of their posts.

Currently, under the self-regulatory framework, responsibilities for advertising content and placement (apart from video-sharing platforms in the case of applicable advertising standards where they market, sell or arrange the advertising) primarily fall to advertisers.[footnote 24] Secondary responsibility is placed on others involved in preparing or publishing marketing communications, as explained above. However, there are limited circumstances in which online service providers are held by the ASA to exercise primary control over the creative content and audience targeting of ads. Where they do, the ASA is able to apply the Codes where a breach may have occurred.

Nonetheless, in general advertisers rely on other parties in the system to place their advertising. Beyond specifying the types of audience and media they would like the advertising to be targeted at, they do not have full control and are in practice reliant on the wider supply chain to meet their targeting expectations. It is also the case that ‘bad actors’, for example individuals or enterprises who are looking to disseminate illegal or other purposefully harmful content, will not be motivated to comply with the directions of the ASA or the self-regulatory sanctions it may deploy.

Addressing key issues online

In 2018, the ASA published its 2019-2023 strategy, ‘More Impact Online’. As part of this, the ASA has been increasing its efforts to proactively enforce advertising standards. For example, it has invested in new technologies, creating avatars that, to an extent, mimic their respective online audiences. The ASA has therefore been able to gain insight into the type of adverts being served to different age categories of children and to adults. It can use the data gathered by child avatars to identify Code breaches and pursue follow-up enforcement action against advertisers and their agencies where guidance is not being adhered to.[footnote 25]

The ASA and CAP also review and take action across key advertising segments where evidence emerges or public concern builds. As an example, the ASA is currently reviewing advertising standards outlined in its codes on body image. The outcomes from a call for evidence by the ASA in this area will supplement the codes and ensure they are fulfilling their purpose. Whilst the ASA’s codes are not enshrined in law, this review mechanism demonstrates the CAP’s role in mirroring topics of concern within wider society, and in ensuring its codes remain relevant to emerging trends and consumer concern.

Online Platform and Network Standards (OPNS)

The ASA and some of the world’s largest companies in the online advertising supply chain are currently exploring extending the ASA’s online advertising regulatory framework. The aim is to put on a formal footing - and bring consistency to - the ways in which these companies work with the ASA. In particular, such a framework would allow the ASA to hold these companies to account for how they help to promote and secure compliance with the CAP Code online, and is intended to serve as an effective complement to holding advertisers primarily responsible for the creative content and targeting of their ads. Together, the requirements on each represent a natural apportioning of responsibility, with the intention of delivering twin track regulation and strengthening consumer protection online.

We discuss the importance and the effectiveness of the OPNS in determining the approach we take to regulatory reform in the final chapter (chapter 6) of this consultation.

4.1.3 Funding for the ASA

The ASA is primarily funded by a voluntary levy paid by advertisers and collected predominantly through a levy on media buying agencies. This levy of 0.1% on the cost of buying advertising space, and 0.2% on direct mail, funds the ASA’s regulation of advertising. The levy is collected by the Advertising Standards Board of Finance (Asbof) and the Broadcast Advertising Standards Board of Finance (Basbof) at an appropriate arm’s length from the ASA.

In the last few years, the ASA has begun to receive contributions from some other players, including some online platforms such as Google and Meta, via multiple-year funding deals. It also receives a small income from its paid seminars and premium industry advice services.

This current industry-funded model means the ASA is independent from the government, and its regulation is not funded by the taxpayer. We note, and discuss in more detail in chapter 6, that any proposals to grant the ASA further powers, sanctions and capacity would also require additional funding.

4.2 Wider regulation applying to online advertising

In addition to the regulation of online advertising through the ASA and complementary industry initiatives, a number of further regulations apply to advertising; most are broader digital regulations that sit alongside advertising-specific rules.

4.2.1 Data protection standards

The Information Commissioner’s Office (ICO) regulates compliance with relevant data protection legislation. The Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR) provide rules about ‘sending marketing and advertising by electronic means, such as by telephone, fax, email, text and picture or video message, or by using an automated calling system. PECR also includes other rules relating to cookies, telephone directories, traffic data, location data and security breaches’.

The Data Protection Act 2018 (DPA) changed the rules around direct marketing, alongside the requirements of the GDPR. The DPA restricts the way organisations can carry out unsolicited direct marketing, i.e. marketing which has not been opted into or asked for. If a breach is found, the ICO will request that the company halts and removes its campaign within 28 days. However, due to the nature of online programmatic trading desks, this can often be difficult to implement. For the most serious breaches of the GDPR there are fines of up to £20 million or 4% of annual turnover, whichever is higher.[footnote 26] In relation to the OAP and consumer harms, there is concern that inadequate or outdated cookie laws give greater prominence to the legal framework underlying ‘targeting’-based harms.

The standards set by the ICO, and its ongoing work on ad tech, are detailed below.

  • ICO - Privacy Standards. On 25 November 2021, the ICO set out clear data protection standards in order for companies to safeguard consumers’ privacy online. Part of these standards outlines that companies should be able to justify that the use of personal data for online advertising is fair, proportionate and necessary. There is also a requirement for companies to provide accessible, clear information for consumers on how and why their information is being used, and to ensure choices and default settings are presented in a way that means users can make decisions in their interests. These measures will hold those in the supply chain (particularly ad tech companies) accountable for their data gathering and will empower consumers to exercise control over their information rights.
  • ICO - Adtech Investigation. Following a pause in order to tackle the issues associated with the pandemic, the ICO has picked up its work on investigating ad tech. The ICO is focused on the complexity and scale of real-time bidding (RTB) systems. RTB systems currently use people’s personal data, and the ICO wants to ensure that these systems are transparent, comply with other data protection principles (e.g. fairness and lawfulness) and obtain explicit consent from the consumer, which is not currently the case.
  • ICO - Age Appropriate Design Code. This code sets out 15 standards of age appropriate design reflecting a risk-based approach. The aim of the code is to provide default settings which ensure children have the best access to online services, whilst also minimising the use and collection of their data. This code also goes further than other data protection guidelines by ensuring that should the child change their default settings, they are given the proper information and guidance before committing to doing so. The ICO is responsible for enforcing this code, and sanctions for breaches can include warnings, ‘stop now’ orders and fines.
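To illustrate why RTB attracts this data protection scrutiny, the following is a minimal, hypothetical sketch of the kind of bid request broadcast to many participants during a real-time auction. Field names loosely follow the public OpenRTB convention; the values and the helper function are invented for illustration and do not come from this consultation:

```python
# Illustrative sketch only: a simplified RTB-style bid request showing
# the categories of personal data an auction can expose to many
# supply-chain participants, contrasted with a contextual-only version.

def strip_personal_data(bid_request):
    """Return a copy of the request with user-level fields removed,
    illustrating contextual targeting versus personal-data targeting."""
    redacted = dict(bid_request)
    for key in ("user", "device"):
        redacted.pop(key, None)
    return redacted

bid_request = {
    "id": "auction-123",
    "imp": [{"id": "1", "banner": {"w": 300, "h": 250}, "bidfloor": 0.5}],
    "site": {"domain": "news.example", "page": "https://news.example/article"},
    "device": {"ip": "203.0.113.7", "geo": {"country": "GBR"}},   # personal data
    "user": {"id": "cookie-abc", "yob": 1990, "gender": "F"},     # personal data
}

contextual_only = strip_personal_data(bid_request)
```

The contrast between the two dictionaries is the crux of the ICO’s concern: the contextual version still supports an auction (page, ad slot, floor price), while the full version additionally circulates identifiers and demographics about the individual.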

4.2.2 Financial promotions

In addition to the ASA, the Financial Conduct Authority (FCA) sets rules that authorised financial services firms must comply with when communicating or approving financial promotions, such as the requirement that they must be fair, clear and not misleading. The FCA and ASA work closely together to ensure that the most appropriate body is addressing complaints and taking action.

In the case that the FCA finds an advert that does not meet its rules, there are numerous steps it can take. Depending on the severity of the breach, it can request the firm which has communicated or approved the advert to withdraw it or amend it so that it complies with the FCA’s requirements. In the most serious cases, it can direct a firm to withdraw its financial promotion. The FCA may also ask firms to consider whether any customers may have acted on the non-compliant promotions and to take appropriate action to remedy any harm which consumers have suffered as a result. Lastly, the FCA can launch an enforcement investigation which may lead to a sanction, such as a financial penalty.

Unauthorised firms that wish to communicate an invitation or inducement to engage in a financial product or service (a financial promotion) are subject to the financial promotion restriction set out in the Financial Services and Markets Act 2000 (FSMA). This restriction is broad in scope and provides that a person must not, in the course of business, communicate an invitation or inducement to engage in investment activity or claims management activity unless they are an authorised person, the content of the communication has been approved by an authorised person, or an exemption applies. Communicating a financial promotion in breach of the restriction in FSMA is a criminal offence.

The FCA has powers to bring civil or criminal proceedings to stop the illegal promotion and obtain restitution on behalf of consumers who have lost monies as a result of relying on the illegal financial promotion. The FCA also issues alerts on its website and requests for websites and social media accounts which contain illegal financial promotions to be removed where appropriate.

As part of the UK’s departure from the EU, the UK government removed an exemption to the restriction on communicating financial promotions in FSMA for incoming electronic communications from the EU. The FCA has indicated that it has been considering the implications of this change in terms of the application of the financial promotion restriction to online platforms.

In this context, Google has put in place a new financial services verification policy to ensure financial promotions hosted on its website are only made by firms authorised by the UK financial services regulators. Other tech firms (including Microsoft, Meta and Twitter) have announced they intend to put in place similar policies in due course.

4.3 Industry initiatives

In addition to the role of the ASA, there are a range of self-regulatory and wider industry initiatives which we would like to highlight as constructive means of addressing harms and drivers of harm in online advertising. In highlighting industry approaches we have focused on initiatives that bring industry together to address drivers of harm - rather than the steps taken by individual businesses. We recognise that not all players have equal capacity and resources to act and do not have the same market power, either socially or financially.

The section below summarises good practice that has been implemented across parts of the online advertising supply chain and what it aims to address in the online ecosystem. The 2020 Plum report also looked at a range of industry and regulatory initiatives that have been contributing to the mitigation of harmful issues such as inappropriate ad content, inappropriate ad targeting, ad fraud and brand safety risk.[footnote 27]

We welcome views from respondents on whether this section provides an overview of the main industry initiatives, as well as their efficacy in addressing the issues they are designed to address.

Verification requirements

Some players within the online advertising ecosystem require advertisers to be verified with relevant authorities, and some organisations are making advances in this area. For example, Google has recently changed its online advertiser onboarding process, introducing an advertiser identity verification program that requires advertisers (organisations and individuals) to provide documentation proving their identity. It has also amended its advertising policy on financial services to allow ads that relate to financial services or products only if the person making the ad is authorised by the FCA or if the ad has been approved by an authorised person.

Members of the Online Fraud Steering Group (OFSG) have also revised their advertiser onboarding processes, requiring anyone who wishes to publish an advert relating to UK-regulated financial services to be authorised by the FCA. For example, Amazon Ads requires in-scope advertisers to confirm they are authorised by the FCA (or qualify for an appropriate exemption).

Identity transparency in the supply chain

Third-party intermediaries have developed a range of initiatives that use publicly accessible files or ledgers to verify key players (e.g. exchanges, final sellers of an ad impression, SSPs and advertisers) and to prevent malvertising and bad actors posing as legitimate brands. Through tools such as Ads.txt, Sellers.json (with the Supply Chain Object) and Buyers.json (with the Demand Chain Object), advertisers are able to trace the path their adverts take through the supply chain.

The aim of these services is to increase transparency and demonstrate which intermediaries an advert has passed through. The government recognises the progress and promise of these initiatives in increasing accountability and transparency across the supply chain. We also commend and support campaigns to make the adoption of these tools and initiatives more commonplace across the ecosystem - recognising they are voluntary measures that actors in the supply chain can choose to be a part of.
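As a concrete illustration of one of these tools, ads.txt is a plain text file a publisher hosts on its own domain, listing which ad systems are authorised to sell its inventory. A minimal parser for its comma-separated records might look like the sketch below; the record layout follows the public IAB ads.txt format, but the sample data and function name are invented:

```python
# Sketch of parsing an ads.txt file. Each record lists an ad system
# domain, the publisher's account ID on that system, the relationship
# type (DIRECT or RESELLER), and an optional certification authority ID.

def parse_ads_txt(text):
    records = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line or "=" in line:           # skip variables like contact=
            continue
        fields = [f.strip() for f in line.split(",")]
        if len(fields) >= 3:
            records.append({
                "ad_system": fields[0],
                "seller_account_id": fields[1],
                "relationship": fields[2].upper(),
                "cert_authority_id": fields[3] if len(fields) > 3 else None,
            })
    return records

sample = """
# ads.txt for news.example (invented sample)
adexchange.example, pub-1234, DIRECT, abc123
reseller.example, 5678, RESELLER
contact=adops@news.example
"""

records = parse_ads_txt(sample)
direct_sellers = [r for r in records if r["relationship"] == "DIRECT"]
```

A buyer can compare the seller identifiers in a bid response against this list: if the seller is not declared, the inventory may be misrepresented, which is the class of fraud these files are designed to surface.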

Industry standards and examples of best practice

There are a number of industry standards and best practices designed to reduce some of the issues that arise from the opacity of the supply chain such as malware, ad fraud, brand safety and ad misplacement. These include the Internet Advertising Bureau’s (IAB UK) Gold Standard, and the Trustworthy Accountability Group’s (TAG) certification programmes, including Certified Against Malware, Certified Against Fraud and Brand Safety Certified. The government supports and encourages the adoption of these standards and looks to support further development of standards that aim to address how supply chain opacity can contribute to consumer harms.

In addition, we recognise and encourage those firms that are members of groups aimed at addressing harms associated with online advertising, such as TAG, the Online Fraud Steering Group (OFSG) and Stop Scams UK. The government is keen to see these collaborative initiatives and standards adopted and enforced as widely as possible. A possible measure, building on these initiatives, could be for platforms and publishers only to accept advertisers which have these memberships, in order for these to become an industry standard.

Global Alliance for Responsible Media

The Global Alliance for Responsible Media (GARM) is a cross-industry initiative established by the World Federation of Advertisers to address the challenge of harmful content on digital media platforms and its monetisation via advertising. GARM is seeking to provide a universal definition for harmful online content, arguing that this issue cannot be meaningfully discussed unless the language around it is consistent and understandable. It is taking steps to create a safer digital media environment by preventing the monetisation of harmful online content. In addition to this, GARM wants to create transparency for industry on where sensitive information is likely to be displayed so that marketers and advertisers can protect consumer safety and act responsibly.

Technological and intelligence-led tools

A number of firms use machine learning and artificial intelligence to proactively review the content of adverts and test for threats, malware and concerning patterns. These technological and intelligence-led solutions are sometimes combined with human intelligence, which can be effective in identifying and preventing harms. The government supports such approaches to sophisticated technological problems[footnote 28] and we are keen to consult on how to further support the development and industry-wide use of more of these initiatives, while recognising the different circumstances and capacities of different firms.
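To make the idea of proactive, automated review concrete, the toy sketch below flags ad text that matches simple high-risk patterns so a human reviewer can triage it. None of the patterns, labels or thresholds come from this consultation or any named firm; real systems combine trained classifiers with signals well beyond ad text:

```python
import re

# Toy illustration of automated ad pre-screening: flag creatives whose
# text matches simple high-risk patterns for human review. The patterns
# and labels here are invented for illustration only.

RISK_PATTERNS = {
    "guaranteed_returns": re.compile(r"guaranteed\s+returns?", re.IGNORECASE),
    "miracle_cure": re.compile(r"miracle\s+(cure|pill)", re.IGNORECASE),
    "urgency_pressure": re.compile(r"(act\s+now|limited\s+time)", re.IGNORECASE),
}

def screen_ad(text):
    """Return the list of risk labels triggered by an ad's text."""
    return [label for label, pattern in RISK_PATTERNS.items()
            if pattern.search(text)]

flags = screen_ad("Act now for GUARANTEED returns on your investment!")
```

The value of even a crude filter like this is triage: it concentrates scarce human reviewer attention on the small fraction of creatives most likely to breach the rules.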

Project Origin

Project Origin is a cross-media audience measurement programme designed by ISBA. The aim of Project Origin is to address the need of advertisers to plan campaigns across digital and broadcast - and is born out of the lack of an available, standardised approach to measuring data. ISBA has been working globally with the World Federation of Advertisers to convene parties across the advertising supply chain to meet the needs of advertisers. It is based on the four principles identified by advertisers from the buyers’ perspective: Governance, Standards and Metrics, Privacy, and Technical Infrastructure Pipework.

4.3.1 Consumer tools and campaigns

Consumer reporting processes and reactive takedowns

The ASA, platforms and publishers have a range of consumer reporting processes in place, with consumer groups sharing survey results that demonstrate the awareness of these reporting processes and the rate of takedown. The extent to which these services are effective is relatively unclear. Consumers need to feel that their complaints will be acted upon. We also know that due to the complexity of the programmatic advertising space, it is often hard to retrospectively track down ‘bad’ adverts, or to ascertain who saw what and when. This has consequences for enforcement and consumer trust in this process. The government recognises the efforts and advances in this space and would like to support work to standardise and improve upon existing consumer reporting and takedown processes.

Awareness campaigns and consumer support

The government would like to commend and accelerate work being undertaken with consumer groups and awareness campaigns underway to address relevant harms - such as work with Citizens Advice, the Take Five campaign led by UK Finance, the Be Real campaign’s Body Image Pledge, the Advertising Association’s Media Smart programme, the NCSC’s Cyber Aware programme and the ICO’s Be Data Aware resources.

In addition to these organisations, the government also recognises the work carried out by GAMSTOP, which enables customers to exclude themselves from all online operators licensed to provide gambling facilities in the UK, as well as signposting to other support.

In addition to this, there is significant work being done to increase media literacy. Educational work will enable consumers to become increasingly conscious of harmful ads.

Consumer tools and services

Firms offer a range of service settings and features, operating system and browser ad controls, as well as ad blockers. We commend the offering of these tools and look to support and work with industry to facilitate more widespread adoption of, and awareness surrounding, these tools and services.

Ad disclosure tools and practices

In terms of advertisers, we welcome those who have clear processes in place for the disclosure of paid-for advertising and who monitor for compliance with these protocols. We recognise the work that the ASA, in particular, has done to ensure influencers are made aware of the requirement to notify their followers when they have been paid to post, using the #ad or #gifted hashtags.

Consultation question 10

Do you agree that we have accurately captured the main industry initiatives, consumer tools and campaigns designed to improve transparency and accountability in online advertising?

a) Yes
b) No
c) Don’t know

Please explain your answer, including reference to any further industry initiatives, consumer tools or campaigns that we should be aware of.

Consultation question 11

Should advertising for video-on-demand (VoD) services align more closely with broadcasting standards, or follow the same standards as those that apply online?

a) Broadcasting
b) Online

Please explain your answer.

5. Rationale for intervention

The call for evidence and research commissioned by DCMS over recent years recognises that there are a range of harms that can be attributed to online advertising. The rapid growth of the online advertising industry has meant that protections for consumers (and advertisers) have not kept pace and stakeholders have called for significant regulatory reform due to the inadequacy of the current regulatory system.

We have developed a taxonomy of harms (shown in figure 7), which suggests that online advertising harms can largely be divided into two key areas:

(a) the content of adverts; and (b) the targeting or placement of adverts.

The combination of harmful content and targeting can be especially harmful for some vulnerable groups.

In relation to the content of adverts, these harms can be separated into legal and illegal harms. The category of legal but harmful content is complex given the wide range of content it can encompass, including body image concerns as well as age restricted products and services. Illegal content may include advertising for fraudulent products or services, for illegal products such as weapons or drugs, for illegal activities such as adverts that facilitate immigration crime, modern slavery or sexual exploitation, as well as issues such as fake endorsement. The biggest concern raised by respondents to the call for evidence related to fraudulent advertising (including scams), with others citing concerns around legal but harmful advertising themes in advertising, such as body image.

Harm can also be caused by the targeting and placement of adverts, where advertising supply chain businesses aim to maximise their profits by providing a cost efficient system to match advertisers with users. Technological innovation enables advertisers to target consumers with a high degree of specificity, for example, through data that has been knowingly volunteered (e.g. age and gender on a social media platform), as well as inferred (e.g. perceived interests and salary). Specific vulnerabilities in users can also be exploited to target adverts, including those that are possibly harmful. Any measures to address harmful adverts incur costs, and unless all competitors in the market face these costs, there is no incentive for a single business to act further. This is a coordination failure which in part stems from the fact that current ASA regulation primarily covers advertisers - who, as consumer-facing businesses, are often driven by reputational incentives - rather than all businesses across the supply chain. This creates a regulatory gap, meaning that advertisers can currently, in effect, be held accountable for the role played by other actors in the placement and targeting of their adverts to consumers.

The advertising industry is also exposed to forms of harm, many of which also have manifestations for consumers. For instance, mis-targeting can have negative consequences for advertisers’ or publishers’ brand safety, but also for the consumers who were exposed to the mis-targeted adverts. Due to the intricacy and opacity of the supply chain, and commercial sensitivity over data, advertisers face imperfect knowledge about how to maximise engagement with the target audience. The advertiser partly relies on the supply chain to deliver adverts to the right audience in order to comply with the advertising code, and may have very little oversight of the processes undertaken - and data used - to match audiences to relevant advertising. Advertisers often face hidden information on how their adverts are delivered through the supply chain, and therefore safety and the reduction of harm in relation to targeting are not always factored into advertisers’ decisions, or fully within their control.

An opaque supply chain and low barriers to entry allow bad actors to enter the system undetected and unmonitored, meaning there is a lack of visibility over the process to publish harmful adverts. As well as being a challenge in terms of distinguishing the entrance routes for bad actors, a lack of transparency creates significant issues in terms of collecting data and being able to fully grasp the scale of the problem or empower an effective enforcement regime.

Due to the nature of the self-regulatory system, a high level of harmful activity could result from a low chance of repercussions as the supply chain has limited incentive to refer such activity to authorities due to the costs, and the lack of an existing mechanism by which to do so.

Currently, responsibilities for online advertising content and placement (apart from, in particular circumstances, the case of video-sharing platforms) primarily fall to advertisers, although secondary responsibility is placed on others involved in preparing or publishing marketing communications. There are limited circumstances in which these online service providers are held by the ASA to exercise primary control over the creative content and audience targeting of ads. Furthermore, the ASA approach is in contrast to the enforcement of many legal provisions: ASA reverses the burden of proof when it administers the CAP and BCAP Codes, with the effect that it is for the advertiser to prove how it has complied with the Code, rather than for the ASA to gather evidence and prosecute a breach.

There is no central body which collects and disseminates data on the quantity of harms, the type of harms or how they are handled when identified. As a result, platforms and intermediaries are not sharing this information and are not held directly accountable for their role in disseminating harms, and non-statutory regulators currently operating in the system have no formal investigatory powers available to them to better understand the picture. This is a challenge for industry as it is reliant on individual platforms and publishers being forthcoming with this information, which has historically not been the case. The evidence gap is stark, and this layered, opaque approach to monitoring harms reflects the inconsistent and unbalanced nature of online transparency when it comes to data. Information gathering is therefore an important factor when considering a holistic review of the regulatory framework. It is likely that better flows of information would help regulators understand the picture more effectively.

Equally, when consumers are affected by adverts that are harmful to them, it can be unclear how to raise a complaint, either with the relevant authority, or with the advertiser and/or actors in the supply chain. The current enforcement system relies on reacting to complaints, coupled with a range of proactive measures, but a comprehensive, holistic system is not in place to investigate all adverts that are intentionally causing harm. It is also extremely limited in its ability to address complaints about personalised adverts served to a restricted group, or to those who are more vulnerable to the nature of the content, for example vulnerable individuals seeing a gambling advert.

A lack of transparency and accountability in the supply chain, combined with incentives in the market, can lead to insufficient preventative action. This in turn reduces the trust of internet users and ultimately impacts the sustainable growth of the online advertising industry. Addressing these drivers of harm by strengthening the regulatory framework for online advertising will help to increase consumer trust in online advertising, supporting its long-term sustainability for advertisers, intermediaries, platforms and publishers alike. This may also create opportunities for innovation and growth within the online advertising sector.

Through increasing transparency across the supply chain, our aim is to unlock innovative new opportunities for firms. In exploring how to introduce more proportionate accountability for actors within scope, we look to work in tandem with efforts to make the online advertising ecosystem a competitive landscape where existing and new entrants can thrive.

Consultation question 12

To what extent do you agree with our rationale for intervention, in particular that a lack of transparency and accountability in online advertising are the main drivers of harm found in online advertising content, placement, targeting, and industry harm?

a) Strongly agree
b) Somewhat agree
c) Neither agree nor disagree
d) Somewhat disagree
e) Strongly disagree

Please explain your answer.

6. Options for regulatory reform

6.1 Building on the current regulatory landscape to increase accountability and transparency across the supply chain

Our rationale for intervention sets out the need to address harms in online advertising caused by the key drivers we identify: a lack of transparency and accountability. The ASA is taking innovative approaches to drive compliance and reduce harms, including using machine learning to understand harm more proactively and take action. In some cases it can also call on the support of statutory enforcement bodies where it cannot itself remedy non-compliance. However, as a non-statutory regulator, the ASA does not itself have statutory powers, which would increase its ability and resources to undertake proactive investigations, to collect all necessary data from actors in the supply chain described in chapter 1, and to enforce compliance with the rules by advertisers and across the supply chain.

We outline in this section options the government, regulators and industry could take to improve transparency and accountability in the system, reduce harms and increase trust to ensure a sustainable online advertising market. Our assessment is that any framework will need to spread accountability across the supply chain, rather than rely on holding advertisers primarily responsible for the dissemination of harmful content and potentially harmful use of targeting.

We are keen, where it is possible, to build on the existing self-regulatory framework, which, for advertisers, is focused around the role of the Advertising Standards Authority (ASA) and the codes it oversees. By building on the existing framework, where it meets our objectives to reduce harm, we can minimise transition costs and ensure any changes to the regulatory framework can take effect sooner.

The ASA’s development of its Online Platforms and Network Standards (OPNS) may go some way to addressing this, by holding intermediaries and platforms responsible for their part in ensuring the CAP Code for advertisers can be effectively overseen. However, as the OPNS remains under development, it is not yet known what it will cover or what level of uptake from platforms and intermediaries will be achieved. If the OPNS is in place this year, the government intends, in its response to this consultation, to take into account the extent to which it improves accountability and transparency compared with the measures we propose in this chapter to achieve the same aims.

The below sections first set out a range of options for regulatory oversight, followed by consideration of the range of measures that could be introduced across each of the actors, starting with advertisers. We then go on to examine how new obligations for online intermediaries, platforms and publishers could complement this.

We are keen for any government interventions to be targeted within the supply chain to achieve the greatest impact and leverage, and for the activities required by different actors to complement each other to ensure we create a coherent overarching framework.

6.1.1 Addressing the key drivers of harm

To address the harms caused by the market failures outlined in the previous chapters of this document, we need a regulatory system that promotes transparency and accountability across the online advertising ecosystem. Designing a sustainable, future-proof regulatory framework is vital. We know that the online advertising supply chain is sophisticated and complex, so the framework must be fully equipped to keep pace with modern technological advancements. Effective and proportionate information sharing among industry is central to the proactive reduction and prevention of harms.

We envisage increasing transparency through duties relating to information sharing that could be placed on relevant parties across the supply chain, which may include duties in relation to transparency reporting. These can be broken down into a few different categories:

  • Transparency to the regulator: Duties on what information publishers, platforms and intermediaries should share on a (i) regular basis, and (ii) on request, to promote transparency to the regulator.
  • Transparency across the supply chain: Standards for the reporting of online viewing to internet users and all parties operating in the supply chain, which can also be used by regulators to ensure the exposure of inappropriate audiences to particular types of advertising is minimised.
  • Transparency to the public: Annual/regular reporting on problematic ads and action taken to promote transparency to the public. Increased use of approaches such as advertising archives and libraries so it is possible to understand the extent of adverts in use, particularly in relation to high-risk advertising products and services, such as those which are age restricted.

Increasing accountability across all parts of the ecosystem will be vital to incentivising businesses to act responsibly, in accordance with their operating models. It will also help tackle bad actors, who have no incentive to comply with the rules of the regulatory regime, while strengthening the incentives for legitimate actors to comply. We envisage achieving this through:

  • A proportionate, robust and well-funded regulatory system that is able to rapidly address breaches of codes or standards as they arise, and prevent further breaches from occurring.
  • A strengthened range of sanctioning powers to enable responsible parties to be held to account, for example through the ability to impose fines or other sanctions on organisations in breach of the rules.
  • A straightforward and coherent system for regulation of online advertising.
  • An easy to use and efficient complaints system complemented with powers for a regulator to undertake proactive research to better understand harmful types of advertising, and take action in response.
  • Ensuring all parties across the supply chain are brought into the regulatory model and committed to making it work. This includes proportionate responsibilities for platforms, intermediaries and publishers for the harm caused from ads displayed or placed through their systems.

We want to ensure that the OAP empowers regulators to do the investigative work required to assess the prevalence of - and address - harmful advertising and bad actors. Through developing a policy which opens channels of communication, and incentivises information sharing, the industry can build a fuller picture of harms and take a cross-industry approach to tackling these issues.

6.1.2 Levels of regulatory oversight that could be applied across the supply chain

The powers that regulators have to undertake their regulatory functions make a significant difference to the way that a regulatory system operates and how effective it is. There are a range of options that could be employed with regards to the level of regulatory oversight the regulator is given to oversee the online advertising market, from the current self-regulatory system through to a full statutory system. These options could be implemented to oversee both the existing and any new rules on actors across the supply chain, and different levels could apply to different actors.

Decisions on the level of regulatory oversight will affect the ability of any regulator to develop rules, monitor, and enforce compliance with the framework. Our focus will be on ensuring that any new system comes together to create a coherent overarching framework - and that it is fair, proportionate and non-duplicative across the actors involved.

We are keen that the approach taken would provide regulators with the right tools and information to effectively assess harm and act accordingly. Further evidence on the scale and severity of the harms taking place could be developed by tackling the lack of transparency and monitoring in the market. This better flow of information in turn would enable regulators to then make further decisions on appropriate regulation and sanctions where appropriate.

Option 1: Self-regulatory approach

A self-regulatory approach would involve relying on the ASA’s existing regulation through the CAP code (including where these are already backstopped by legislation and related statutory enforcement bodies) and the ASA’s new OPNS proposal.

  • For advertisers, the approach would largely be in line with the current regulatory system, so this is the current baseline.
  • For intermediaries, publishers and platforms, new requirements are likely to be introduced by the ASA as a result of the OPNS, though it is not currently known what shape these may take or which firms will be in scope. Early conversations have indicated the OPNS proposal is likely to cover larger publishers and platforms, and large intermediary companies providing services to facilitate the matching of ads to online inventory. As such, it is difficult for the government to provide a cost analysis of the risks and benefits to business at this stage. However, whichever approach is followed, platforms will be subject to the standalone duty that the forthcoming Online Safety Bill will introduce to tackle fraudulent paid-for advertising.[footnote 29]

OPNS will need to ensure relevant market players (other than advertisers) are appropriately held to account, with effective regulation of the advertising codes across the online advertising ecosystem. While the OPNS will look to hold a broader range of actors to account, this will still be a part of the self-regulatory framework and sanctions and other powers of enforcement will be limited. A key consideration here would be how the new OPNS code interacts with the existing CAP Code and whether, collectively, the government considers them to go far enough to achieve the aims of the Online Advertising Programme by lowering the risk of harm and increasing trust amongst consumers.

The current ASA system is reliant on sanctions which are most often effective at incentivising legitimate actors to comply with the Codes. However, without harder-edged sanctions, this system is less likely to deal effectively with illegal activities such as fraud. It is also not clear that an industry-led approach reliant on reputational sanctions will be effective when applied to non-consumer facing actors in the supply chain, such as many third party intermediaries. In order for all actors to adhere to information-sharing measures and requirements to standardise reporting, and to identify bad actors in the supply chain, we may need regulators with the capability to audit, gather information or sanction organisations who do not comply.

A self-regulatory approach should have the benefit of having the lowest transition and regulatory costs. This will need to be weighed against whether a self-regulatory approach can deliver the benefits of reducing harm and increasing trust through improving transparency and accountability, without statutory powers being available. Concerns exist surrounding the self-regulatory framework’s ability to address illegal harms and hold intermediaries, platforms and publishers sufficiently to account.

Funding in a self-regulatory approach would continue to be voluntary - at present the ASA raises this through a voluntary levy on advertising spend, mainly collected by large media buying agencies. A key concern in recent years has been the sustainability of funding for the ASA - especially as its funding is linked, and therefore exposed, to the overall health and structural developments of the advertising market. Given the complexity, scale and speed of online advertising, there is also a question about whether current funding levels provide the ASA with adequate capacity both to manage reactive complaints and to invest fully in the more proactive work it is increasingly undertaking to assess harms and take action.

We are particularly keen to understand from respondents their view of the effectiveness of the ASA system as it is currently set up, including whether it has the right powers, resources (including funding), and regulatory approach at its disposal to successfully address the breadth and scale of harms. We are keen to understand respondents’ views in relation to both the ASA’s efficacy on areas already covered in the CAP Code, as well as how appropriate or effective it would be for them to oversee any new measures, for example for illegal harms, or responsibilities on a wider set of actors beyond advertisers, that may not currently be fully covered in the CAP Code.

Level of regulatory oversight: Option 1: Self-regulatory approach

Description: The ASA would continue to be the regulator for advertisers through the CAP Code and, for those platforms and intermediaries in scope, through its new OPNS codes.

Regulatory instruments available to ensure compliance: For regulating the CAP Code, the ASA has a range of sanctions, from the reputational consequences of published rulings to the withdrawal of media space, which, aside from some areas that are backstopped (such as by Trading Standards for misleading advertising), are not underpinned by law. These include:

-Promotion of adjudications

-Search-engine optimised publication of rulings

-Denial of third party media space

-Preventing an advert from appearing in search engine results (in partnership with the search engine)

-Use of ASA ads in paid-for space online

-Withdrawal of trade body privileges

-Pre-vetting (lasts for two years and could require persistent offenders to have their marketing material vetted before publication)

-Disqualification from advertising industry awards

It is not yet clear which sanctions will be used by the ASA to drive compliance with the OPNS. Although it has working arrangements in place with statutory enforcement bodies in some areas, the ASA itself does not currently have powers to enforce compliance through tougher statutory sanctions for repeat offenders, criminal actors who create illegal content, or those who refuse to comply. Funding remains based on voluntary levy collection.

Option 2: Introducing a statutory regulator to backstop more fully the self-regulatory approach

A statutory backstop is an alternative option, under which the ASA would continue as the frontline regulator, but backstopped more fully by a newly-appointed statutory regulator, to provide stronger powers of enforcement where needed. While statutory backstops such as the Gambling Commission or Buckinghamshire and Surrey Trading Standards already operate in discrete areas of the CAP Code, this option would relate to appointing a new statutory regulator to provide further powers of enforcement. This could, for example, be necessary to address both the presence of bad actors in the ecosystem and illegal harms, or when the sanctions available to the ASA do not go far enough to ensure compliance. Depending on the form and shape of the OPNS, we may therefore seek to backstop or make statutory some of the obligations placed on the actors in the wider supply chain.

The new statutory backstop could also have additional powers of enforcement to regulate illegal harms, bad actors, repeat offenders or in-scope firms, such as to gather information, ban or fine. They could also enforce either the existing self-regulatory codes placed onto a statutory basis, or any new statutory rules. Development of any new codes by the ASA would require agreement from the statutory backstop. An example of this form of co-regulation is the CAP Code for on-demand programme services (ODPS), where the elements relating to ODPS in the Code are underpinned by statute and so enforceable by Ofcom; whilst other parts are not, as they are self-regulatory.

A key consideration for a statutory regulator acting as a backstop to a frontline regulator is the threshold for when they would intervene. This is likely to be in relation to serious or repeated breaches or where regulatory mechanisms available to the frontline regulator have failed to satisfactorily tackle the non-compliance. This approach has the benefit of increasing rates of compliance with the rules, where the frontline regulator is not able to enforce them. The key evidence of the need for a backstop regulator is the current self-regulatory framework’s challenges in dealing with illegal harms and actors. Stronger powers of enforcement may be needed to sanction and deter illegal actors.

Under this option there are key permutations on which we would welcome feedback from respondents:

  • Permutation 1: backstop both the CAP code for online activities, and the new OPNS code for actors across the wider supply chain. The former would need to be decoupled from the broader CAP Code, given it applies much more widely to non-digital forms of advertising, and could take the form of a new Online CAP or ‘OCAP’ Code. A key consideration would be the practicalities of consolidating such a code under a single back-stop regulator, rather than relying, as the CAP Code does now, on multiple backstops in discrete areas.
  • Permutation 2: backstop OPNS Code for actors across the wider supply chain as well as high-risk advertising segments of the CAP Code where these are not already subject to a regulatory backstop, but where the risks around harm suggest greater powers for the regulator may be appropriate. This could include tougher rules for illegal advertising online, harmful but legal segments, and/or rules about harmful advertising targeting.
  • Permutation 3: backstop the OPNS only. This would leave the CAP Code as it is, bringing in a regulatory backstop to enforce a new code (like the OPNS, which the ASA is currently designing) that draws actors such as platforms, publishers and intermediaries into the ASA system. In this scenario, whilst the OPNS standards could be developed by the ASA, the statutory backstop would need to approve their contents (in the same way as happens currently with the BCAP Code).

Under each of the above permutations, we would be required to place relevant parts of the Code onto a statutory footing through legislation.

Our assessment of the OPNS will also be an important factor in deciding whether a self-regulatory system is sufficient or whether the ASA is likely to require further powers, or a statutory regulator to backstop some or all aspects of the codes. If the existing self-regulatory codes, and those developed as part of the OPNS, do not include sufficient measures for prevention, information gathering and monitoring, as well as action on both legal and illegal harms, then a statutory backstop may be necessary. A backstopped OPNS would require the code to be approved by the backstop regulator, which provides a level of independent oversight to ensure the rules are sufficiently ambitious.

Level of regulatory oversight: Option 2: Introducing a statutory regulator to backstop more fully the self-regulatory approach

Description: A statutory regulator acting as a backstop would have powers to enforce the self-regulatory codes in existence or in development (CAP and/or OPNS) through tougher sanctions on actors who do not comply. This would involve a co-regulation arrangement in which the regulator could delegate the writing and maintenance of codes to the industry self-regulator, with approval required for any code changes.

Regulatory instruments available to ensure compliance: In the case of a statutory regulator backstopping the self-regulatory code, regulatory sanctions would likely only be used in certain circumstances (e.g. for repeat or serious offenders, or those involved in disseminating illegal harms such as fraud). It may also be possible to assign some information gathering powers to the backstop regulator, enabling it to collect data and keep a strategic view of advertising harm.

Option 3: Full statutory approach

Finally, a full statutory approach would involve appointing a statutory regulator to introduce measures designed to increase transparency and accountability across the ecosystem. In this scenario, unlike co-regulatory arrangements such as those in place for the BCAP Code, a statutory regulator would use code-writing powers to design the Code, and carry out both regulation and enforcement. This approach may be most appropriate for illegal harms. We may consider it appropriate for the statutory regulator to utilise existing Codes as the basis for regulation, or empower the regulator to design a new set of codes across the actors they are responsible for regulating.

This is likely to be the most effective approach for increasing accountability in addressing illegal harms like fraudulent advertising (and will align with the new measures in the forthcoming Online Safety Bill), as criminal enforcement powers would likely be necessary for some of the measures required to address such activity. This approach could also have the most potential for increasing transparency, through introducing requirements on actors to share information as well as the potential for audit. This could lead to better information and oversight surrounding advertising harms, which is the first step in taking effective action.

Having a full statutory regulator across all legal and illegal harms in scope, as opposed to statutory backstops operating in discrete areas for certain harms, may create benefits, such as a more unified regulatory framework with stronger powers of enforcement across the board. Evidence from stakeholders, including responses to our call for evidence, suggests that a more cohesive regulatory framework, with stronger powers of enforcement, could be beneficial in tackling a wide range of harms relating to online advertising.

We also recognise that there are challenges and potential costs associated with this option, which will be important to consider. A full statutory approach for online advertising, applicable to all parties including advertisers, is likely to prove incompatible with the current arrangement of regulatory bodies (e.g. the ASA, CAP, Asbof), which is long-standing and has buy-in from across the industry. The withdrawal of these bodies from the regulation of online advertising would risk the loss of expertise and experience, both of which would take time and cost to rebuild in any new regulatory body.

Introducing a statutory regulator that would put in place new measures and use statutory enforcement powers would likely entail higher transition costs to businesses, such as those of familiarisation and compliance, including the time taken to gain an understanding of the regulations and train staff. Care would be needed to ensure that any new rules continue to support innovation, and that they, as well as the regulatory framework in place to support them, are flexible enough to remain relevant, nimble and adaptable as technology develops.

Supporting innovation and adaptability of Codes are two positive features of the current, self-regulatory system. Additionally, regulated sectors may have higher barriers to entry, so detailed consideration of the companies in scope and development of provisions for smaller companies will be important to encourage innovation and competition. We also recognise that this will be a significant shift away from the current self-regulatory model.

Level of regulatory oversight: Option 3: Full statutory approach

Description: A statutory regulator would have powers to ensure appropriate measures are put in place and enforced so that all actors in the supply chain address both legal and illegal online advertising harms. The statutory regulator would be empowered to take appropriate action.

Regulatory instruments available to ensure compliance:

● Rule making powers

● Sanctions
-Fining powers
-Blocks and bans
-Senior person accountability
-Pre-vetting requirements for more high-risk advertising / repeat offender advertisers

● Information and investigatory powers
-Powers to request data
-Powers to require reporting from regulated entities
-Power to audit regulated entities
-Publication of reports summarising findings
-Proactive investigations on areas of concern

6.1.3 Measures which could build on the current codes for advertisers

Alongside consideration of the options for the overall regulatory model set out above, we have assessed the scope for potential additional measures that could be applied to advertisers. These are intended to build on the responsibilities already placed on advertisers in the CAP Code. Some measures could be integrated into any of the above options, while others may require specific levels of oversight to be effective and so may not be available under all three.

The main aim of the CAP Code as currently operated is to regulate the behaviour of legitimate actors who wish to advertise their products and services to the public or other businesses. A key underlying premise to this is that there is a legitimate right on behalf of the advertiser to advertise their products and services, but that due to the nature of the products or services in question, it is necessary to have rules in place to prevent harm. This includes general rules such as the approach to disclosure of advertising, or rules around misleading advertising. The CAP Code also has sections that deal with higher risk products such as alcohol, medicines, weight loss, and gambling.

The following are a range of measures that could be introduced for advertisers over and above the rules already set out in the CAP Code. We will consider how any of the measures for advertisers can be supported by ad agencies. In discussing the impact of such measures, we recognise that any new measures that go beyond the current CAP Code will be subject to familiarisation costs.

Advertisers

Transparency measures

Measure: Record keeping

Assessment of benefit (with regards to addressing harms): Requiring records to be kept in a standardised format would enable audit by the regulator where an advertiser is suspected of being in breach of the codes. This could include records on campaign objectives, KPIs and targeting instructions, as well as campaign delivery reports from suppliers. Such a measure would look to have an impact on harms related to the targeting of vulnerable people.

Assessment of impact (on businesses): Some advertisers (likely smaller companies with limited capacity) may not have advanced advertising strategies, objectives or KPIs. Any record keeping expectations would need to be proportionate to the size and scale of the organisations involved.

Accountability measures

Measure: High-risk advertising - self-declaration (to enable closer scrutiny from the supply chain)

Assessment of benefit (with regards to addressing harms): This measure is intended to support compliance by those advertisers who may risk breaching codes related to legal but harmful advertising. The onus on advertisers would be to self-declare an interest in placing ‘high-risk’ ads and to comply with increased scrutiny. At the point of advertiser verification, platforms and/or intermediaries could request disclosure of the categories of ads the advertiser wishes to place (e.g. those related to alcohol, gambling, medicine or weight loss). Additional monitoring of advertising could therefore apply in these instances. This could be a means of monitoring legal but harmful advertising more closely without creating an unrealistic expectation to pre-vet all ads by all advertisers.

Assessment of impact (on businesses): If coupled with a requirement for advertiser verification, this additional disclosure may not cause significant further costs. However, closer monitoring or further verification of the advertising placed by those advertisers whose ads fall into categories of interest would incur costs.

Measure: Pre-vetting for serious and repeat offenders - by the supply chain (human or computer review)

Assessment of benefit (with regards to addressing harms): This measure would depend on intermediaries and/or platforms already having advertiser verification and identity verification standards in place. Advertisers identified as repeat offenders for breaches of the codes could be required to submit adverts for pre-vetting, as part of a process of applying more scrutiny across the supply chain in instances where breaches of the codes may be more likely. These measures would help ensure breaches by both legitimate and illegitimate advertisers are reduced.

Assessment of impact (on businesses): This could carry costs for a select number of advertisers who are repeatedly non-compliant, as well as for organisations across the wider supply chain to implement review systems. Such an approach may also be difficult to administer given the nature, scale and speed of programmatic advertising.

Measure: Demonstrating care surrounding high-risk advertising and targeting (e.g. to avoid vulnerable groups)

Assessment of benefit (with regards to addressing harms): Such a measure, relating to the audience targeting of some online ads (subject to age-targeting restrictions in the CAP Code), is being developed by the ASA. It could be extended to cover other higher-risk products or creative content, when considering where they are targeted, in order to avoid ‘high-risk’ advertising being served to vulnerable groups.

Assessment of impact (on businesses): Costs to business could include developing technological approaches to effectively target high-risk advertising content away from vulnerable groups and children.

6.1.4 Designing new measures for other actors in the supply chain

Achieving better transparency and share of accountability across the system will require input from each of the actors in the supply chain (as described in chapter 2). This will need to take into account the range of activities performed by different actors so that we achieve a coherent overall regulatory framework. The efficacy of the system as a whole will depend on the activities we ask different actors to undertake, combined with an appropriate level of oversight that effectively binds those organisations into the system.

To ensure a coherent and holistic regulatory framework, we are considering new rules for intermediaries, platforms and publishers which will sit alongside those already established for advertisers. We set out in this section the potential measures that could be applied to bring appropriate levels of accountability and transparency to the wider supply chain. These measures could be overseen by some or all of the three options for levels of regulatory oversight set out above.

We are exploring the possibilities of a systems and processes approach for these actors, which would complement current responsibilities on advertisers, creating clear roles and responsibilities across the online advertising supply chain. These processes would - for platforms and intermediaries - address the lack of incentives to share data, check who is advertising, and monitor for problematic content. They should therefore improve transparency in the system. This would help to identify bad actors and allow legitimate advertisers to have better oversight over where their money is being spent and to whom their content is ultimately served. This approach should also allow for flexibility within the regulatory framework so that it does not become quickly out-dated and unable to keep pace with the ever-evolving online ecosystem.

Intermediaries

Although intermediaries (and online platforms) are not consistently held to account under the existing regulatory framework, there are a number of industry initiatives that are establishing standards in this area. While there are industry standards in place, there is usually no requirement to opt in and no independent regulation ensuring compliance, which means the incentive to adopt the standards is limited to the businesses in question seeing a commercial advantage or social responsibility in doing so. These standards are also not routinely geared towards reducing consumer harms, but rather improving intermediaries’ business behaviours (as predominantly business-to-business companies), though some will have positive consumer spillovers.

The ASA’s development of the OPNS will create explicit obligations on intermediaries to play a role in ensuring that the CAP Code in place for advertisers is being effectively applied. This may go some way towards addressing transparency and accountability issues in the system. However, given the business-facing aspect of intermediaries, they are not very visible, nor widely known by consumers and the public, and so reputational sanctions may be less effective here.

The landscape of intermediaries is large and complex. We will ensure that any reforms to the current regulatory framework align with the UK’s e-commerce intermediary liability regime, which limits the liability of online intermediaries for illegal, third-party content stored on their services at the request of third parties. Intermediaries that host third party content can only be held liable for that content if they have knowledge of it, or, upon obtaining knowledge of it, fail to remove it expeditiously. These liability rules strike the right balance between supporting the innovation and growth of the online advertising industry, and ensuring that companies address infringing content on their services effectively. These rules are without prejudice and complementary to intermediaries’ existing duties and responsibilities under other legal regimes (particularly in relation to their own practices under consumer protection law).

In considering the proportionality of any measures introduced, we propose there should be a focus on systems and processes that would have the highest impact. The potential measures are listed in the table below.

In the discussion of market dynamics in chapter 2, we specified the main actors on both the demand and supply sides of the intermediary supply chain and mentioned some of the additional market participants involved in managing the data, targeting practices and analytics in online advertising. These actors included advertiser ad servers, demand-side platforms (DSPs), supply-side platforms (SSPs), and publisher ad servers. When describing a measure below as something that could be applicable to ‘all intermediaries’, we take an inclusive view of the term intermediaries and refer to those main actors on both the demand and supply side and all those ad tech actors for whom such measures could reasonably apply. In order to ensure regulatory measures apply in a proportionate manner, a possible alternative approach would be to apply some or all of these measures only to systemic or substantial firms operating in the UK market.

Intermediaries

Type of Intermediary (e.g. SSP/DSP) Measure Assessment of benefit (with regards to addressing harm) Assessment of impact (on businesses)
Transparency measures      
All ad tech intermediaries Record keeping Standardised record keeping to enable audit by a regulator would bring transparency across intermediaries and allow regulators to:

● Investigate cases of harmful advertising. Relevant records might include transaction data (volumes of impressions, dates) with consistent identification of advertisers, publishers and ad creative, and help identify cases where audiences have been inaccurately measured or exaggerated.

● Understand harmful advertising threats that intermediaries prevent. Relevant records might include the number of bad ads and advertisers prevented from advertising, by category, and volume.

● Identify the habits of previous bad actors
Likely to have some, limited impact on time and resource, depending on precise standardisation of records.
DSPs and SSPs Identity verification standards and information sharing on serious or repeat offenders to enable action to be taken by others in the supply chain and law enforcement This measure could build on, standardise and/or put impetus behind the range of initiatives whereby intermediaries use publicly accessible files or ledgers to verify key players within the supply chain. A DSP may implement standards to prevent malvertising (e.g. buyers.json) and an SSP may implement standards such as sellers.json to tell buyers which supply they are authorised to sell. These publicly available files increase transparency by allowing buyers to discover which entities are direct sellers of, or intermediaries in the selling of, digital advertising (e.g. through sellers.json), and accountability by allowing for quick identification of threat actors when attacks occur (e.g. through buyers.json). Familiarisation and adherence costs would apply here, depending on how far the measure builds on existing initiatives. The impact on businesses would be dependent on the level of regulatory oversight and the size of the business.

Implementing requirements for conducting customer due diligence would create two costs: first, setup costs for systems and processes for verification of advertisers; second, the ongoing cost to DSPs and SSPs of conducting the verification.
Accountability measures      
All ad tech intermediaries which deal directly with advertiser clients Minimum standards for advertiser identity verification Expectation to introduce/make more uniform/enhance in certain instances current advertiser identity verification to meet minimum standards, as set by the regulator. This would help to identify bad actors in the system before content which may be harmful is published through open display. Could require intermediaries to instate or in some instances improve existing systems and processes to meet the minimum standards, which may incur some implementation costs.
All ad tech intermediaries which deal directly with advertiser clients High-risk advertising - self-declaration (to enable closer scrutiny from supply chain) At the point of advertiser verification, intermediaries could request disclosure surrounding the categories of ads the advertiser wishes to place (e.g. those related to alcohol, gambling, medicine, weight loss). This could be a means of creating closer, additional monitoring of legal but harmful advertising, without creating an unrealistic expectation to pre-vet all ads by all advertisers. If coupled with a requirement for advertiser verification, this additional disclosure would not cause significant further costs. However, closer monitoring or further verification of the advertising placed by those advertisers whose ads fall into categories of interest would incur costs.
All ad tech intermediaries which deal directly with advertiser clients Pre-vetting for serious and repeat offenders (human or computer review) This measure would be dependent on advertiser verification already being in place. Intermediaries could require repeat offenders to submit adverts for pre-vetting as part of a process of re-gaining access to placing adverts without pre-vetting following repeat breaches of the rules. These measures would help ensure breaches from both legitimate and illegitimate advertisers are reduced. Could incur significant costs on intermediaries to implement adequate systems and processes, depending on the threshold for ‘serious and repeat offenders’ and therefore the number of advertisers who fall in this bracket.
All relevant ad tech intermediaries Measure to demonstrate care surrounding high-risk advertising and targeting (e.g. avoid vulnerable groups) Such a measure, relating to the audience targeting of some online ads (subject to age-targeting restrictions in the CAP Code), is being developed by the ASA. This could be extended to cover other higher risk products and services, and for the measure to also be placed on intermediaries given their key role in targeting within the supply chain. Costs to business could include developing technological approaches to effectively target high-risk advertising content away from vulnerable groups / children.
All relevant ad tech intermediaries Minimum standards for advertising acceptance policies The regulator could set minimum standards for intermediaries to uphold stringent advertising acceptance policies. These would involve review/monitoring requirements to scan ad creative and landing pages prior to and during campaigns in areas of high-risk advertising. Requiring that intermediaries publicly disclose and uphold stringent advert acceptance policies could help in creating a higher bar for entry into the online advertising supply chain and reduce the number of harmful or illegal ads. Costs of implementation may apply here and DSPs may refuse some business due to a stricter ad acceptance policy.
SSPs Publisher on-boarding policy SSPs have it within their control to define their own publisher on-boarding policy. Requiring that SSPs publicly disclose and uphold stringent publisher on-boarding policies could help in reducing brand safety concerns and advertiser harms. Costs of implementation may apply here and SSPs may refuse some business due to stricter publisher on-boarding policy.
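The sellers.json mechanism referenced in the table above can be illustrated with a short sketch. Assuming a buyer has already fetched an SSP’s sellers.json file (the payload and entity names below are hypothetical, and the fields follow the IAB Tech Lab sellers.json specification), the file can be indexed so that a seller ID seen in a bid request resolves either to a named, verifiable entity or to a transparency gap:

```python
import json

# Hypothetical sellers.json payload; real files are published by SSPs at
# https://<ssp-domain>/sellers.json per the IAB Tech Lab specification.
SELLERS_JSON = """
{
  "contact_email": "adops@example-ssp.com",
  "version": "1.0",
  "sellers": [
    {"seller_id": "1001", "name": "Example News Ltd",
     "domain": "example-news.co.uk", "seller_type": "PUBLISHER"},
    {"seller_id": "2002", "name": "Example Reseller",
     "domain": "example-reseller.com", "seller_type": "INTERMEDIARY"},
    {"seller_id": "3003", "seller_type": "PUBLISHER", "is_confidential": 1}
  ]
}
"""

def index_sellers(raw):
    """Parse a sellers.json document into a lookup table keyed by seller_id."""
    doc = json.loads(raw)
    return {s["seller_id"]: s for s in doc.get("sellers", [])}

def verify_seller(index, seller_id):
    """Classify a seller_id seen in a bid request.

    Returns (status, entry), where status is one of:
      'unknown'      - the SSP does not list this seller at all
      'confidential' - listed, but identity withheld (a transparency gap)
      'verified'     - listed with a name and domain the buyer can check
    """
    entry = index.get(seller_id)
    if entry is None:
        return "unknown", None
    if entry.get("is_confidential") == 1:
        return "confidential", entry
    return "verified", entry

index = index_sellers(SELLERS_JSON)
print(verify_seller(index, "1001")[0])  # verified
print(verify_seller(index, "3003")[0])  # confidential
print(verify_seller(index, "9999")[0])  # unknown
```

A regulatory measure standardising such files would, in effect, reduce the share of bid requests that resolve to ‘unknown’ or ‘confidential’ sellers.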

Platforms

Platforms tend to set their own standards and rules when it comes to verifying advertisers, using data, and removing inappropriate or harmful content, alongside being required to meet obligations under existing consumer law. This means that there is not a standardised approach across the various platforms and there are also limited enforcement requirements to ensure compliance. Due to the low barriers to entry and ease with which any business is able to advertise through platforms, we anticipate that it is easier for bad actors to enter the system through owned and operated platforms, rather than the open display market. The open display market typically requires interaction with numerous organisations and therefore the likely employment of a media agency self-regulated by the ASA.

Since 1 November 2020, Ofcom has been regulating video sharing platforms (VSPs) with the full regulatory guidance for the regime being published in October 2021. There are (at the point of publication) 19 UK-established VSP providers notified to Ofcom under the VSP framework, including TikTok, Snap, Vimeo, OnlyFans and a number of sites hosting adult content. The VSP regulations require providers to take appropriate measures to protect users from harmful content and restrict under-18s’ access to unsuitable material. It also contains certain advertising requirements which VSPs must adhere to.

The current regulation of VSPs shares broadly similar objectives with the upcoming Online Safety legislation, with both regimes focusing on regulating systems and processes. It is therefore the intention of the government to repeal the VSP regime once the Online Safety framework is in force. This will leave a gap in relation to advertising in paid-for space online which is not covered by this regime. The VSP regime is providing the government with a solid foundation to inform and develop any future advertising requirements on platforms. However, unlike the VSP regime, any new advertising requirements will apply to all platforms operating in the UK and will not be limited only to platforms which meet the legislative definitions of ‘VSP’ and ‘UK established’.

The Online Safety Bill will place obligations on platforms to address fraud, where it is facilitated through user-generated content or search results. For Category 1 and 2A services (the largest user-to-user and search services and the services with the highest reach) there will be a new standalone duty to put in place proportionate systems and processes to minimise the publication or hosting of any fraudulent advertising on their service. However, the spectrum of other advertising harms is not in scope of the Bill, as can be seen from the taxonomy of harms in section 3.3.

In operating a closed supply chain, platforms have a relationship both with advertisers and with consumers. As a result, it is appropriate to consider whether the range of measures to improve transparency and accountability that apply to intermediaries and publishers should also apply to platforms in relation to advertising. There are a range of possible measures we have identified specifically for platforms, such as annual reporting on complaints and action taken, and tools for advertisers to implement brand safety.

Given platforms’ market position, it may also be appropriate to consider going further by imposing principles or a duty of care, placing an obligation on businesses to show how they are actively minimising harms for their users, particularly the vulnerable. An alternative approach, adopted by the FCA in relation to financial services firms, is where the regulator sets out principles, as well as detailed rules, that the regulated parties follow.

Platforms

Measure Assessment of benefit (with regards to addressing harm) Assessment of impact (on business)
Transparency measures    
Record keeping Standardised record keeping to enable audit by a regulator would provide greater transparency across platforms and allow regulators to:

● Investigate cases of harmful advertising. Relevant records might include transaction data (volumes of impressions, dates) with consistent identification of advertisers and ad creative, and help identify cases where audiences have been inaccurately measured or exaggerated.

● Understand harmful advertising threats that platforms prevent. Relevant records might include the number of bad ads and advertisers prevented from advertising, by category, and volume.

● Identify the habits of previous bad actors
Likely to have some, limited impact on time and resource, depending on precise standardisation of records.
Public libraries/archives of high-risk content categories This measure would be dependent on the measure for advertisers that would require disclosure at the point of advertiser verification for high-risk categories of ads. Requirements could then apply for either advertisers or platforms to upload high-risk ads to a public archive (similar to what some platforms do surrounding political advertising, and in the case of Facebook, for all adverts). There would likely be resource costs and limited costs of compliance for those players held to account under this measure.
Annual reporting on complaints and action taken Requirement for platforms to publish reports annually of the issues raised and how they deal with them. This data would help the regulator track and assess the scale of the problem as well as placing impetus on platforms to take action in response. Costs related to resourcing and implementing adequate systems and processes to complete reporting on an annual basis.
Accountability measures    
Minimum standards for advertiser identity verification In certain circumstances, expectation to introduce/make more uniform/enhance current advertiser identity verification to meet minimum standards, as set by the regulator. This would help to identify bad actors in the system before content is served to internet users, preventing them carrying out illegal and fraudulent activities at source. Could require platforms to instate or in some instances improve existing systems and processes to meet the minimum standards, which may incur some implementation costs.
High-risk advertising - self-declaration (to enable closer scrutiny from supply chain) At the point of advertiser verification, platforms could request disclosure surrounding the categories of ads the advertiser wishes to place (e.g. those related to alcohol, gambling, medicine, weight loss). Additional monitoring of advertising could therefore apply in these instances. This could be a means of creating closer monitoring of legal but harmful advertising without creating an unrealistic expectation to pre-vet all ads by all advertisers. This approach would require the advertiser to identify themselves. If coupled with a requirement for advertiser verification, this additional disclosure would not cause significant further costs. However, closer monitoring or further verification of the advertising placed by those advertisers whose ads fall into categories of interest would incur costs.
Pre-vetting for serious and repeat offenders (human or computer review) This measure would be dependent on advertiser verification already being in place. Platforms could require repeat offenders to submit adverts for pre-vetting as part of a process of re-gaining access to placing adverts without pre-vetting following repeat breaches of the code. These measures would help ensure breaches from both legitimate and illegitimate advertisers are reduced. Could incur significant costs on platforms to implement adequate systems and processes, depending on the threshold for ‘serious and repeat offenders’ and therefore the number of advertisers who fall in this bracket.
Measure to demonstrate care surrounding high-risk advertising and targeting (e.g. avoid vulnerable groups) Such a measure, relating to the audience targeting of some online ads (subject to age-targeting restrictions in the CAP Code), is being developed by the ASA. This could be extended to cover other higher-risk products in order to protect vulnerable groups from harmful content. Costs to business could include developing technological approaches to effectively target high-risk advertising content away from vulnerable groups / children.
Minimum standards for proactive review of content and tests for threats Ad review/monitoring requirements could be placed on platforms to scan ad creative and landing pages prior to and during campaigns in areas of high-risk. Costs of implementation may apply here and platforms may refuse some business due to stricter ad acceptance policy.
Principles or duty of care to minimise harm This measure could involve ensuring the appropriate checks and balances are in place to minimise harms for internet users. This would likely involve reviewing systems and processes rather than specifying actions to be taken. Costs of reviewing current systems and processes.
Advertiser acceptance policies (and publisher acceptance policies where applicable). Clearly defined publisher and advertiser on-boarding policies. Requiring that platforms publicly disclose and uphold stringent ad acceptance policies could help in creating a higher bar for entry into the online advertising supply chain and reduce the number of harmful or illegal ads.

Requiring that platforms that serve ads to other publishers publicly disclose and uphold stringent publisher on-boarding policies could help in reducing brand safety concerns and advertiser harms.
Costs of implementation may apply here as platforms may refuse some business due to stricter acceptance policies.
Tools for advertisers to implement brand safety A platform may provide tools so advertisers can implement brand safety policy. Such a measure would aid in reducing advertiser harms and reduce the placement of advertising next to illegal, inappropriate or harmful content which may damage the brand’s image. The adoption and implementation of such tools would carry costs. The impact on business here would be dependent on the level of regulatory oversight and the size of the business.
Customer empowerment tools - minimum standards (complaints and/or filtering of advertising content) This could include standardising and improving consumer-facing tools so that internet users can easily flag problem ads to regulators and can specify ad types they don’t want to see on a particular platform/publisher. Costs likely to be minimal for platforms in this instance in terms of implementation. However, providing more customer control may have knock-on costs elsewhere in the supply chain.
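The standardised record-keeping measure discussed in the tables above can be made concrete with a brief sketch. The field names and figures below are illustrative assumptions, not a proposed standard; the sketch simply shows how uniform transaction records could support audit by a regulator and flag exaggerated audience measurement:

```python
import csv
import io
from dataclasses import asdict, dataclass, fields

@dataclass
class AdTransactionRecord:
    """One standardised row per served campaign line (illustrative fields only)."""
    date: str                  # ISO 8601 date the impressions were served
    advertiser_id: str         # consistent identifier for the advertiser
    publisher_domain: str      # where the ad was ultimately served
    creative_id: str           # consistent identifier for the ad creative
    category: str              # e.g. "gambling", "medicine", "general"
    impressions_billed: int    # impressions the advertiser was charged for
    impressions_measured: int  # impressions independently measured

    def audience_discrepancy(self):
        """Ratio of billed to measured impressions; values well above 1.0
        may indicate the inaccurate or exaggerated audience measurement
        the record-keeping measure is intended to surface."""
        if self.impressions_measured == 0:
            return float("inf")
        return self.impressions_billed / self.impressions_measured

def export_for_audit(records):
    """Serialise records to CSV with a fixed column order, so a regulator
    could audit submissions from different firms uniformly."""
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=[f.name for f in fields(AdTransactionRecord)]
    )
    writer.writeheader()
    for record in records:
        writer.writerow(asdict(record))
    return buf.getvalue()

records = [
    AdTransactionRecord("2022-03-09", "adv-001", "example-news.co.uk",
                        "cr-42", "gambling", 120_000, 80_000),
]
print(records[0].audience_discrepancy())  # 1.5
print(export_for_audit(records).splitlines()[0])
```

The value of such a scheme lies less in any particular field than in the consistency of identifiers across firms, which is what would let a regulator trace an advertiser or creative through the supply chain.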

Publishers

Publishers play a role in hosting and serving adverts to consumers in order to monetise their content. However, there are some key differences between platforms and publishers which suggest that we may want to take a different approach to building on current regulation for these two categories of actors. Online publishers tend to operate with advertisers through intermediaries, whereas platforms interact with advertisers through an integrated sales function, rather than through separate intermediary organisations. As a result, open display publishers are driven by different incentives than platforms.

For press publishers, the government is strongly aware of sustainability concerns and their importance to society and democracy in considering proportionate levels of regulation. Increasing obligations on the press in relation to a core revenue stream could have cost implications for a sector facing significant challenges to its long-term sustainability. On the other hand, additional transparency across the online advertising ecosystem could support press publishers to have clearer sight over the supply chain and ultimately bolster their ability to sell their inventory to advertisers who seek high brand safety standards. Higher trust in online advertising should ensure a more sustainable market to the benefit of publishers.

We are considering some of the same measures as platforms in terms of strengthening regulatory powers to adapt/develop standards, which could include minimum standards for advertiser identity verification, along with the same range of options regarding the degree of intervention, i.e. voluntary/mandatory code or duties of care. However, we are exploring whether these are proportionate to the size, scale and sustainability of publishers.

Publishers

Measure Assessment of benefit (with regards to addressing harm) Assessment of impact (on business)
Transparency measures    
Record keeping Standardised record keeping of adverts served for regulators to audit if necessary would increase transparency across the advertising served by publishers and allow regulators to more accurately assess the threat level of different harms. It would also allow regulators to identify the historic offences of bad actors. Likely to have some impact on time and resource, depending on precise standardisation of records. Care would need to be taken not to overburden publishers, given existing pressures and large number of smaller publishers who may not have adequate capability.
Accountability measures    
Minimum standards for advertiser identity verification This would help to identify bad actors in the system before content is served to internet users. This would only apply in the case that a publisher sells advertising directly to an advertiser without the involvement of an intermediary. Could require publishers to instate or in some instances improve existing systems and processes to meet the minimum standards, which may incur some implementation costs. We recognise and will take into consideration that the selling of advertising directly is less common and forms of identity verification may already take place in such deals.
Customer empowerment tools - minimum standards (complaints and/or filtering of advertising content) This could include standardising and improving consumer-facing tools so that internet users can easily flag problem ads to regulators and can specify ad types they don’t want to see on a particular platform/publisher. Costs likely to be minimal for publishers in this instance in terms of implementation. However, providing more customer control may have knock-on costs elsewhere in the supply chain. As above, the pressures and concerns unique to some publishers, and the capacity that smaller publishers have, would be taken into account as part of any decisions to progress such a proposal.

6.1.5 Applying the options for regulatory oversight to the proposed measures for each actor

Advertisers

Some of the measures proposed for advertisers, for example to demonstrate care surrounding high-risk advertising and targeting, mirror those already being brought in by the self-regulator surrounding specific high-risk categories of advertising such as gambling and so may work well as part of option 1, a self-regulatory framework. Enforcing such a measure using the sanctions available to the existing self-regulatory framework could further formally establish expectations surrounding high-risk advertising without being overly prescriptive.

The measure relating to pre-vetting for serious and repeat offenders may necessitate option 2, in which certain aspects of a code signed off by a statutory regulator, specifically those relating to illegal actors/harms and repeat offenders, could be backstopped by further statutory powers of enforcement more capable of addressing such harms/actors. However, other measures such as obligations surrounding record keeping and self-declaration in relation to high-risk advertising would likely necessitate a full statutory approach, under which codes relating to all harms, legal and illegal, would be signed off and enforced by a statutory regulator. Record keeping and self-declaration do not necessarily relate to illegal harms but would likely necessitate stronger powers of enforcement than those that currently exist under the self-regulatory framework to ensure compliance and effectiveness across all advertisers.

Intermediaries

Some of the proposed measures to increase transparency for intermediaries, such as the adoption of standards (for identity verification), are already present across discrete areas of industry. As a result, codes that further formalise the expectation to adopt these standards could be brought in as part of a self-regulatory framework. Similarly, accountability standards that involve demonstrating care surrounding high-risk advertising and targeting, minimum standards for advertiser identity verification, and ad acceptance and publisher on-boarding policies, could be proposed as part of a self-regulatory framework.

Pre-vetting for serious and repeat offenders, and minimum standards for proactive review/tests for threats, are likely to be the exception, as these measures would require stronger powers of enforcement than available to the self-regulators. As such, they would need at least a statutory backstop for certain codes related to certain harms/actors, or full statutory regulation of all codes. While the aforementioned standards could be introduced under a self-regulatory framework, it is worth mentioning that in order to ensure these standards that relate to a range of legal and illegal harms and actors are effectively enforced, a full statutory approach that would make available statutory powers of enforcement for all codes introduced/signed off would likely be necessary.

Platforms

For those platforms that also operate online display intermediary businesses of meaningful scale, all measures that apply to intermediaries would apply in the first instance. As the level of regulatory oversight needed for measures already proposed for advertisers and intermediaries is discussed above, this section will focus on the levels of regulatory oversight needed for those measures unique to platforms. Some of these measures build on standards that some platforms already have in place, such as tools for advertisers to implement brand safety, or customer empowerment tools.

Similarly, reporting on complaints and action taken, and principles of care to minimise harms, may already be existing practice for some platforms. Therefore these measures could be further incentivised by formalising them as expectations under option 1, the self-regulatory framework. However, in order to make principles into duties, require the reporting to be standardised and make these practices uniform across firms in scope, a full statutory regulator that would enforce all codes introduced would be required.

In addition, more novel requirements such as the measure to keep public libraries/archives for high-risk content categories would likely require such a full statutory approach. Similar to intermediaries, pre-vetting for serious and repeat offenders, and minimum standards for proactive review/tests for threats, are likely to be the exception as these measures would require stronger powers of enforcement than available to the self-regulator and so would necessitate at least a statutory backstop for certain codes related to certain harms/actors, or full statutory regulation of all codes.

Publishers

Measures suggested for publishers include record keeping, minimum standards for proactive review of content and threats, and customer empowerment tools. Due to the different concerns that relate to some publishers (e.g. freedom and sustainability of the press), a self-regulatory framework as outlined in option 1 would likely be most appropriate. Backstopping certain codes related to publishers or applying full statutory regulation to these actors would likely not be appropriate.

Video-on-demand (VOD) services

As outlined in section 4.1.1, the current regulatory framework treats advertising on regulated VOD services in the same way as other online advertising, subject to the CAP Code.

We would welcome views from respondents on the approach to VOD advertising regulation going forward, and in particular whether and how any new requirements for online advertising should be applied to such services. This includes how any requirements for VOD align with requirements that may be applied to platforms and publishers online, including Video Sharing Platforms (VSPs).

Whilst VOD advertising is increasingly traded using automated and personalised approaches, broadcasters (who are responsible for some but not all VOD) generally sell their VOD advertising inventory directly to advertisers and agencies, with more limited intermediation than is the case for other publishers, like news websites.

6.1.6 Developing an overarching regulatory framework for the online advertising ecosystem

We have outlined above the levels of regulatory intervention that could be introduced for all actors, demonstrated what different regulatory approaches could look like for each actor, and provided some analysis on the likely effectiveness and proportionality of these approaches.

We would like respondents to consider if they agree with our assessment of the level of regulatory oversight likely necessary for each measure. Where we have demonstrated that measures could work under more than one of the options for regulatory oversight, we would like respondents to specify which option they think would be a more appropriate approach for the actors and measures in question.

We present this to invite views on the most appropriate approach to take in our review of the regulatory framework for online advertising and our consideration of any reforms. It would be particularly helpful to gain views on what level of regulatory oversight is likely required to bind the actors into the system, and what incentives respondents consider will be most successful in achieving engagement and compliance with the system. We are also mindful of competition concerns and creating a system that does not stifle innovation. With that in mind, we are keen to hear views about what will be proportionate for different actors, recognising that the operators in the online advertising ecosystem range in size and scale of operation.

Some organisations may be able to better accommodate new regulatory burdens than others, and we will need to consider carefully where any new obligations may be best placed given the varying levels of capability and capacity across the supply chain - and to avoid duplication of effort. For example, we might reasonably expect larger advertising-funded platforms which run ‘owned and operated’ systems to do significantly more than what we might expect of a smaller SSP or DSP with less capacity to act. We will also consider the cost to industry of implementing any additional regulatory requirements, so these can be compared against the objective of achieving significant overall benefits to consumers and businesses from a reduction in harms and increased trust in online advertising.

We are considering a range of tools and measures that platforms, advertisers and intermediaries could adopt in order to better protect consumers, improve the sustainability of, and ultimately increase trust in, online advertising. In giving the actors in the ecosystem greater responsibility to act, we need to ensure that we consider the most appropriate regulatory framework capable of enforcing these requirements, making sure that all parties are fulfilling the expectations demanded by the supply chain.

6.1.7 Funding for a new regulatory framework

We are conscious that these new measures and tools will require additional funding, as would the creation of any statutory regulator. For regulation to be effective, resourcing needs to be sufficient. The government recognises that this will be vital in securing the sustainability of the advertising regulatory framework and is committed to exploring options which work for advertisers, organisations and businesses alike.

For option 2 and option 3, the broad principle would be that a statutory regulator should be funded by the companies it regulates. This approach would require the introduction of a statutory levy, with the costs of regulation applied proportionately to the companies within scope.

Following the conclusion of this consultation, we will consider the most appropriate way of funding the programme. We will work closely with the Advertising Standards Board of Finance (ASBOF) and regulators to understand the costs to industry.

6.2 Next steps

In this consultation we have considered the online advertising ecosystem, particularly the supply chain, the harms that can arise and the suitability of the current regulatory framework in addressing these harms now and in the future. It is our intention to build on previous work gathering evidence and insights on the key issues pervading the online advertising space, and to seek views on how the government may respond. In recognising the innovation, commitment and collaborative efforts of many players in the industry in rising to the challenges presented, we have highlighted areas of good practice and identified issues we think industry has the capability to address.

However, in acknowledging stakeholder calls for more action, we have outlined a range of measures the government could take in order to compel decisive action on key harms. We have set out above a range of options for reforming the regulatory framework, reflecting differing views on how prescriptive the government should be. These options range from the current self-regulatory framework to more interventionist solutions that would involve introducing legislation and a statutory regulator for online advertising.

We want to ensure any action taken as a result of our consultation is proportionate to the problem at hand and recognises that there may be threats to this system in the future. It is our intention to draw on existing expertise and encourage and accelerate efforts that demonstrate capacity to build trust in online advertising, while recognising areas where we need to go further. We therefore ask that stakeholders provide evidence and argument for where and how they see harms being effectively addressed, but are also transparent and open in discussing where firmer measures could aid in combating some of the problems we have identified.

This government is committed to making the UK the safest place for consumers online and the best place to build a business. We believe that it is by working in concert with industry that we can achieve these aims and continue to support innovation in the sector, promoting a flourishing digital economy with online advertising at the heart of it. We look forward to analysing consultation responses and developing a policy which fosters a sustainable, transparent and accountable online advertising ecosystem that improves public trust in advertising and reduces harms for consumers, businesses and society as a whole.

Consultation question 13

To what extent do you agree that the current industry-led self-regulatory regime for online advertising, administered by the ASA, is effective at addressing the range of harms we have identified in section 3.3?

a) Strongly agree
b) Somewhat agree
c) Neither agree nor disagree
d) Somewhat disagree
e) Strongly disagree

Please explain your answer.

Consultation question 14

Do you consider that the range of industry initiatives described in section 4.3 are effective in helping to address the range of harms set out in section 3.3?

a) Yes
b) No
c) Don’t know

Please explain your answer.

Consultation question 15A

Which of the following levels of regulatory oversight do you think is appropriate for advertisers?

a) Continued industry self-regulation with some backstopped areas (status quo)
b) Backstopped regulation for all or some higher risk areas of harm
c) Statutory regulation
d) Other (please specify)

Consultation question 15B

Which of the following levels of regulatory oversight do you think is appropriate for platforms?

a) Industry self-regulation
b) Backstopped regulation for all or some higher risk areas of harm
c) Statutory regulation
d) Other (please specify)

Consultation question 15C

Which of the following levels of regulatory oversight do you think is appropriate for intermediaries?

a) Industry self-regulation
b) Backstopped regulation for all or some higher risk areas of harm
c) Statutory regulation
d) Other (please specify)

Consultation question 15D

Which of the following levels of regulatory oversight do you think is appropriate for publishers?

a) Industry self-regulation
b) Backstopped regulation for all or some higher risk areas of harm
c) Statutory regulation
d) Other (please specify)

Consultation question 16

Following on from your answer to question 14, do you think a mix of different levels of regulatory oversight may be warranted for different actors and/or different types of harm?

a) Yes
b) No
c) Don’t know

Please explain your answer, including outlining your proposed approach.

Consultation question 17

What is your preferred option out of the three permutations described under option 2?

a) Permutation 1
b) Permutation 2
c) Permutation 3

Please explain your answer.

Consultation question 18

For each of the actors, which measures (set out in the tables in section 6.1.3 and section 6.1.4) do you support, and why?

Please explain your answer.

Consultation question 19

Are there any measures that would help achieve the aims we set out, that we have not outlined in the consultation?

Complete list of consultation questions

Consultation questions

1. Do you agree with the categories of online advertising we have included in scope for the purposes of this consultation?

a) Yes
b) No
c) Don’t know

Do you think the scope should be expanded or reduced? Please explain.

2. Do you agree with the market categories of online advertising that we have identified in this consultation?

a) Yes
b) No
c) Don’t know

Do you think the scope should be expanded or reduced? Please explain.

3. Do you agree with the range of actors that we have included in the scope of this consultation?

a) Yes
b) No
c) Don’t know

Do you think the range should be expanded or reduced? Please explain.

4. Do you agree that we have captured the main market dynamics and described the main supply chains to consider?

a) Yes
b) No
c) Don’t know

Please explain your answer.

5. Do you agree that we have described the main recent technological developments in online advertising in section 2.2.2?

a) Yes
b) No
c) Don’t know

Please explain your answer.

6. Do you agree that our taxonomy of harms covers the main types of harm found in online advertising, both in terms of the categories of harm as well as the main actors impacted by those harms?

a) Yes
b) No
c) Don’t know

Please explain your answer, indicating any types of harm, or actors impacted by the harm that we have not captured, as well as any evidence to support your answer.

7. Do you agree that our above description of the harms faced by consumers or society covers the main harms that can be caused or exacerbated by the content of online advertising?

a) Yes
b) No
c) Don’t know

Please explain your answer, including any harms that are not covered in our description. This may include any evidence you can provide on the frequency and severity of the harms, trend data, and/or impacts on protected groups.

8. Do you agree that the above description of the harms faced by consumers or society covers the main harms that can be caused or exacerbated by the placement or targeting of online advertising?

a) Yes
b) No
c) Don’t know

Please explain your answer, including any harms that are not covered in our description. This may include any evidence you can provide on the frequency and severity of the harms, trend data, and/or impacts on protected groups.

9. Do you agree with our description of the range of industry harms that can be caused by online advertising?

a) Yes
b) No
c) Don’t know

Please explain your answer, including any harms that are not covered in our description. This may include any evidence you can provide on the frequency and severity of the harms, or trend data.

10. Do you agree that we have accurately captured the main industry initiatives, consumer tools and campaigns designed to improve transparency and accountability in online advertising?

a) Yes
b) No
c) Don’t know

Please explain your answer, including reference to any further industry initiatives, consumer tools or campaigns that we should be aware of.

11. Should advertising for VoD align more closely with broadcasting standards, or follow the same standards as those that apply online?

a) Broadcasting
b) Online

Please explain your answer.

12. To what extent do you agree with our rationale for intervention, in particular that a lack of transparency and accountability in online advertising are the main drivers of harm found in online advertising content, placement, targeting, and industry harm?

a) Strongly agree
b) Somewhat agree
c) Neither agree nor disagree
d) Somewhat disagree
e) Strongly disagree

Please explain your answer.

13. To what extent do you agree that the current industry-led self-regulatory regime for online advertising, administered by the ASA, is effective at addressing the range of harms we have identified in section 3.3?

a) Strongly agree
b) Somewhat agree
c) Neither agree nor disagree
d) Somewhat disagree
e) Strongly disagree

Please explain your answer.

14. Do you consider that the range of industry initiatives described in section 4.3 are effective in helping to address the range of harms set out in section 3.3?

a) Yes
b) No
c) Don’t know

Please explain your answer.

15A. Which of the following levels of regulatory oversight do you think is appropriate for advertisers?

a) Continued industry self-regulation with some backstopped areas (status quo)
b) Backstopped regulation for all or some higher risk areas of harm
c) Statutory regulation
d) Other (please specify)

15B. Which of the following levels of regulatory oversight do you think is appropriate for platforms?

a) Industry self-regulation
b) Backstopped regulation for all or some higher risk areas of harm
c) Statutory regulation
d) Other (please specify)

15C. Which of the following levels of regulatory oversight do you think is appropriate for intermediaries?

a) Industry self-regulation
b) Backstopped regulation for all or some higher risk areas of harm
c) Statutory regulation
d) Other (please specify)

15D. Which of the following levels of regulatory oversight do you think is appropriate for publishers?

a) Industry self-regulation
b) Backstopped regulation for all or some higher risk areas of harm
c) Statutory regulation
d) Other (please specify)

16. Following on from your answer to question 14, do you think a mix of different levels of regulatory oversight may be warranted for different actors and/or different types of harm?

a) Yes
b) No
c) Don’t know

Please explain your answer, including outlining your proposed approach.

17. What is your preferred option out of the three permutations described under option 2?

a) Permutation 1
b) Permutation 2
c) Permutation 3

Please explain your answer.

18. For each of the actors, which measures (set out in the tables in section 6.1.3 and section 6.1.4) do you support, and why?

Please explain your answer.

19. Are there any measures that would help achieve the aims we set out, that we have not outlined in the consultation?

Impact assessment questions

1. Do you have any further evidence of harms experienced by consumers and/or advertisers?

Please can you include any source and provide information on the frequency and severity of harms, trend data, and/or impacts on protected groups.

2. Can you provide any evidence on the costs and effectiveness of measures your organisation has implemented to prevent the harms outlined?

a) Yes
b) No
c) Don’t know

Please explain your answer and provide relevant evidence.

3. Do you agree with our assumptions on the costs incurred under option 2 and under option 3 (e.g. number of businesses in scope, transition and ongoing costs)?

a) Yes
b) No
c) Don’t know

Please explain your answer and provide relevant evidence.

4. Do you agree with our assumptions on the benefits of the proposed measures (e.g. an annual baseline cost of fraud of £400m and a 5% annual increase in fraud incidents)?

a) Yes
b) No
c) Don’t know

Please explain your answer and provide relevant evidence.

5. Can you provide any further evidence to refine the assessment of costs and benefits under option 2 and 3?

a) Yes
b) No
c) Don’t know

Please explain your answer and provide relevant evidence.

6. Do you agree that the qualitative assessment of the likelihood of harm reduction taking place under option 2 and 3 is a fair assessment?

a) Yes
b) No
c) Don’t know

Please explain your answer.

7. Which mitigation measures should be considered to support small businesses and to ensure that they will not be affected disproportionately by the new regulatory measures?

Please explain your answer.

8. Can you provide details of any monitoring system already in place that records the adverts delivered to internet users?

How to respond to the consultation

We welcome your views. To help us analyse the responses, please use the online consultation system wherever possible.

The closing date for responses is 8 June 2022.

If you have any difficulty submitting your response to the consultation through the survey, please email online-advertising-consultation@dcms.gov.uk.

Hard copy responses can be sent to:

Advertising team
Department for Digital, Culture, Media & Sport
4th Floor - area 4E/01
100 Parliament Street
London
SW1A 2BQ

About you

When you send your response, please select from the following list the term that most accurately describes the capacity in which you are responding to this consultation.

  1. Private individual
  2. Advertiser (including brands)
  3. Advertising creative agency
  4. Media agency
  5. Platform (including Video Sharing Platforms)
  6. Intermediary
  7. News publisher
  8. Other publisher
  9. Regulator
  10. Consumer group
  11. Civil society organisation / not for profit organisation
  12. Broadcaster
  13. VOD service provider
  14. Academic
  15. Health
  16. Other

Annex A: Categories of online advertising

Open display advertising

This is a type of online advertising which utilises text, images and URLs that link to the advertiser’s website; the user can click on the advert and be taken to the product page. The content of these ads can be static, animated, or audio-visual. An example of these would be banner ads or leaderboards. Open display advertising can appear across any website or app that sells its advertising inventory in this way, for example news publishers’ sites and ecommerce sites.

Social media display advertising

Social media advertising is a form of online advertising which displays adverts to the target audience through social media platforms such as Snapchat, Instagram, Facebook etc. Social media distinguishes itself from other types of media as it can be highly targeted and is able to use technology to serve (or re-serve) adverts to niches of consumers in real time. Within the umbrella term of social media advertising, the advertiser can use native types of display advertising such as banner ads, widgets, takeovers or influencers in order to communicate their brand's messaging.

Example:

Example of social media display advertising

It is estimated that up to 80% of online advertising expenditure is generated by Meta, which owns a number of social media platforms such as Facebook and Instagram. YouTube has the second-highest share of display advertising and is owned by Google. Facebook and Google are a powerful duopoly in the advertising ecosystem, with the majority of supply chains involving their services as a facilitator for disseminating advertising content.

The Competition and Markets Authority’s (CMA’s) 2020 report into online platforms and digital advertising, which assessed competition in the ecosystem, concluded that regulation was necessary to address the harmful effects and behaviours caused by platform dominance in the online advertising market. The Government is considering these recommendations as part of its work on competition in digital markets. The CMA is also currently investigating Facebook’s use of ad data.

Paid-for search advertising

Online paid search advertising is a technique used by advertisers to display ads in search engine results. It means that the ads displayed are able to mirror the searched item. This type of supply chain is sold predominantly by two leading search engines: Google and Bing. These services are typically bought through agencies or a ‘self-serve’ ad platform within ‘walled-garden’ business models; they host an end-to-end service which the consumer can interact with and all the data, payment and statistics can be found in one place. Search advertising is the largest category of digital advertising in the UK, with the CMA estimating total ad spend of £7.3 billion in 2019.

Example of paid-for search advertising

Classified advertising

Classified ads are typically associated with traditional media formats, such as being printed in a newspaper or magazine. They were typically small, notice-like ads which would be sorted by product categories. Over time, classified adverts have morphed into online versions of a classified listing. These give the advertiser a chance to post their product or service and are less costly than a larger display campaign, meaning they have low barriers to entry and can be used by a large number of people. Examples of this may be places such as Facebook Marketplace or Gumtree.

Example:

Example of classified advertising

Content marketing

Content marketing is used here to denote the overlapping terms of native advertising, sponsored content and influencer marketing. These advertising methods are often embedded into either user-generated or editorial content, and highlighted to consumers with signposting such as ‘sponsored by’, ‘presented by’ or ‘in partnership with’. Put simply, content marketing is paid-for advertising content which intentionally resembles that of the editorial content of the publisher or influencer.

This type of advertising is sometimes quite hard for consumers to spot and relies on the influencer, publisher or platform to declare that it is paid-for content. There is no single form of content marketing and it can be designed in myriad ways. The growing popularity of content marketing conveys a strategic shift as many brands consider more subtle ways of selling their products and services. These efforts involve creating content or promoting experiences that consumers enjoy, while simultaneously conveying a brand message.

With the rise of social media, brands have become increasingly focused on using ‘influencers’ (those with a large social media following) as key parts of their advertising campaigns. Influencer marketing often provides an online equivalent of ‘advertorial’ content whereby the user is able to view the content of an ad in the context of their feed / timeline. Globally, the influencer market is worth $13.8 billion with Instagram being the leading platform for influencers.

Influencers are often able to reach a large and targeted audience in a subtle, natural and effective way. However, as the influencer trend arose, a regulatory gap developed and consumers felt they were being misled by their trusted content providers. The ASA has since created toolkits and guidelines for influencers to ensure they are being transparent with their followers. Because the breaches likely to occur with influencer marketing typically constitute ‘misleading advertising’, the ASA is backstopped in this area by Trading Standards under the Consumer Protection from Unfair Trading Regulations 2008 (CPRs).

The scope of the forthcoming Online Safety Bill (OSB) includes advertising content which takes the form of organic, user-generated content (i.e. promotional content posted by a company on its social media page, or influencer content promoting products and services), as such content appears to users and is treated by services in the same way as any other user-generated content. Traditional forms of influencer marketing will therefore be in scope of the OSB. Companies in scope of the OSB will need to apply their systems and processes to protect users from harm to this type of content. This includes measures to tackle illegal content and to protect children. The largest companies will also need to ensure that their terms of service regarding legal but harmful content are consistently applied. Influencer marketing, where it is paid for (directly or in-kind), is also in scope of the Online Advertising Programme.

Annex B: Glossary of definitions

Actors in the online advertising ecosystem

  • Advertisers (brands)[footnote 30]
    • Individuals, businesses, organisations which direct the content of a message within an online advertisement, directly or indirectly, in order to influence choice, opinion, or behaviour. Typically advertisers work alongside media buying agencies or creative agencies to develop and shape their message in order to produce the intended outcome (e.g. greater engagement / sales).
  • Intermediaries (third party agents)[footnote 31]
    • Businesses and/or services which connect buyers and sellers (e.g. through programmatic trading), facilitate transactions, and leverage data to provide buyers with targeting options for online advertising.
  • Consumers / general public
    • Consumers are those who currently consume media on online platforms. We also consider those who do not currently do so, but could in the future, as potential consumers, alongside the wider general public.
  • Online platforms[footnote 32]
    • Ad-funded platforms seek to attract consumers by offering their core services for free. To promote their ad services they combine the attention of their consumers with contextual or personal information/data they have collected to serve advertising. Such platforms include but are not limited to search engines and social media sites.
  • Regulators[footnote 33]
    • Organisations exercising a regulatory function and setting boundaries within the advertising ecosystem. A regulatory function here means a function, under any enactment, of imposing requirements, restrictions or conditions, or setting standards or giving guidance, in relation to any activity; or one which relates to securing compliance with, or the enforcement of, requirements, restrictions, conditions, standards or guidance which under or by virtue of any enactment relate to any activity.
  • Media owners[footnote 34]
    • Owner/s of online content or application/s within which advertising can be displayed.
  • Advertising agencies[footnote 35]
    • Establishments primarily engaged in preparing advertising (writing copy, artwork, graphics, and other creative work) and placing such advertising in periodicals, newspapers, radio and television, or other advertising media for clients on a contract or fee basis. Some advertising agencies are vertically integrated and have their own proprietary ad tech.
  • Ad tech[footnote 36]
    • Used to refer to all ad tech intermediaries, including DSPs, SSPs and ad servers.
  • Ad servers[footnote 37]
    • Publisher ad servers manage the publisher’s inventory and provide the decision logic underlying the final choice of which ad to serve. Advertiser ad servers are used by advertisers and media agencies to store the ads, deliver them to publishers, keep track of this activity and assess the impact of their campaigns by tracking conversions.
  • Publishers and platforms
    • Attract audiences and provide opportunities for advertising placement. NB: The Government acknowledges that the term ‘online platform’ encompasses a range of services including social media, video sharing platforms (VSPs), creative content outlets, marketplaces and search engines.

Digital advertising markets

  • Search advertising
    • Where advertisers pay online companies to link their company website to a specific search word or phrase so that it appears in relevant search engine results.
  • Display advertising
    • Where advertisers pay online companies to display advertising using a range of advertising content types shown within defined ad units on web pages or mobile apps.
  • Native advertising
    • Sponsored listings that are camouflaged into the feed: this includes in-feed, search ads, recommendation widgets, and promoted listings. Native advertising is based on audience online habits and history and will show ads that may be relevant to them.
  • Social advertising
    • Social media platforms allow advertisers to target a specific audience, based on information the platform knows about its users (age, location, interests etc.), allowing users to engage with the content by “liking” or commenting on paid posts. This includes influencer and sponsored posts.
  • Video ads
    • Video ads can show up as a commercial-esque ad before watching other videos on YouTube or Facebook, or they can be integrated as native or display ads.
  • Classified advertising
    • Classified advertising involves advertisers paying online companies to list specific products or services on a specialised website serving a particular vertical market. Payments for classified advertising will typically consist of listing fees or commissions.
  • Programmatic advertising
    • Programmatic advertising is the process of automatically buying and selling digital advertising space. Data is used to decide which ads to buy and how much to pay for them. Programmatic advertising provides an opportunity for businesses and other organisations to target their marketing messages to particular audiences on the basis of detailed consumer profiles.
  • Open display market
    • The open display market provides an alternative channel for buying display advertising inventory alongside the ‘owned-and-operated’ platforms. In contrast to the walled gardens of Google, Facebook and other platforms, in the open display market a wide variety of publishers, such as newspapers and other content providers, compete against each other in real time to sell inventory to advertisers.
  • Demand-side platforms (DSPs)[footnote 38]
    • Provide a platform that allows advertisers and media agencies to buy advertising inventory from many sources. DSPs bid on impressions based on the buyer’s objectives and on data about the final user.
  • Supply-side platforms (SSPs)[footnote 39]
    • Provide the technology to automate the sale of digital inventory. They allow real-time auctions by connecting to multiple DSPs, collecting bids from them and performing the function of exchanges. They can also facilitate more direct deals between publishers and advertisers.
  • Walled garden
    • A walled garden is a closed ecosystem, operated by people within the ecosystem, without the involvement of an outside organisation. It is an organisation that keeps its technology, information, and user data to itself.
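To make the programmatic trading terms above concrete, the following is an illustrative sketch (not part of the consultation, and all names such as `Bid` and `run_auction` are hypothetical) of the kind of real-time auction an SSP runs when it collects bids from multiple DSPs for a single impression. It assumes a second-price design, one common auction format; real exchanges vary in their rules.

```python
# Hypothetical sketch of an SSP-style real-time auction over DSP bids.
# A second-price auction is assumed: the highest bidder wins the
# impression but pays the second-highest submitted price.
from dataclasses import dataclass

@dataclass
class Bid:
    dsp: str       # demand-side platform submitting the bid
    price: float   # bid price for the impression (e.g. CPM in GBP)

def run_auction(bids):
    """Return (winning DSP, clearing price), or None if no bids arrived."""
    if not bids:
        return None
    ranked = sorted(bids, key=lambda b: b.price, reverse=True)
    winner = ranked[0]
    # Winner pays the runner-up's price (or its own if it bid alone).
    clearing_price = ranked[1].price if len(ranked) > 1 else winner.price
    return winner.dsp, clearing_price

bids = [Bid("dsp-a", 2.10), Bid("dsp-b", 3.40), Bid("dsp-c", 2.75)]
print(run_auction(bids))  # ('dsp-b', 2.75)
```

In this simplified picture, the targeting data described in the glossary would inform each DSP's `price` before the bid reaches the SSP; the SSP's role is limited to collecting bids and clearing the auction.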

Annex C: Illustrative examples

Below are some examples aimed at illustrating how harms manifest themselves and can be perpetuated in the case of two different but high-profile segments of online advertising. The first explores illegal harms with a spotlight on the way in which advertising can be used to perpetrate frauds. The second explores the way some legal adverts can be harmful through a spotlight on adverts that can promote harmful attitudes to body image. Below these we suggest how our policy options would apply to, and improve management of, harms.

For the purpose of this consultation, we have developed illustrative examples focusing on two of the core issues we have committed to tackling through the OAP: fraudulent advertising and harms associated with body image. Ads that help to perpetrate fraud have serious consequences for the consumer, and ads that exploit vulnerabilities related to body image anxieties can likewise contribute to harm. Both are central to the narrative around the internet being harmful due to the financial and potential psychological damage that they can respectively inflict. Below we have outlined how the above options for consideration would apply to each of the examples.

Illustrative example 1: fraud (illegal harm)

Background

Fraud is the number one harm identified in the OAP Call for Evidence responses. It is also the most common crime type,[footnote 40] accounting for approximately 39% of estimated crime.[footnote 41] The scale of this crime is significant, with an estimated 5 million fraud offences in the year ending June 2021,[footnote 42] which equates to approximately 8.5% of adults being a victim of fraud. 53% of fraud incidents estimated by the Crime Survey for England & Wales were thought to be online-related.[footnote 43]

Advertisements for ‘goods’ were the most likely to be reported, totalling just under 35% of the dip sample. Advertisements appeared on a variety of websites with 49% appearing on social media sites (these included Facebook, Instagram, Twitter, Snapchat, and LinkedIn).[footnote 44]

Increasing evidence from the Financial Conduct Authority (FCA) also demonstrates that the risk of fraud through online advertising is increasing each year. FCA analysis of scam reports to the ScamSmart website suggests that the two most common ways which consumers come across potential scams are ‘online’ and ‘advert on social media’.

Government action (to date)

The Home Office is the lead government department responsible for tackling fraud. Within that overall framework and strategy, our aim as part of the Online Advertising Programme is to develop a coherent approach to online advertising that supports a healthy advertising ecosystem and protects consumers.

The government will also be releasing its Fraud Strategy this year. This will include the government working with industry, the intelligence services, law enforcement, and all partners to tackle fraud, including that which is enabled and perpetuated through online advertising.

Online Safety Bill (OSB)

The forthcoming OSB will tackle fraudulent paid-for advertising through a standalone duty. The duty will require high reach and high-risk services that are already in scope, as well as the largest search engines, to put in place proportionate systems and processes to prevent (or “minimise”, in the case of search services) individuals from encountering fraudulent advertising on their service. Additionally, all companies in scope of the OSB will need to take action to tackle fraud, where it is facilitated through user-generated content or via search results.

The OAP will build on this, while continuing to look at the whole ecosystem in order to capture all the players in the supply chain that have the power and capability to do more. In relation to fraud, the programme will focus on the role of intermediaries in onboarding criminal advertisers and facilitating the dissemination of fraudulent content through using the targeting tools available in the open display market. This will ensure that we close down vulnerabilities and add defences across the supply chain, leaving no space for criminals to profit.

Current regulatory system

Through its Scam Ad Alert system, the ASA seeks to play its part in disrupting ads that form part of fraudulent activity. The ASA is not a law enforcement body, and therefore it does not process such ads through its normal regulatory functions. It works closely with the Financial Conduct Authority (FCA) to ensure that the most appropriate body is addressing complaints and taking action.

The FCA sets rules that authorised financial services firms must comply with when communicating or approving financial promotions, such as the requirement that they must be fair, clear and not misleading.

There is currently no requirement on demand-side platforms (DSPs) or supply-side platforms (SSPs), intermediaries or platforms to verify advertisers or to run automated checks on adverts to spot potential fraud. When the ASA finds a case of fraudulent advertising, such as scam ads, it shares that information with the social media platforms in order to prevent these ads reappearing.

Recognising the significant threat of fraudulent advertising, large social media platforms are taking steps to ensure fraudulent advertising does not appear on their platforms. For example, Google stated that in 2020, it took down 3.1 billion bad ads globally, including over 123 million ads for violating its financial services policies. More recently, for ads shown to UK users who are, or appear to be, seeking financial services, it has implemented a policy requiring advertisers to be verified by Google. Given that Google is by far the largest platform, with more than a 90% share of the £7.3 billion UK search advertising market, exposure to this incredibly high volume of malicious content places UK consumers at risk. The scale of undetected bad ads across the online advertising ecosystem remains a major hidden threat.

Applying the policy options

Option 1: Self-regulatory approach

A voluntary approach to fraud would provide industry with the space to find an innovative solution, relevant to each business model in the advertising industry. Industry has adopted various voluntary measures to proactively tackle the issue of fraud against consumers. Measures include verification, automated checks and reporting mechanisms. However, feedback from stakeholder conversations suggests that these measures are implemented unevenly across industry. Challenges include unclear and burdensome reporting processes, the lack of data-sharing between industry partners on known criminal advertisers, and the lack of checks on advertisers pre- and post-onboarding.

A voluntary approach may consist of an industry-led code of practice or charter setting out the threat from fraud, how industry is currently tackling the threat, and next steps. The code may be overseen by a relevant body, such as the ASA, TAG, or a new industry-led entity with a focus on protecting consumers. A voluntary approach may provide a vehicle for collaborative working, however, given the changing nature of the threat and the lack of a clear regulatory regime in place, it is unlikely that all businesses will step up. Government does not consider this approach to be sufficient to tackle the ever-growing issue of online advertising fraud.

Option 2: Statutory backstop

A hybrid or co-regulatory arrangement, similar to the High in Fat, Salt or Sugar (HFSS) regulatory model, would go a step further by providing the tools needed to enforce proactive action, and to encourage compliance with voluntary commitments. An industry-led code, setting out how industry will proactively tackle fraud and protect consumers, combined with a statutory backstop holding ad tech intermediaries and platforms to account, would function as a halfway house.

The ASA may not be the appropriate body to monitor the implementation of this code, given that the ASA is not a statutory body with the tools needed to deal with crime.

The co-regulatory model would give a regulator the power to fine businesses for not complying with the code/charter, providing clearer incentives for compliance. However, a co-regulatory approach further fragments the regulation of online advertising and may still not incentivise compliance sufficiently, as it would be up to businesses to decide whether or not to sign up. Uneven adoption would create “safe” and “unsafe” businesses and present a potentially confusing landscape for consumers.

Option 3: Full statutory approach

A full statutory approach, where industry is held to account by a statutory regulator with a set of specific duties to abide by, provides a structured framework for tackling fraud across the sector. Measures would be designed to increase accountability and transparency, closing the gap between the regulation of organic advertising content and paid-for content. Duties may include identifying advertising measurement and verification standards, requiring information sharing on serious or repeat offenders and minimum standards for technology for intermediaries and platforms. In addition, there may be broader duties to prevent the dissemination of fraudulent content, such as standardised formats for record keeping. This approach would create a baseline for the sector to adopt, and would provide a streamlined regulatory framework, with guidance for industry and consumers.

We are keen to hear from respondents on which of these options - voluntary, statutory backstop or full statutory - they consider the most appropriate form of regulation to combat fraudulent advertising online, and whether these would be sufficient to address emerging fraudulent advertising trends.

Background

Across government, we are committed to addressing some of the challenges around body image. Whilst there is limited quantitative evidence on how advertising affects body image, there is evidence demonstrating the correlation between the amount of time spent online scrolling through photo-focused apps (like Instagram, which allows for the uploading and sharing of mainly non-advertising content, but also advertising content) and negative feelings associated with body image. There is a plethora of anecdotal evidence from consumers and charities about how advertising can contribute to body image concerns.

For the purpose of this consultation we define body image as how individuals feel about their appearance, including how they feel about their body (height, weight and shape).[footnote 45] The issue of body image is incredibly complex and there is no single factor that determines whether an individual has poor body image. However, advertising may contribute to poor body image among individuals across the UK. There is evidence[footnote 46] which demonstrates a link between poor body image and poor mental health, which can develop into a range of reactions, from anxiety and self-disgust to depression. A YouGov poll found that 57% of people felt anxious because of their body image.

On the more harmful end of the spectrum, poor body image can lead to eating disorders and suicidal thoughts, with 13% of adults reporting having experienced suicidal thoughts because of poor body image. Advertising faces criticism for perpetuating this issue, as members of the public are generally unable to connect with the images presented to them. A recent body image survey carried out by the Women and Equalities Committee found that 57% of adults reported ‘rarely’ or ‘never’ seeing themselves or people who look like them regularly reflected in images in media and advertising.

Current regulatory system

The ASA has a number of specific and general rules in the CAP Code that seek directly or indirectly to mitigate the potential harms arising from negative body image. The ASA also maintains guidance that encapsulates the lessons from relevant ASA rulings related to body image concerns (e.g. an ad that suggests it is desirable to be unhealthily thin, or that boys and girls can get acceptance from looking a certain way).

The guidance further mitigates the possibility of ads having a negative impact on body image and cautions that particular care should be taken if an ad is likely to appeal to young people. Ads which pressure the audience to conform to an idealised gender-stereotypical body shape or physical features are likely to breach the advertising rules.

When it receives breaches or complaints on advertising relating to body image, the ASA is able to review the advert’s content to make a judgement. It may be possible for the content of an advert (or its context) to be considered misleading, harmful or offensive. This can be further compounded when the issue is placed in front of an audience that has characteristics that make them vulnerable to the product or service being advertised or the creative content used.

The current advertising rules outlined in both the CAP and BCAP Codes do not mandate diversity (including representation of different body types). The decision on who appears in ads rests with advertisers, who are starting to make changes in this area to build trust amongst their consumers; research shows that 38% of consumers said they would be more likely to trust a brand that shows diversity.

A challenge in relation to body image is the role platforms might play in using their algorithms to unintentionally spread harmful content. For example, there are concerns around TikTok and its effect on the relationship between young women and their bodies. Research showed that videos were being repeatedly pushed to individuals who had expressed an interest in exercise and healthy eating habits. However, this may reinforce unhealthy habits or encourage harmful behaviour - the intent behind a user’s search for this content is not something the platform can reasonably be expected to know.

In October 2021, the ASA launched a call for evidence examining the standards outlined in the BCAP and CAP Codes in relation to body image. The aim of the call for evidence is to ensure the Codes, and the ASA’s interpretation of them, continue to have regard to the latest evidence, and are reflective of developments in body image.

Applying the policy options

As policy on body image develops, we will also work closely with the ASA to ensure we supplement its work in updating its advertising Codes. The government will aim to support any appropriate interventions the ASA wishes to make, provided they are based on evidence and are proportionate to the problem at hand. The ASA and the self- and co-regulatory systems are necessarily independent from the government.

Option 1: Self-regulatory approach

Under this approach, there would be no substantive change to the way adverts relating to body image are regulated. The Codes may be updated, subject to the conclusions of the call for evidence.

The ASA’s Online Platform and Network Standards (OPNS) pilot may place some responsibility on intermediaries and platforms to streamline their systems and processes to protect consumers from harm, to the extent that this meets its core objective. That objective is to explore extending the ASA’s online advertising regulatory framework, with the aim of putting on a formal footing, and bringing consistency to, the ways in which these companies work with the ASA to promote advertisers’ awareness of, and help secure advertisers’ compliance with, the CAP Code rules online.

For the actors in the supply chain this would mean that when there was a breach:

  • The advertiser would be subject to the ASA’s rulings on whether the ad complies with its content and placement standards. If they were found to have breached these Codes, the advertiser would be asked to amend or remove the advert, and they may be ‘named and shamed’.
  • For the platforms and intermediaries, if a user complained about an ad which is harmful in the context of body image, they would be subject to review mechanisms. If they were found not to have taken reasonable steps to prevent harm, they would be subject to a proportionate sanction.

Option 2: Statutory backstop

The frontline regulation would be the same as under option 1 (above); however, there would be a statutory backstop which could enforce harsher sanctions in the case of non-compliance or a serious breach. This would be a new enforcement mechanism which would place greater responsibility, and accountability, onto the intermediaries and platforms.

Option 3: Full statutory approach

Under this option, a statutory regulator would hold and use code-writing powers, and carry out both regulation and enforcement. Intermediaries and platforms would also be held to account by a statutory regulator, which can enforce mechanisms which mandate transparency and hold the content host to full account.

For this consultation, we would be interested to hear views on which approach the industry feels would be proportionate in regulating this space. We want to ensure that the duties we are placing on parties within the supply chain are applicable across the board and do not unfairly disadvantage smaller firms that do not have the resources or capital.

  1. Derived from CMA market assessment, CMA, Online platforms and digital advertising, 2020. 

  2. See Annex 3 of the Government response to the consultation on introducing further advertising restrictions on TV and online for products high in fat, salt and sugar

  3. Clause 186 of the Draft Online Safety Bill

  4. Derived from NZ ASA definition, New Zealand Advertising Standards Authority, (2016) Updated Definition of Advertising and Advertisement

  5. Derived from Plum Report. Plum, (2020) Mapping online advertising issues, and industry and regulatory initiatives, pp. 17 & 48. 

  6. The advertising industry also includes further participants involved in the provision and management of data and in advertising analytics - such as data suppliers, data management platforms and measurement and verification providers - that we do not consider as being intermediaries for the purposes of this consultation. 

  7. Derived from CMA market assessment, CMA, Online platforms and digital advertising, 2020 

  8. CMA, Online platforms and digital advertising, 2020, pp. 263-264 

  9. CMA, Online platforms and digital advertising, 2020, pp. 263-264 

  10. CMA, Online platforms and digital advertising, 2020, p. 264 

  11. Derived from CMA definition, CMA, Online platforms and digital advertising, 2020, p. 43 

  12. The government acknowledges that the term ‘online platform’ encompasses a range of services including social media, video sharing platforms (VSPs), creative content outlets, marketplaces and search engines. 

  13. Advertising Association, Powering Up UK Advertising Exports, n.d. 

  14. Ofcom, 2021, Online Nation report 2021 

  15. Competition and Markets Authority (2020), ‘Online Platforms and Digital Advertising’ Market Study - Appendix H: Default Positions in Search 

  16. CMA Mobile ecosystems market study interim report, 2022 

  17. Conscious Ad Network, n.d., Our Mission 

  18. Conscious Ad Network, 2021, Mis/Disinformation 

  19. ASA, n.d., 18 Alcohol 

  20. The ASA’s responsibility for day to day enforcement under the BCAP Code is established under the co-regulatory arrangements between Ofcom and ASA, under which Ofcom contracted out a certain number of its functions pursuant to the Contracting Out (Functions relating to Broadcast Advertising) and Specification of Relevant Functions Order 2004 (SI 1975/2004), made under the Deregulation and Contracting Out Act 1994. 

  21. ASA (2022), Legislation relevant to our functions 

  22. ASA, 2021, Innovate to regulate: policing ads online 

  23. The exception here is where Ofcom has retained (i.e. not designated) all statutory enforcement functions (e.g. for ODPS and VSP advertising). 

  24. For on-demand, both the advertiser and the service provider can be held responsible under the co-regulatory regime. 

  25. ASA, 2021, Harnessing new technology to tackle irresponsible gambling ads targeted at children

  26. ICO, 2018, Direct marketing guidance

  27. Plum (2020), ‘Mapping online advertising issues, and industry and regulatory initiatives’, figure 3 

  28. For example, the National Cyber Security Centre’s recent initiative: https://www.ncsc.gov.uk/blog-post/ncsc-for-startups-taking-on-malvertising 

  29. The duty will require high-reach and high-risk services that are already in scope of the Online Safety Bill, as well as the largest search engines, to put in place proportionate systems and processes to prevent (or “minimise”, in the case of search services) individuals from encountering fraudulent advertising on their service. Additionally, all platforms in scope of the OSB will need to take action to tackle fraud, where it is facilitated through user-generated content or via search results. 

  30. Derived from NZ ASA definition, New Zealand Advertising Standards Authority, 2016, Updated Definition of Advertising and Advertisement 

  31. Derived from Plum Report definition. Plum, 2020, Mapping online advertising issues, and industry and regulatory initiatives, pp. 17 & 48 

  32. Derived from CMA definition. CMA, Online platforms and digital advertising, 2020, p. 43 

  33. Derived from Legislative and Regulatory Reform Act 2006, p.12 

  34. Derived from Plum Report definition. Plum Consulting, 2019, Online advertising in the UK, p. 25 

  35. Derived from the description of the Standard Industrial Code 73.11, pp. 204-205 

  36. Derived from CMA usage. CMA, Online platforms and digital advertising, 2020 

  37. CMA, Online platforms and digital advertising, 2020, pp. 263-264 

  38. CMA, Online platforms and digital advertising, 2020, pp. 263-264 

  39. CMA, Online platforms and digital advertising, 2020, p. 264 

  40. In England and Wales 

  41. Office for National Statistics (ONS): The Telephone-operated Crime Survey for England and Wales (TCSEW), year ending March 2021. 

  42. Office for National Statistics (ONS): The Telephone-operated Crime Survey for England and Wales (TCSEW), year ending June 2021 

  43. ONS, CSEW, year ending March 2020 

  44. Source: Fraud Enabled by Online Adverts 2020/21 – Dip Sample report. These figures were based on key word searches across the Action Fraud databases, and a small dip sample of the resulting reports 

  45. NEDA, n.d., BODY IMAGE 

  46. Mond et al., 2013, Quality of life impairment associated with body dissatisfaction in a general population sample of women; Mond et al., 2010, Obesity, body dissatisfaction and emotional well-being in early and late adolescence: findings from the Project EAT study; E. Martin-Rodriguez, F. Guillen-Grima, E. Aubá, A. Martí, A. Brugos-Larumbe, 2016, Relationship between body mass index and depression in women: A 7-year prospective cohort study; Swami V, Weis L, Barron D, Furnham A., 2018, Positive body image is positively associated with hedonic (emotional) and eudaimonic (psychological and social) well-being in British adults