Policy paper

Centre for Data Ethics and Innovation: Review of online targeting

Published 7 May 2019

1. Terms of Reference: Online targeting

Data-driven technologies and the internet provide the tools to target content, products and services to different individuals in highly sophisticated ways, at massive scale and relatively low cost, and to monitor outcomes and iterate and improve on targeting approaches in real time. These tools - and the breadth and depth of data commonly held about people and their online behaviours - distinguish online targeting from the broader, less precise targeting of populations that has been used for years.

The purpose of this review is to analyse the use of online targeting approaches and to make practical recommendations to Government, industry and civil society for how online targeting can be conducted and governed in a way that facilitates the benefits and minimises the risks it presents.

Our definition of online targeting centres on the customisation of products and services online (including content, service standards and prices) based on data about individual users. Instances of online targeting include online advertising and personalised social media feeds and recommendations.

1.1 What is the issue?

Online targeting approaches are used across different sectors to help people navigate the web and to provide them with relevant and engaging content on a personalised basis. However, they can also pose risks.

The way targeting works is complex, and levels of public understanding and scrutiny of targeting practices are generally low. Targeting relies on complex and largely opaque flows of data that are difficult for individuals to understand, let alone control, and that risk undermining data protection and privacy rights. And if this data is concentrated in certain organisations, it could have an adverse impact on competition in digital markets.

Targeting affects the information people see and the choices they are given at a time when our lives are moving increasingly online. And it often aims to influence people’s behaviours - encouraging them to click through to targeted content, watch recommended videos, purchase targeted products, and so on. This influence can be highly compelling, and in some cases might cross the line between legitimate persuasion and illegitimate manipulation.

This risk appears particularly concerning in the case of vulnerable people - such as children, people with addictions, people with poor mental health, people experiencing more temporary vulnerabilities such as the recently bereaved, and people with vulnerabilities potentially unique to the online space.

Furthermore, the targeting of disinformation may affect people’s ability to discern the trustworthiness of news and advertising content, and in turn their ability to make well-informed decisions as citizens and consumers.

1.2 What will our focus be?

In this review, we are focusing on the impacts of online targeting on individuals, organisations, and society. In particular, we are considering how targeting approaches can undermine or reinforce autonomy - that is, our ability to make choices freely and based on information that is as full and complete as reasonably possible. We are also considering whether the impacts of online targeting practices might be experienced more profoundly by vulnerable people, and whether targeting might contribute to a reduction in the reliability of the news and advertising content we see online. Finally, we are looking at the data and mechanisms involved in online targeting and how these affect privacy and data protection principles.

In doing so, we are aiming to complement ongoing work delivered by others, such as the Department for Digital, Culture, Media and Sport’s workstreams on online harms and online advertising, and the Information Commissioner’s work on advertising technology, online political advertising, and age-appropriate design for online services.

We aim to explore the benefits and harms of online targeting, the trade-offs between the two, and how people might experience them differently. We will look to identify opportunities for improved governance and emerging technological developments to facilitate the benefits and minimise the risks presented by online targeting approaches. This includes looking at the potential for online targeting approaches - with the appropriate safeguards - to be used for beneficial purposes, such as supporting people’s decision making.

1.3 How will we work?

The review will be delivered in two stages. In the first stage, we will develop an analysis of “gaps” in the governance of online targeting by collating research on current targeting practices, the benefits and harms caused to individuals, organisations and society by online targeting, and the current regulatory and legal position, including self-regulatory and voluntary practices. We will also conduct an extensive public dialogue exercise to understand people’s attitudes towards online targeting practices and the impacts of these on them and their communities. Through this work, we will explore what principles members of the public would like to see incorporated into potential solutions, and identify gaps between public expectations of governance and what current governance regimes actually entail.

In the second stage, we will develop and recommend governance and, where relevant, other types of solutions, focused on areas and audiences identified during stage one.

We plan to engage with stakeholders from industry, civil society, academia, and regulators and government throughout.

1.4 What types of outputs will we produce?

We are aiming to produce outputs throughout the review. These may include: results of public engagement around online targeting practices; analysis of the strengths and weaknesses of governance frameworks that regulate the use of online targeting approaches; and recommendations to government, regulators, and industry for improved governance of online targeting practices in certain sectors.

1.5 What are our timelines?

An interim report will be published by Summer 2019, and a final report, including recommendations to government and others, by December 2019.

2. Open call for evidence

As part of our review into online targeting, we are inviting submissions via an open call for evidence on online targeting practices and their impacts on individuals, organisations, and society.

We are particularly interested in hearing from a broad range of stakeholders working in or specialising in online targeting. These include, but are not limited to: online platforms and technology companies; start-ups, providers, developers and purchasers of targeting and personalisation solutions; digital marketing professionals; ad-tech companies; data analytics providers and consultants; data brokers; civil society organisations focused on consumer rights, digital rights, privacy and data protection, and vulnerable people; academics, research and policy organisations; regulators and government departments; international standards, regulation, and governance bodies.

We also encourage members of the public who may not fit into these categories, but who have been personally affected by online targeting, to get in touch.

3. Questions

We are particularly interested in considering the following questions:

1. What evidence is there about the harms and benefits of online targeting?

  • What evidence is there concerning the impact - both positive and negative - of online targeting on individuals, organisations, and society? In particular:

    • Its impact on our autonomy
    • Its impact on vulnerable or potentially vulnerable people
    • Its impact on our ability to discern the trustworthiness of news and advertising content online
    • Its impact on privacy and data protection rights, through the collection, processing and sharing of data that underpins online targeting
  • What opportunities are there for targeting and personalisation to further benefit individuals, organisations, and society?

2. How do organisations carry out online targeting in practice?

  • What do organisations use online targeting for? What are the intended outcomes?
  • What are the key technical aspects of online targeting (what data is used; how it is collected, processed, and/or shared; what customisation is carried out; on what media; and how this is monitored, evaluated, and iterated on)?
  • How is this work governed within organisations (how are risks, including unintended consequences, monitored and mitigated; who is accountable; and how are complaints dealt with)?

3. Should online targeting be regulated, and if so, how should this be done in a way that maximises the benefits and minimises the risks targeting presents?

  • What is the current legal and regulatory environment around online targeting in the UK? How effective is it?
  • How significant are the burdens placed on organisations by this environment?
  • Are there laws and regulations designed for the “analogue” world that should be applied to online targeting in the UK?
  • Are there any international examples of regulation and legislation of online targeting that we can learn from?

4. How is online targeting evolving over time, what are the likely future developments in online targeting, and do these present novel issues?

  • What emerging technologies might change the way that online targeting is carried out? Might these present novel issues?
  • How might existing and emerging governance regimes (such as the General Data Protection Regulation, European e-Privacy and e-Commerce Directives, and potential Online Harms legislation) impact online targeting practices?
  • Are there examples of types of online targeting and personalisation (that might have either negative or positive impacts) that are currently possible but not taking place? If so, why are they not taking place?

We welcome written submissions of 2,000 words or fewer, in Word format. Please email them to policy@cdei.gov.uk.

The deadline for responses is 14 June 2019.

We will publish a summary of responses over the Summer.

In your response, please clarify:

  • If you are responding on behalf of an organisation or in a personal capacity
  • Whether you are responding to the call for evidence for the algorithmic bias review, the online targeting review, or both
  • Which questions you are answering, by referring to our numbering system. There is no need to respond to all of the questions if they are not all relevant to you
  • Whether you are willing to be contacted (in which case, please provide contact details)
  • Whether you want your response to remain confidential for commercial or other reasons. If you prefer to engage in person, please specify this; we will try our best, resources allowing, to find opportunities to do so.

Information provided in response to this call for evidence, including personal information, may be published or disclosed in accordance with the access to information regimes (these are primarily the Freedom of Information Act 2000 (FOIA), the General Data Protection Regulation (GDPR), and the Environmental Information Regulations 2004).

If you want the information that you provide to be treated as confidential, please be aware that, under the FOIA, there is a statutory Code of Practice with which public authorities must comply and which deals, amongst other things, with obligations of confidence. In view of this, it would be helpful if you could explain to us why you regard the information you have provided as confidential.

If we receive a request for disclosure of the information we will take full account of your explanation, but we cannot give an assurance that confidentiality can be maintained in all circumstances. An automatic confidentiality disclaimer generated by your IT system will not, of itself, be regarded as binding.

We will process your personal data in accordance with the GDPR.