Policy paper

Summary factsheet: Understanding how platforms with video-sharing capabilities protect users from harmful content online

Published 3 August 2021

DCMS has commissioned consultants EY to research the measures that platforms with video-sharing capabilities take to protect users online. This review has been undertaken in the context of new regulation requiring platforms to take appropriate measures to protect their users from certain types of harmful content online.

To form their view, EY undertook two key areas of market research. Firstly, they designed and administered three separate consumer surveys, focused on children, teenagers and adults, to understand how they use platforms with video-sharing capabilities and their awareness of online harms. Secondly, they interviewed seven online platforms with video-sharing capabilities and surveyed a further 12 to better understand how sophisticated their protective measures are and the costs they incur to enforce those measures.

For the purposes of explaining the findings, ‘small’ platforms are those with fewer than 100k unique global users, ‘medium’ platforms those with between 100k and 10m unique global users, and ‘large’ platforms those with more than 10m unique global users. A unique global user is defined as a person who actively engaged with the platform during the year, whether to watch a video or to undertake some other unrelated activity.

The Audiovisual Media Services Directive and how it applies to platforms with video-sharing capabilities

The EU’s Audiovisual Media Services Directive (AVMSD) is the regulatory framework that governs Member States’ national legislation on audiovisual media. Recent updates to the AVMSD have extended the scope of regulation to video-sharing platforms (VSPs) for the first time.

Broadly, VSPs are online services that allow users to upload and share videos. The AVMSD, as transposed into UK domestic law, now requires VSP providers to take appropriate measures to protect under-18s from potentially harmful material, and to protect the general public from incitement to hatred or violence and from other specific material whose inclusion would be a criminal offence. Services also need to meet certain standards around advertising. Ofcom is the UK’s national regulatory authority for the VSP regime.

The government intends for requirements on UK-established VSPs to be superseded by the Online Safety regime, once the latter comes into force. Both regimes share broadly the same objectives and align well with UK policy goals. The draft Online Safety Bill, which sets out the Online Safety framework, was published on 12 May 2021. While other countries have introduced legislation to address specific types of harm, the UK’s Online Safety regime is the first attempt globally to address a spectrum of online harms through a single regulatory approach.

Ultimately, the VSP regime will provide protections in the nearer term and provides a solid foundation to inform and develop the future Online Safety framework. Ofcom will ensure there is support in place for UK-established VSPs when transitioning from one regime to the next.

How the sector for platforms with video-sharing capabilities has changed in the last few years

EY’s research found that UK consumers watched 213 billion videos in 2020. The average time adult users spend watching videos across all platforms has grown to 32 minutes per week, an increase of 8% per year or 8 minutes in total since 2017.

The analysis shows that, despite the entrance of some new platforms, the sector remains highly concentrated, with users favouring a small number of platforms to view videos online.

Only a small number of platforms have entered the sector and achieved scale in recent years. In general, new entrants have only achieved scale where they have been able to offer users highly differentiated experiences and features, such as live streaming and live chat.

Measures platforms currently employ to protect their users from harmful content online

The measures employed by different platforms to protect their users from harmful content vary, but can include acceptable use policies, community guidelines, age assurance, age verification, parental controls, user prompts, content moderation, mechanisms for users to flag violative content, and transparency reports that disclose a range of information relating to content reported to a platform’s moderators.

EY’s research suggests that, rather than focusing on individual measures, risk assessments need to be carried out to ensure the suite of measures in place is in line with the specific risks on the platform.

Platforms that consider themselves likely to be accessed by children tend to report having more effective measures in place to protect their users from harmful content online.

All platforms EY spoke with explicitly stated that illegal content on their platforms is banned. Most platforms use industry-wide resources, such as content and image scanning software, to prevent the spread of child sexual abuse material.

How much platforms spend on measures to protect their users from harmful content online

Most platforms included in EY’s research suggested that the AVMSD would not lead to any incremental investment requirements. Only one platform suggested it may invest in age verification and age assurance technologies as a result of the AVMSD, which it would have been unlikely to do otherwise.

Annual spend on measures varies by platform size, with small platforms spending up to £2.5m, medium platforms between £0.3m and £15m, and large platforms between £7m and £1.5bn. Some very large platforms may spend significantly more than this.

Transparency reporting can be particularly challenging for some platforms, which may not routinely collect this type of data. If all platforms are required to produce transparency or compliance reports, some believe they may need to redirect investment from operations to compliance reporting.

Barriers platforms face when implementing measures to protect their users from harmful content online

Overall, the platforms included in EY’s research identified the complexity of regulatory requirements, and the difficulty of understanding them, as the most significant barrier to effectively protecting users online.

Platforms told EY that changing user expectations are a critical factor influencing the steps platforms take to protect their users online. As users’ media literacy improves, their expectations of the measures platforms should have in place are likely to increase, incentivising platforms to invest further in online safety.

User experiences of harmful content online

45% of adults and 35% of 13–17 year olds in EY’s research stated that they judge the appropriateness of a video only after they have started watching it. This approach could risk users being accidentally exposed to harmful content. Additionally, most young children stated that they watch videos without supervision and are attracted to sites designed for older children. This highlights the need for platforms to consider the potential for their content to be accessed by children.

EY found that exposure to harmful content on platforms with video-sharing capabilities is widespread across all age groups but particularly pronounced among under-18s. Some types of content could be more immediately harmful than others. For example, some 6–12 year olds have been exposed to nudity, videos of people hurting themselves or others, and people encouraging them to undertake a dangerous challenge, while half of 13–17 year olds have unexpectedly seen unsuitable or graphic sexual imagery in the past three months.

How people respond to measures taken by platforms to keep them safe online

59% of UK adults agreed with the statement: “Video-sharing platforms are not doing enough to keep illegal content off their platforms”.

People have differing attitudes to harmful content online and the appropriate response to such content. The measures employed by platforms need to balance the expectations and priorities of different users.