Online Safety Act: explainer

Published 8 May 2024

What the Online Safety Act does 

The Online Safety Act 2023 (the Act) is a new set of laws that protects children and adults online. It puts a range of new duties on social media companies and search services, making them more responsible for their users’ safety on their platforms.

The Act will give providers new duties to implement systems and processes to reduce the risk that their services are used for illegal activity, and to take down illegal content when it does appear.

The strongest protections in the Act have been designed for children and will make the UK the safest place in the world to be a child online. Platforms will be required to prevent children from accessing harmful and age-inappropriate content and provide parents and children with clear and accessible ways to report problems online when they do arise.

The Act will also protect adult users, ensuring that major platforms will need to be more transparent about which kinds of potentially harmful content they allow, and give people more control over the types of content they want to see.

Ofcom is now the independent regulator of online safety. It will set out steps providers can take to fulfil their safety duties in codes of practice. It will have a broad range of powers to assess and enforce providers’ compliance with the framework.

Providers’ safety duties are proportionate to factors including the risk of harm to individuals, and the size and capacity of each provider. This makes sure that while safety measures will need to be put in place across the board, we aren’t requiring small services with limited functionality to take the same actions as the largest corporations. Ofcom is required to take users’ rights into account when setting out steps to take. And providers have simultaneous duties to pay particular regard to users’ rights when fulfilling their safety duties.

The Act also introduced some new criminal offences – details are set out below.

Who the Act applies to 

The Act’s duties apply to search services and services that allow users to post content online or to interact with each other. This includes a range of websites, apps and other services, including social media services, consumer cloud file storage and sharing sites, video sharing platforms, online forums, dating services, and online instant messaging services.

The Act applies to services even if the companies providing them are based outside the UK, provided they have links to the UK. A service has links to the UK if it has a significant number of UK users, if the UK is a target market, or if it is capable of being accessed by UK users and there is a material risk of significant harm to such users.

How the Online Safety Act is being implemented

The Act passed into law on 26 October 2023. Now work is being carried out to bring its protections into effect. 

Ofcom is leading work to implement the Act’s provisions and is taking a phased approach to bringing duties into effect. Government also needs to make secondary legislation in some areas to enable elements of the framework.

The Act requires Ofcom to develop guidance and codes of practice that will set out how online platforms can meet their duties. Ofcom will carry out public consultations on draft codes of practice before finalising them. Ofcom has already begun this work; the main phases are set out below:

  • Duties about illegal content – Ofcom has published draft codes of practice and guidance for consultation. The consultation closed on 23 February 2024. We expect these duties to come into effect from early 2025.
  • Duties about content harmful to children – Ofcom has published draft guidance for consultation on the use of age assurance to prevent children accessing online pornography. The consultation closed on 5 March 2024. On 8 May 2024, Ofcom also published draft codes of practice and guidance on protecting children from harmful content, such as the promotion of self-harm or suicide.
  • Duties for categorised services – some platforms will have to comply with additional requirements to protect users. The Act created categories of service, and during implementation the thresholds will be defined to determine which services fall into which categories (Category 1, 2A or 2B). Ofcom has published advice to government on how to set the thresholds for these categories and issued a call for evidence on 25 March 2024 (closing 20 May 2024). Once the thresholds have been set in regulations by government, Ofcom will publish a register setting out which services fall into which categories, and will publish further codes of practice and guidance for consultation.

New offences introduced by the Act

The criminal offences introduced by the Act came into effect on 31 January 2024. These offences cover: 

  • encouraging or assisting serious self-harm
  • cyberflashing
  • sending false information intended to cause non-trivial harm
  • threatening communications
  • intimate image abuse
  • epilepsy trolling

These new offences apply directly to the individuals who commit them, and convictions have already been secured under the cyberflashing and threatening communications offences.

Some of these offences will be further bolstered when the Criminal Justice Bill completes its passage through Parliament.

Types of content that the Act tackles

Illegal content

The Act requires all companies to take robust action against illegal content and activity. Platforms will be required to implement measures to reduce the risk that their services are used for illegal activity. They will also need to put in place systems for removing illegal content when it does appear. Search services will also have new duties to take steps to reduce the risk that users encounter illegal content via their services.

The Act sets out a list of priority offences. These reflect the most serious and prevalent illegal content and activity, against which companies must take proactive measures.

Platforms must also remove any other illegal content where there is an individual victim (actual or intended), where it is flagged to them by users, or where they become aware of it through any other means.

The illegal content duties are not just about removing existing illegal content; they are also about stopping it from appearing at all. Platforms need to think about how they design their sites to reduce the likelihood of them being used for criminal activity in the first place.

The kinds of illegal content and activity that platforms need to protect users from are set out in the Act, and this includes content relating to:

  • child sexual abuse  
  • controlling or coercive behaviour  
  • extreme sexual violence 
  • extreme pornography 
  • fraud
  • racially or religiously aggravated public order offences  
  • inciting violence  
  • illegal immigration and people smuggling  
  • promoting or facilitating suicide  
  • intimate image abuse
  • selling illegal drugs or weapons  
  • sexual exploitation  
  • terrorism 

Content that is harmful to children 

Protecting children is at the heart of the Online Safety Act. Although some content is not illegal, it could be harmful or age-inappropriate for children and platforms need to protect children from it. 

Companies with websites that are likely to be accessed by children need to take steps to protect children from harmful content and behaviour.

The categories of harmful content that platforms need to protect children from encountering are set out in the Act. Children must be prevented from accessing Primary Priority Content, and should be given age-appropriate access to Priority Content. The types of content which fall into these categories are set out below.

Primary Priority Content

  • pornography
  • content that encourages, promotes or provides instructions for:
    • self-harm
    • eating disorders
    • suicide

Priority Content

  • bullying
  • abusive or hateful content
  • content which depicts or encourages serious violence or injury
  • content which encourages dangerous stunts and challenges
  • content which encourages the ingestion or inhalation of, or exposure to, harmful substances

Age-appropriate experiences for children online

The Act requires social media companies to enforce their age limits consistently and protect their child users. 

Services must assess any risks to children from using their platforms and set appropriate age restrictions, ensuring that child users have age-appropriate experiences and are shielded from harmful content. Websites with age restrictions need to specify in their terms of service what measures they use to prevent underage access and apply these terms consistently. 

Different technologies can be used to check people’s ages online. These are called age assurance technologies.

The new laws mean social media companies will have to say what technology they are using, if any, and apply these measures consistently. Companies can no longer say their service is for users above a certain age in their terms of service and do nothing to prevent younger children accessing it.

Adults will have more control over the content they see

Major user-to-user online platforms (Category 1) will be required to offer adult users tools to give them greater control over the kinds of content they see and who they engage with online. This includes giving them the option of filtering out unverified users, which will help stop anonymous trolls from contacting them.

Adult users will also be able to verify their identity and access tools which enable them to reduce the likelihood that they see content from non-verified users, and prevent non-verified users from interacting with their content.

Following the publication of guidance by Ofcom, major platforms will need to proactively offer adult users optional tools to help them reduce the likelihood of encountering certain types of content. These categories of content are set out in the Act and include content that does not meet a criminal threshold but encourages, promotes or provides instructions for suicide, self-harm or eating disorders. The tools will also apply to abusive or hateful content, including racist, antisemitic, homophobic or misogynistic content. The tools must be effective and easy to access.

The Act already protects children from seeing this content.

The Act will tackle suicide and self-harm content 

Any site that allows users to share content or interact with each other is in scope of the Online Safety Act. These laws also require sites to rapidly remove illegal suicide and self-harm content and proactively protect users from content that is illegal under the Suicide Act 1961. The Act has also introduced a new criminal offence for encouraging or assisting serious self-harm.

Services that are likely to be accessed by children must prevent children of all ages from encountering legal content that encourages, promotes or provides instruction for suicide and self-harm. 

The Act also requires major services (Category 1 services) to uphold their terms of service where they say they will remove or restrict content or suspend users. If a service says it prohibits certain kinds of suicide or self-harm content, the Act requires it to enforce these terms consistently and transparently. These companies must also have effective reporting and redress mechanisms in place, enabling users to raise concerns if they feel a company is not enforcing its terms of service.

How the Act will be enforced

Ofcom is now the regulator of online safety and must make sure that platforms are protecting their users. Once the new duties are in effect, following Ofcom’s publication of final codes and guidance, platforms will have to show they have processes in place to meet the requirements set out by the Act. Ofcom will monitor how effective those processes are at protecting internet users from harm. Ofcom will have powers to take action against companies which do not follow their new duties.

Companies can be fined up to £18 million or 10 percent of their qualifying worldwide revenue, whichever is greater. Criminal action can be taken against senior managers who fail to ensure companies follow information requests from Ofcom. Ofcom will also be able to hold companies and senior managers (where they are at fault) criminally liable if the provider fails to comply with Ofcom’s enforcement notices in relation to specific child safety duties or to child sexual abuse and exploitation on their service.

In the most extreme cases, with the agreement of the courts, Ofcom will be able to require payment providers, advertisers and internet service providers to stop working with a site, preventing it from generating money or being accessed from the UK.

How the Act affects companies that are not based in the UK 

The Act gives Ofcom the powers it needs to take appropriate action against all companies in scope, no matter where they are based, where services have relevant links with the UK. This means services with a significant number of UK users or where UK users are a target market, as well as other services which have in-scope content that presents a risk of significant harm to people in the UK.

How the Act tackles harmful algorithms

The Act requires providers to specifically consider how algorithms could impact users’ exposure to illegal content – and children’s exposure to content that is harmful to them – as part of their risk assessments.

Providers will then need to take steps to mitigate and effectively manage any identified risks. This includes considering how their platform’s design, functionalities, algorithms and other features affect their ability to meet the illegal content and child safety duties.

The law also makes it clear that harm can arise from the way content is disseminated, such as when an algorithm repeatedly pushes content to a child in large volumes over a short space of time.

Some platforms will be required to publish annual transparency reports containing online safety related information, such as information about the algorithms they use and their effect on users’ experience, including children.

How the Act protects women and girls

The most harmful illegal online content disproportionately affects women and girls, and the Act requires platforms to proactively tackle this. Illegal content includes harassment, stalking, controlling or coercive behaviour, extreme pornography, and revenge pornography.

All user-to-user and search services have duties to put in place systems and processes to remove this content when it is flagged to them. The measures companies must take to remove illegal content will be set out in Ofcom’s codes of practice.

When developing these codes, Ofcom is required to consult the Victims’ Commissioner and the Domestic Abuse Commissioner to ensure that the voices and views of women, girls and victims are reflected.

The Act also requires Ofcom to produce guidance that summarises in one clear place the measures that can be taken to tackle the abuse that women and girls disproportionately face online. This guidance will ensure it is easy for platforms to implement holistic and effective protections for women and girls across their various duties.

Government is continuing to strengthen the protections for women and girls online beyond the Online Safety Act, and recently announced that it would amend the Criminal Justice Bill to make maliciously creating a sexually explicit deepfake of an adult a specific criminal offence.

Independent Review of Pornography Regulation, Legislation and Enforcement

The past two decades have seen a dramatic change in the way we consume media and interact with content online. We need to ensure pornography regulation and legislation reflect this change.

Separate to the Online Safety Act, the government announced the Independent Pornography Review to assess the regulation, legislation and enforcement of online and offline pornographic content. It investigates how exploitation and abuse are tackled in the industry and examines the potentially harmful impact of pornography. This review will help ensure the laws and regulations governing a dramatically changed pornography industry are once again fit for purpose.

In line with the Terms of Reference, the Review is looking to publish a report with recommendations for government by the end of Summer 2024.
