Press release

What's changing for children on social media from 25 July 2025

New laws come into force, protecting under-18s from harmful online content.

From 25 July, the way children experience the internet will fundamentally change as new laws come into force protecting under-18s from harmful content they should never be seeing. This includes:

  • pornography
  • self-harm
  • suicide
  • hate speech
  • violence

Children will have to prove their age to access the most harmful material on social media and other sites, with platforms having to use secure methods like facial scans, photo ID and credit card checks to verify the age of their users. This means it will be much harder for under-18s to accidentally or intentionally access harmful content.

A thousand platforms have confirmed to Ofcom that they have these checks in place, including the most visited porn site in the UK, PornHub.

It comes as Ofcom figures show that children as young as 8 have accessed pornography online, and 16% of teenagers report having seen material in the last 4 weeks that stigmatises body types or promotes disordered eating.

Children will also see fewer harmful posts and videos in their feeds, with platforms required to make sure their algorithms aren’t feeding children content that promotes harmful behaviours like bullying, hate speech or dangerous online challenges.

And when harmful content does appear, platforms will need to act quickly to remove it. If children are seeing something harmful or inappropriate, it will be easier to find help and report it.

Technology Secretary Peter Kyle said:

Our lives are no longer split between the online and offline worlds – they are one and the same. What happens online is real. It shapes our children’s minds, their sense of self, and their future. And the harm done there can be just as devastating as anything they might face in the physical world.

We’ve drawn a line in the sand. This government has taken one of the boldest steps anywhere in the world to reclaim the digital space for young people – to lay the foundations for a safer, healthier, more humane place online.

We cannot – and will not – allow a generation of children to grow up at the mercy of toxic algorithms, pushed to see harmful content they would never be exposed to offline. This is not the internet we want for our children, nor the future we are willing to accept.

The time for tech platforms to look the other way is over. They must act now to protect our children, follow the law, and play their part in creating a better digital world.

And let me be clear: if they fail to do so, they will be held to account. I will not hesitate to go further and legislate to ensure that no child is left unprotected.

Enforcement action from the regulator

From 25 July these protections will be fully enforceable, and services that don’t comply could face serious enforcement action from Ofcom, including fines of up to 10% of a company’s qualifying global annual revenue or £18 million, whichever is greater.

Action platforms will legally have to take

Block access to harmful content 

Starting from 25 July, platforms that host pornography, or content which encourages self-harm, suicide or eating disorders, will have to put in place robust age checks. This means:

  • using highly effective age assurance, like facial age estimation, photo-ID matching or credit card checks, to verify age more reliably, and
  • stopping children encountering harmful content on the site, either by age restricting parts of the platform or by blocking under-18s from the site altogether.

This will create extra steps when creating a new account or attempting to access content not appropriate for children. In practice, it is like a child not being able to sign up for a credit card or buy alcohol, and it means children will encounter fewer instances of harmful content and have a more age-appropriate experience online.

Provide safer feeds and fewer toxic algorithms 

The codes set out how platforms can reduce toxic algorithms which we know can recommend harmful content to children without them seeking it out. This includes ensuring that algorithms do not operate in a way that harms children, such as by pushing content related to suicide, self-harm, eating disorders, and pornography. That means fewer risky rabbit holes and more control over what children see on their feeds. 

Take faster action on harmful content 

Platforms will need more robust content moderation systems so they can take swift action against content that is harmful to children as soon as they become aware of it. Search engines should filter out the most harmful content for children, for example through a ‘safe search’ setting that cannot be turned off.

User support 

Platforms will also be required to ensure they provide clear and easy-to-find information for children, and the adults who care for them. This will include easy-to-use reporting and complaints processes, as well as tools and support for children to help them stay safe online. 

Types of ‘harmful content’ the codes apply to

Platforms which host pornography or the content most harmful to children, and which are likely to be accessed by children, must implement highly effective age assurance to prevent children from accessing that content.

This content is described as primary priority content and includes: 

  • pornography, and
  • content that encourages, promotes, or provides instructions for:
    • self-harm
    • suicide
    • eating disorders 

Wider harmful content is known as priority content. The codes instruct platforms to protect children from this content by providing age-appropriate experiences. This category of content includes:

  • bullying
  • abusive or hateful content, and
  • content which encourages or depicts:
    • serious violence or injury
    • dangerous stunts and challenges
    • the ingestion, inhalation or exposure to harmful substances

DSIT media enquiries

Email press@dsit.gov.uk

Monday to Friday, 8:30am to 6pm
020 7215 3000

Published 24 July 2025