Growing up in the online world: a national conversation
Published 2 March 2026
Presented to Parliament by the Secretary of State for Science, Innovation and Technology by Command of His Majesty
March 2026
CP 1528
© Crown copyright 2026
ISBN 978-1-5286-6278-9
E03557625 03/26

Secretary of State Foreword
Every child deserves the strongest possible start in life. No matter where you come from, what your parents do, or any other accident of birth – the surest path to a good life is a great childhood. One full of friends, family, love, learning, play and adventure.
That applies just as much to the online world as it does to the real one. We know technology can open up huge opportunities for young people. I want to see children live happy, healthy lives, enriched by their digital experiences – using technology to learn, to create, and to connect with those who love the same things as them, and love in the same way. Online communities are often the first place where children start to feel connected to something bigger than themselves.
Yet in the age of smartphones, children’s lives are changing fast – sometimes quicker than families, schools or support services can respond. We know parents everywhere are grappling with how much screentime their children should have, when they should give them a phone, what they are seeing online, and the impact all of this is having. They worry about AI, and about their kids talking to chatbots as if they’re real people. Yet almost every child growing up today will need digital skills in the future, whether for their social life, for school, or for a job that hasn’t yet been invented. We are determined to help families deal with these issues – so we can keep children safe, give them the childhood they deserve, and prepare them for the future.
That is why we are launching this consultation. Your responses will inform the next steps we will take as a government. We know there are differences of opinion about a possible social media ban, overnight curfews, and other measures to protect children’s wellbeing. That is why we want to gather all the evidence – from parents, carers, experts, non-experts, tech companies, and from children and young people themselves. So please do tell us what you think.
Keeping children safe online is a responsibility we all share. But it is also a personal priority for me. My very first act as Secretary of State was to strengthen the UK’s Online Safety Act, and I have pushed to keep improving it ever since. We are determined to follow all the evidence. And last month the Prime Minister announced that the government is taking new legal powers so that we can act immediately to implement the findings of this consultation – within months, not years.
This is not the end of the road. This consultation is the next step, but we will continue to keep the rules for tech companies under review and will not hesitate to make whatever changes are needed to keep our children safe. Together, we are shaping a digital world that reflects our values, protects our children, and helps them to grow, learn, find joy, and participate fully in modern life.
The Rt Hon Liz Kendall MP
Secretary of State for Science, Innovation and Technology
Introduction
Our ambition is to build a country in which every child can start life with the opportunities they deserve. As a society there is a broad consensus around the fundamentals of what makes a good childhood: love, play, friendship, curiosity, support from parents and loved ones, education, and the chance to thrive. Last year’s early years strategy, Giving Every Child the Best Start in Life, sets out how we will build a stronger, fairer society by focusing on our children’s earliest years, when the foundations of wellbeing and opportunity are first laid. As outlined in the National Youth Strategy, we will continue to support young people as they grow. No matter their background, all our children should have the skills, opportunities and connections to thrive and be empowered to shape their own lives and the community around them.
However, as a country we have not yet fully grappled with how we give children and young people the childhood they deserve and prepare them for the future in an age of rapid technological change. That is the conversation that this government is launching, both through this consultation and through a series of structured engagements with parents, children and experts over the course of the next 3 months. Our aim is to enrich children’s lives, both by encouraging time spent away from screens in healthy, real-world environments, and by ensuring that the time they spend online is developing their creativity, skills and social engagement.
In a world where technology now touches many families’ lives from the beginning, we must ensure the safest possible start online. This consultation is the next step in delivering on this promise and the government is committed to taking decisive, swift action, based on its findings.
It is no longer accurate or meaningful to present children’s lives as divided into online and offline spheres. So much of our life is lived online and we need to make sure it is safe and enriching for our children, just as we are doing in the real world. Digital technologies can enrich and expand children’s opportunities to learn, communicate and create, but also bring risks and challenges. We also need to prepare and equip our children to navigate a future shaped by fast-moving technological innovation, ready for workplaces and societies that will be digital by default.
Through the Online Safety Act 2023, Parliament delivered one of the most robust systems globally to protect children from illegal and harmful content and activity. Since this regime has been in force, Ofcom – the independent regulator of the Act – has already opened investigations into companies responsible for over 90 services, including into a major social media platform, to determine whether it has failed to comply with the illegal content duties; file-sharing services for measures to prevent the sharing of child sexual abuse material (CSAM); and a notorious pro-suicide forum.
This legislation will remain the foundation of our work on online safety; it forms a strong baseline from which this government can build and any new measures will align with this existing framework. At the same time, it can and must continue to evolve. All regulatory regimes need to remain agile and it is even more critical that we are able to act swiftly in the fast-moving world of technology. The government is continuing, alongside this consultation, to take further steps to protect users online. We have announced that we will legislate to ensure that platforms remove intimate images shared without consent within a maximum of 48 hours, providing further protections for women and girls online. This is in addition to changes already announced, including making online content that promotes self-harm and suicide a ‘priority offence’ under the Online Safety Act so platforms must take proactive steps to stop users seeing this material in the first place, and swiftly take it down if it appears. We have also made intimate image abuse and cyberflashing priority offences and introduced an offence in the Crime and Policing Bill to criminalise AI models which have been optimised to create child sexual abuse material. Finally, we will criminalise nudification apps so that it is illegal for companies to supply tools designed to create non-consensual intimate images, targeting the problem at its source.
While it is vital that Ofcom acts swiftly to implement any changes to the regulatory regime, ultimately, responsibility must lie with online platforms to develop safe online environments for UK children that reflect UK values and comply with UK law.
Alongside the Act, this consultation represents the next stage of online regulation: going beyond illegal and harmful content to consider the impact of technologies on the day-to-day lives of children. We want to understand how the use of technology both supports and hinders the building of strong foundations for our children, and how the environments children find themselves in online allow them to safely connect with others, spending time with their friends and with peers with similar interests or in similar situations. We want children to be able to express themselves freely amongst those friends without being put at risk of harm.
We have heard clearly the concerns from parents, teachers, those that work with children and young people, and – crucially – children themselves, that the status quo is not delivering on this. These worries go beyond the content that children see and include concerns about the length of time spent on devices, including what sort of activities this time displaces. Children deserve online spaces that strengthen their wellbeing and support their growth.
We know that parents, carers and families are doing their best to guide children through an online world that feels increasingly difficult to navigate. They deserve clearer protections, better information, and platforms that work with them – not against them – in keeping children safe. This is why we will be considering options to address concerns that the business models of certain apps, services and platforms are designed to keep users, including children, online for longer. We want to give parents, carers and families the assurance that children can enjoy the best of what the digital world offers: spaces that spark imagination, build skills and help them flourish. Safe, joyful online experiences which help children build their friendships in communities of like-minded peers should be part of every childhood.
Much of the conversation in recent months has focused on a ban on social media for under 16s. This is a key focus of this consultation, alongside other options which have been proposed to improve children’s safety and to strengthen the legislation in areas where existing powers have limitations. The aim of this consultation is to understand the relative merits of these different options, including the age any restrictions should apply to. We also want to understand more about the unintended consequences of restrictions; it is vital that any new approach reduces overall harm to children, rather than encouraging children to visit less well-regulated or less-visible sites. Central to this will be determining which services or functionalities any restrictions should apply to.
Box 0. Children tell us that they derive significant benefits from being online. Ofcom’s 2025 Parents and Children: Media Use and Attitudes Report[footnote 1] found that almost all (99%) 13–17-year-olds reported benefits from being online, including:
- helping with schoolwork or homework (78%)
- helping to build or maintain friendships (65%)
- finding useful information about any problems they may have (59%)
- learning a new skill (55%)
We owe it to children and families to find solutions that keep pace with technology, are properly enforceable, whose consequences and implications are properly thought through, and which we are confident will deliver for all children, including the most vulnerable in our society. We need solutions that add to, rather than take away from, their childhoods.
This consultation will look at 3 main areas. First, it will examine whether new measures, including age-based restrictions, could help keep children safer online. Second, it will consider the role of enforcement, both in applying any new rules and in strengthening the ones that already exist. And third, it will look at the wider ecosystem: how children and parents can best be supported to get information about staying safe online and develop skills that set them up for the future.
In addition, alongside this consultation, we are running short, sharp pilots to test a range of different interventions aimed at 13–15-year-olds, including a social media ban, curfews and defined daily limits on social media. These pilots will inform our decisions by showing us the implementation challenges of these measures and telling us more about the impact they could have. This will help to ensure we deliver restrictions that are workable for children and their families.
At the same time, whatever the solution we land on, we need to ensure that regulation is accompanied by both effective education and proper support and guidance. That is why the Department for Education will publish a separate call for evidence on screentime guidance for 5–16-year-olds, to supplement the guidance already in development for the under 5s. It is also why we are consulting on ways to make it easier to parent in a digital age and on the steps we can take to equip children with the skills and knowledge they will need for their future. This forms part of our wider efforts to ensure that distraction, including ‘phone noise’, does not negatively impact on children’s learning and concentration.
This consultation will run for 3 months, following which the government has committed to taking swift action to deliver on its conclusions. This is why on 16 February 2026, the Prime Minister announced the government would be taking new legal powers to lay the groundwork for immediate action following this consultation. We are committed to following the evidence, and these powers will mean we can act fast on its findings rather than waiting years for new primary legislation.
We welcome contributions from everyone, including from children and young people, whose voices must be heard in this debate, through the dedicated children’s version of this consultation and our direct conversation and engagement with children. A similar survey will also be published for parents. In parallel we will continue to assess the developing evidence base, drawing on the expertise of an academic panel as well as international experiences.
At the heart of our work on online safety is a simple ambition: to make sure that our children’s lives online are as safe and healthy as they are offline, while equipping them with the confidence, skills and support to thrive in a world transformed by rapid technological change.
Chapter 1: Understanding how children use technology
Social media has grown rapidly over the last decade, and its use is now an intrinsic part of people’s lives, including children’s. Understanding how and when children use digital technology is increasingly important.
In this consultation, we will ask what parents, carers, teachers and those that work with children and young people see as the key benefits and concerns as children spend more time in digital spaces. We want to deepen our understanding of how children engage with digital technologies in their daily lives, to inform future government work in this space and so parents and carers are better informed and supported with insights that reflect real patterns of use.
Online activities and communities can enrich children’s learning. For many children, programmes like Numberblocks, a high-quality TV and games series used both online and in classrooms, provide their first introduction to early maths, engaging their curiosity and laying essential foundational skills. Many young people will continue to build these skills through online play during their childhoods. For example, studies have shown measurable links between structured use of Minecraft and aspects of STEM learning, findings that apply more broadly to games like Rapid Router, a UK platform that introduces children to programming concepts.[footnote 2]
Key stakeholders involved in online safety recognise the importance of technology and gaming for young people. For example, the Breck Foundation, which works to ensure that young people can enjoy digital spaces safely while benefiting from the positive potential of tech, notes that ‘technology and gaming can be powerful forces for good. They support education, creativity, social connection, and skills development’.[footnote 3]
Children report using online platforms to express themselves creatively, sharing photos, art, writing and videos, to explore new interests or to join new communities that support their hobbies and skills. For example, some children design games or interactive projects using tools such as Scratch or make and design comics on Pixton. Others share photography, music performances, book reviews, or educational videos to help their peers learn or to entertain them. Through these activities, children develop creativity, confidence, and valuable digital skills while connecting with others who share their interests.
Box 1.1. Example: LEGO® Play is a free, digital platform designed for children to create, explore, and play. The app includes features like a 3D brick builder, a creative canvas, personalised Minifigure avatars, stop-motion video creation, single-player games, and videos produced by the LEGO Group, all designed to inspire kids as they play.
The LEGO Group is committed to designing digital experiences that allow children to play and connect in a safe environment, with deliberate design choices that remove harms common to social media experiences. LEGO® Play lets children create and share their own content, which is reviewed by its moderation service before it goes live. Using Verified Parental Consent, parents set permissions for what their child can do in the app, from sharing creations to writing comments. LEGO® Play has no direct messaging, no commercial or third-party advertising and no in-app purchases; personal information is only gathered at sign-up for account management, providing a safe social experience for all.
Alongside these fundamental safety and security features, LEGO® Play uses intentional and age-appropriate design practices to constantly improve the experience in ways that actively promote learning, creativity and wellbeing outcomes.
Social media is a significant element of where children spend their time online and many children believe that social media adds real value to their lives. The evidence base is still evolving, but we know that by the time they reach secondary school, children’s use of social media is almost universal. Despite most platforms setting a minimum age of 13, 81% of 10–12-year-olds say they use at least one social media app or site.[footnote 4] As children grow older, their use increases, and by mid-teens social media use is firmly embedded in their daily lives. It can help children stay connected with friends and family, maintain relationships when they can’t meet in person and find spaces where they feel understood.
Examples of children using online services to play, be creative and connect with others include:
- Posting artwork or digital designs on Instagram
- Sharing short dance or music clips on TikTok
- Uploading gaming content or DIY crafts on YouTube
- Video chatting with grandparents using WhatsApp
- Recording original songs and sharing them on SoundCloud
- Designing custom worlds in Minecraft and posting walkthroughs
- Learning languages through children’s versions of apps like Duolingo ABC
The government wants to support children to use digital spaces positively, through building their confidence, learning, and feeling part of something bigger.
But we also know that some patterns of use can have consequences and impacts on children. Research shows over 60% of 8–14-year-olds used their smartphone, tablet and/or computer between 11pm and 5am at least once over a 4-week period.[footnote 5] These habits are associated with significant sleep loss over time, which is explored in more detail in Chapter 2.[footnote 6] There is also a question about what time spent online might be displacing for children – be that reading, physical activity or socialising in person – and how this might impact on young people’s growth and development. These are critical parts of healthy, happy childhoods where every child achieves and thrives.
We’ve also heard concerns that certain features and functionalities within platforms can exacerbate harm for children. The Online Safety Act already protects children from illegal and harmful content, and calls for services to take a safety-by-design approach to protect children from this content. However, stakeholders have told us there are still areas where the design of services – their features and functionalities – can amplify risk, and have therefore called for restrictions. Some features make it harder for platforms to limit children’s exposure to harmful material, for example real-time content that cannot be easily filtered, or recommendation systems that surface misleading or divisive content. In addition, we have heard concerns about business models that are built around driving engagement, which can encourage excessive use – something that is particularly worrying for developing brains, affecting sleep and focus and displacing other important activities. This is further discussed in Chapter 2.
Our aim, through this consultation – including the children’s and parents’ versions – and our national conversation, is to seek views on how to strike the balance to ensure children have the fulfilling and enriching digital lives that they deserve.
We will ask parents and children about how children use technology, online services and social media. We will also ask about the interventions they would find most helpful and effective to better protect children online, as well as what further measures could help parents manage screentime and boost media literacy.
Consultation questions (1 to 3)
1. What are the benefits of social media use, and being online, for children?
2. What are the harms or risks of social media use, and being online, for children?
3. Do you think the benefits of children using social media, and being online, outweigh the risks, or the other way around?
a. Benefits strongly outweigh the risks
b. Benefits somewhat outweigh the risks
c. Benefits and risks are roughly equal
d. Risks somewhat outweigh the benefits
e. Risks strongly outweigh the benefits
f. Don’t know / Prefer not to answer
Chapter 2: Interventions for safer, more positive experiences
Supporting children’s wellbeing online is not a one-size-fits-all endeavour. The features, content and safeguards appropriate for younger children will differ from those that teenagers need. As children grow up, the balance between the benefits and risks of time spent online becomes more nuanced. As they grow older, teenagers gain more independence in all aspects of their lives, and their engagement with technology should reflect that. This chapter considers a range of interventions aimed at supporting children to have safe and enriching online experiences. Some of these are mutually exclusive; others could work alongside each other. Across all of them we are seeking views on the appropriate age, were they to be applied.
Across the interventions discussed in this chapter there is also the question of who they should apply to. There is no straightforward definition of ‘social media’ and many of the things children, parents and carers are concerned about also appear on other services including, but not limited to, gaming, chatbots and messaging services. At the same time, we want regulation to be proportionate and to be effectively targeted to achieve the right balance.
We will need to decide which providers are in scope of any new rules, and how to keep this list clear and up to date. We are seeking views on how broadly these interventions might apply, criteria and definitions that should be used if any of these options are taken forward and any exemptions that might be appropriate. The evidence we receive will inform how online services, including but not necessarily limited to social media, might be restricted to build our children’s wellbeing.
We will consider the possible impacts of any potential restrictions on vulnerable children, including those with special educational needs and disabilities (SEND). We will also consider how any transitional arrangements might work in relation to new restrictions, for example for existing accounts.
Restricting social media services by age
There is currently no minimum age for accessing social media set in law. While the protections of the Online Safety Act apply to all children on user-to-user and search services, the Act only requires ‘consistent’ enforcement of services’ own minimum ages; it does not mandate highly effective age assurance.
However, the Online Safety Act already requires platforms that allow pornography, self-harm, suicide or eating disorder content to use highly effective age assurance to prevent any child under 18 from accessing these types of content.
Most social media services that set a minimum age do so at 13. This aligns with the age of digital consent – the age at which a child can consent to the processing of their data. The digital age of consent and its role in supporting protections for children online is discussed later in this chapter.
Even though many of the most popular social media services set a minimum age of 13, there is extensive evidence that much younger children regularly use them. Ofcom’s 2025 Parents and Children: Media Use and Attitudes Report found that 81% of 10–12-year-olds use at least one social media app or site and 86% have their own account on a social media, messaging or video sharing platform.[footnote 7] We consider how enforcement mechanisms could be improved in Chapter 3, but legislation mandating a minimum age of at least 13 to use social media, alongside a requirement for this to be effectively enforced, would be one way to address this. This would equate to a ban for anyone younger than the minimum age. It is important to note that this may mean adults being required to verify their age to access these services, as a means of ensuring effective enforcement of minimum age limits (more on this in Chapter 3), which could create trade-offs with user privacy and add friction to adult users’ online experiences.
However, depending on how it was defined, introducing a minimum age limit for social media carries a risk of unintended consequences – even if it were set at 13. There are some online services which resemble social media but which are specifically designed for younger children, with features and user-to-user interactions subject to careful moderation. This approach can, when used as part of a balanced set of activities, provide benefits to children. For example, Internet Matters has found that, for children aged 6 to 10, use of digital technology can boost language skills, social development and creativity.[footnote 8]
Box 2.1. Example: Children’s use of digital technology can significantly boost their language skills in creative and engaging ways. By watching educational videos on platforms like YouTube Kids, children are exposed to new vocabulary, correct pronunciation, and varied sentence structures. Interactive tools such as Scratch encourage them to write dialogue and instructions, strengthening their reading and writing skills.
There are growing concerns among parents, carers and civil society organisations about older children too. Over 180,000 families have signed the Smartphone Free Childhood Parent Pact to delay getting their child a smartphone until at least 14, and social media until 16.[footnote 9] In only 2 years this grassroots organisation has evolved into a national movement, underscoring how deeply parents across the UK are worried about technology’s addictive nature and the need to safeguard children. Some teenagers themselves have said that they worry about the impact of spending too much time online and feeling like they aren’t able to control their own screentime. Additionally, 2025 research from Ofcom shows that a third (33%) of 8–17-year-olds say they have seen something online that they found ‘worrying or nasty’ in the past 12 months. This is unchanged since 2023.[footnote 10]
The momentum behind a ban is understandable: it provides a simple, easily communicated approach for parents, children, teachers, those that work with children and young people, and industry. Parents in Australia have been reported to be widely supportive of a ban, agreeing that it would help protect the mental health and wellbeing of children.[footnote 11] Teachers have also expressed strong support, with the teachers’ union NASUWT calling on government to tighten legislation so technology firms would be fined for allowing children under 16 to access their platforms.
We understand that parents want action to support their children’s wellbeing now and a ban appears to be a simple way to deliver it. That said, civil society organisations, including the NSPCC, the Molly Rose Foundation and the Internet Watch Foundation have raised concerns that this might not be the right approach and risks doing more harm than good. Their concerns include creating a ‘cliff-edge’ for older teens when they do come online and driving children to use riskier platforms. Additionally, there is concern that introducing a ban for under 16s could make children less likely to seek the help of adults if they do see something worrying online, particularly if they have taken measures to circumvent a minimum age limit.
These organisations also highlight the important role that social media can play in providing sources of support and guidance for children, especially for vulnerable children or those who may have more limited access to trusted and engaged adults. It is right that we ask questions about a minimum age, for example of 13 or 16, allowing policy decisions to flow from the evidence we gather. This evidence-based approach to the evaluation of any potential ban is supported by key stakeholders, including the Breck Foundation.[footnote 12]
These responses will be considered together with the answers to questions about which sorts of services restrictions should apply to, which is covered in more detail later in this chapter.
The debate about introducing a minimum age for social media has raised questions about the potential implications for parents and carers if children circumvent new rules. The government is clear: no parent, carer or child will be fined or punished if these safeguards are circumvented by children.
Consultation questions (4 to 7)
4. Would you support a legal requirement for social media services to have a minimum age of access?
a. Yes
b. No
c. Don’t know/Prefer not to answer
5. To what extent do you agree or disagree with the following statement: “Social media services should have a minimum age of access of at least 16 and should not be accessible to any children under that age”
a. Strongly agree
b. Somewhat agree
c. Neither agree nor disagree
d. Somewhat disagree
e. Strongly disagree
f. Don’t know/ Prefer not to answer
6. Would you support a legal requirement for social media services to have a minimum age of access lower than 16? If so, at what age would you set it?
a. Yes – 13
b. Yes – 14
c. Yes – 15
d. No – not lower than 16
e. Other (please specify)
f. Don’t know/ Prefer not to answer
Please explain the reasoning behind your answers about minimum age requirements.
7. What do you think the impacts would be of having a minimum age requirement higher than 13 for social media services? For example, impacts on the safety and wellbeing of children, or the impact for parents and carers, as well as other users. You could also comment on the impact on all users’ privacy and data or on business costs, revenue, and innovation.
Age of digital consent
Different online platforms may have different reasons for collecting personal data. For example, online retailers may need to collect personal data to remember what items have been placed in an online shopping basket or collect shipping details and credit card numbers to deliver items that have been purchased. Search engines and social media companies may collect behavioural data from their users to personalise and shape their online experiences, including to make recommendations or deliver targeted advertising. The UK Information Commissioner’s Office’s (ICO) Children’s Data Lives research shows that children frequently share data through apps and devices, entering personal information, posting updates, and interacting with algorithms.[footnote 13]
Under the UK’s General Data Protection Regulation (UK GDPR), whenever an online platform is using personal data, it must identify a legal ground for doing so. Depending on the purposes and the nature of the data collection, different legal grounds may be available. For example, some online services may rely on grounds of ‘legitimate interests’ or contractual necessity to deliver a core service that a customer has requested. However, for non-core services or optional extras, such as profiling a web user’s activity for the purposes of targeted advertising, the most appropriate legal ground is consent. ICO guidance on this issue recommends that online services give the child (and the parent where appropriate) as much choice over these elements of the service as possible.[footnote 14]
Under the UK GDPR, children can only give their own consent for their data to be processed by an ‘Information Society Service’[footnote 15] – such as social media, online gaming or online retail platforms – once they are 13.
For children younger than the digital age of consent, organisations must instead obtain consent from someone with parental responsibility, and are required to make reasonable efforts to verify that consent. Examples might include authentication through a registered credit card, or a referral to a parent’s email address asking them to authorise consent.
Changing the age of consent would not affect online services that do not rely on consent as their legal basis for processing. Although such services would still need to process children’s data in compliance with the general principles of data protection law, they would not automatically be required to seek parental authority to do so in respect of children under the age of consent. However, for services that currently rely on consent as their basis of processing, it could provide another intervention point that gives parents and carers more control over how platforms are using their personal data.
There is an option to increase this age from 13. Research shows that children’s understanding of how their personal data is collected, tracked, profiled and monetised develops gradually across adolescence, and that there is no clear developmental milestone at age 13 that would justify the current threshold.[footnote 16] Adding an extra parental check for children between the ages of 13 and 15 could reduce the risk of personal data being used in ways children in that age group might not understand.
Across Europe, several countries have already set higher thresholds: 14 in Austria, Italy and Spain; 15 in France; and 16 in Germany and Ireland. In those countries, social media platforms and other online services that rely on consent to process personal data require authorisation from someone with parental responsibility before they can process the data of children below the specified age.
However, once consent is granted either by a child who is at the specified age or by a parent or carer in respect of an underage child, the online platform is free to process the child’s data. The platform would still have obligations to process the personal data fairly, lawfully and transparently in accordance with the data protection legislation, but in practice it may not change the way some features and functionalities work, and some services would still seek to monetise the child’s data through personalised algorithms. The options to make changes to these sorts of online environments are set out in the section below on features and functionalities.
Additionally, research shows that raising the age does not necessarily stop teenagers under the age of consent from using social networks.[footnote 17] Currently, many services default to self-certification mechanisms such as tick boxes, which do not reliably check age or involve parents or carers.[footnote 18] Changing the age of digital consent may only be effective if organisations ensure their verification processes are as robust and meaningful as possible, both when a child grants their own consent and when parental authority is sought in respect of underage children. This relates to the issues set out in Chapter 3 on age assurance.
Parents and carers may have different attitudes about how much personal data of their children online services should be permitted to collect. They may also vary in their levels of digital literacy and ability to engage with parental verification requests. This means that children of parents or carers with less confidence or capacity to engage with digital processes may have reduced access in practice, potentially widening existing digital inequalities. This could lead to disparities in the online experiences of different groups of children.
If the UK raised the digital age of consent, it could also limit access to useful services and tools – such as educational technology – that rely on consent to process children’s data, unless some services were exempted. Any change would need to balance stronger protection for children with ensuring they can still use beneficial online resources.
This consultation aims to understand the level of demand for raising the digital age of consent, the potential impacts of changing this level of consent and how we should ensure any changes are effective and workable.
Consultation questions (8 to 11)
8. At what age do you think the age of digital consent in the UK should be set for information society services?
a. 13
b. 14
c. 15
d. 16
e. Other (please specify)
f. Don’t know / Prefer not to answer
9. What risks or burdens may be associated with raising the minimum age of digital consent? For example, ensuring parental consent, costs to industry, access to services, volume of requests, etc.
10. What should be considered to make raising the digital age of consent effective and workable? For example, suitable approaches to verify users’ ages (including where parental consent is required) or suitable approaches to verify a parent or carer’s identity, age and relationship to the child.
11. To what extent do you agree or disagree with the following statement: “There is a case for changing the digital age of consent for some online services but not others”
a. Strongly agree
b. Somewhat agree
c. Neither agree nor disagree
d. Somewhat disagree
e. Strongly disagree
f. Don’t know/ Prefer not to answer
Please explain the reasoning behind your answer.
Restricting access to services based on features and functionalities
Some functionalities and features increase the risk of children being exposed to harmful or inappropriate content. This might be because they are designed for content to be seen in real time or on a temporary basis, making it much harder for services to provide protections for children. Examples include livestreaming and disappearing messages. Another way of restricting social media for certain ages would be to identify features and functionalities which are inappropriate, or which present such risks to children that they outweigh any benefits of use.
Data protection law requires organisations that use personal information, including that of children, to do so according to a number of principles – including to use information only in fair, lawful and transparent ways. The ICO’s Age Appropriate Design Code sets out how online services should provide those services and use children’s data in a way which is age appropriate, including ensuring privacy settings for children are set to high privacy by default.
The Online Safety Act already requires online services to prevent children from seeing illegal content, and the most harmful types of content – this includes pornography and content that promotes, encourages or provides instructions for eating disorders, self-harm and suicide. Services are also required to provide children with an age-appropriate experience – this includes ensuring they put in age-appropriate protections for content including bullying, violence and dangerous challenges.
As part of this, services must look at whether certain features on their platforms make it more likely that children could come across illegal or harmful content and mitigate the risk if so. For example, companies are now legally required to address algorithms that push harmful content, such as suicide or self-harm content, to children and to take steps to prevent unknown adults from contacting children. Further restrictions on features and functionalities are being proposed by Ofcom, for example the regulator has consulted on protections for children while live streaming.
It should of course be noted that some of these features and functionalities offer benefits for children. For example, hosting a live art session, music performance or storytelling via a live stream can encourage children to express themselves creatively. Additionally, disappearing messages could help children feel more relaxed about sharing everyday updates or having light conversations, much as they would when speaking face-to-face. This can promote natural, informal communication.
However, the risks may outweigh these benefits. Government is seeking views on whether any features or functionalities are so high-risk that, even with mitigations, they should be off-limits to children, either at all or below a certain age.
This could result, for example, in a set of services with functionalities suitable for children under 16 and a set of services with functionalities which are only accessible to children 16 and over. This would create an easier online world for parents and children to navigate. It would ensure much more transparency about the suitability of different services for different ages. This would deliver similar outcomes to the ‘age classification’ system that currently exists for other sectors, in a way that accounts for the dynamism and scale of online activity.
Some online services use end-to-end encryption. The Online Safety Act does not prohibit the use of end-to-end encryption. Services that use end-to-end encryption are already required under the Act to assess the risks their service poses to children and to take proportionate steps to mitigate those risks, including in private communications. We would expect the same principles to apply regarding the measures being consulted on in this consultation. Simply having an end-to-end encrypted service would not be an acceptable reason for a platform not to comply.
These features and functionalities might, for example, include:
- Livestreaming: NSPCC’s 2018 survey of 40,000 children aged 7–16 found that 24% had livestreamed, and 6% of those had received requests to remove or change their clothes.[footnote 19] The Internet Watch Foundation (IWF) similarly reported engaging with more than 2,000 cases in which children were believed to have been groomed or coerced into livestreaming illegal content – much of which appeared to have been recorded by offenders.[footnote 20] Some services already voluntarily prevent users they know to be under a certain age from accessing these features; for example, you must be at least 18 to host a live stream on TikTok.
- Ability to send and receive images and videos containing nudity: The creation, possession and sharing of indecent images of children is illegal under UK law. Over 92% of sextortion cases involved recording or sharing content on social media,[footnote 21] with teenage males aged 14-17 particularly at risk of this harm.[footnote 22] Many platforms already deploy technology to detect the sending and receiving of images containing suspected nudity. In practice, these tools are often used to blur or warn, rather than to block the transmission of illegal content. This detection can occur before a message is sent and does not require access to message content, including in end-to-end encrypted services. The consultation therefore seeks views on whether and how companies should be required to prevent children accessing services which allow the transmission of nude images, rather than relying on voluntary or partial measures.
- Location sharing: Many children, parents and carers find these features useful on certain services, allowing children to grow in their independence whilst still enabling parents and carers to check on their whereabouts. However, on other services they can help facilitate illegal harms, including grooming, harassment, stalking, and coercive or controlling behaviour. As information about a child’s whereabouts is personal data, services should be complying with the requirements of the UK’s data protection legislation. Under its Illegal Content Codes of Practice, Ofcom expects services that allow users to interact, and that assess themselves to be at high or medium risk of grooming, to ensure children are unable to share their location.
- Stranger-pairing: Under the Online Safety Act, some platforms are required to turn off by default functionalities that enable children to speak or connect with strangers. This includes user-to-user services (those that allow users to interact or post publicly) with a high risk of grooming on their service. Ofcom identifies stranger-pairing features as high risk due to increased exposure to harms such as grooming, harassment, pornography, suicide and self-harm content, hate, and bullying. Evidence particularly highlights risks on gaming platforms, where pairing with unknown users is often normalised as part of gameplay.[footnote 23]
- Disappearing messages: Disappearing messages are a common feature across various social media and messaging services. While intended to promote privacy and informal communication, the temporary nature of these messages can make harmful content harder to evidence or report, and may increase opportunities for coercion, grooming, or peer pressure. Ofcom’s Illegal Content Codes require services with direct messaging to take into account the additional risks posed by disappearing messages, which can reassure users that there is no permanent record of the content they are sending. Ofcom’s Children’s Codes identify disappearing messages as an additional risk factor in children sharing violent content or bullying.
This list is not exhaustive, and we welcome suggestions for other features or functionalities where the risks to children, even with mitigations, outweigh the benefits, and on the appropriate age for any restrictions.
We welcome views and evidence through this consultation and national conversation on relevant best-practice industry examples, to help us identify where voluntary company innovation may drive change across the industry.
Consultation questions (12 to 15)
12. Some online services allow their users to engage with the following functionalities. Do you think these functionalities should be age restricted so that children below a certain age cannot engage with them? (Please select all that apply)
a. Live streaming
b. Ability to send nude images or videos
c. Disappearing content
d. Location sharing
e. Connecting or talking to strangers
f. None of the above
g. Other (please specify)
h. Don’t know/ Prefer not to answer
13. Based on your previous answers, please specify your preferred minimum age for each of the functionalities below:
a. Live streaming
b. Ability to send nude images or videos
c. Disappearing content
d. Location sharing
e. Connecting or talking to strangers
f. None of the above
g. Other (please specify)
14. To what extent do you agree or disagree with the following statement: “Restricting children’s access to these features/functionalities would provide for a safer online experience for children”. Features/functionalities include live streaming, the ability to send nude images or videos, disappearing content, location sharing and connecting or talking to strangers.
a. Strongly agree
b. Somewhat agree
c. Neither agree nor disagree
d. Somewhat disagree
e. Strongly disagree
f. Don’t know/ Prefer not to answer
15. What do you think the impacts would be if some online services were required to introduce age restrictions on specific features and functionalities? For example, impacts on the safety and wellbeing of children, or the impact for parents and carers, as well as other users. You could also comment on the impact on all users’ privacy and data or on business costs, revenue, and innovation.
‘Addiction’, compulsive design and displacement
Research suggests that high levels of technology use can interfere with children’s ability to concentrate and engage effectively in school,[footnote 24] disrupt healthy sleep patterns,[footnote 25] and contribute to broader challenges around mood, behaviour and overall wellbeing.[footnote 26] Phones can be a distraction for children outside of school too, whether young people are doing homework, sitting down to family dinners or spending time with friends. Restrictions on phone use in school can be a good way of building healthy habits that apply more generally in children’s lives.
Ofcom’s Children’s Passive Online Measurement tool found that, on average, children aged 8-14 spend nearly 3 hours a day online across smartphone, tablet and computer.[footnote 27] An Ofcom report found that 99% of children aged 13-17 reported that they see at least one benefit to being online, such as helping them with school and homework or maintaining friendships.[footnote 28] The same report found that 33% of 8–17-year-olds agreed that their screentime was too high.[footnote 29] An Ofcom report found that 4 in 10 (39%) parents agreed that they find it hard to control their child’s screentime, with this rising as children reach their teenage years.[footnote 30] The National Youth Strategy summary report states that young people need support for both physical and mental health and that they have identified feeling ‘exhausted’ by the digital world.
In this section of the consultation, we ask for views on how platform design features may encourage children to stay online for longer. We are also seeking views on the impact of these on children’s wellbeing, especially where they may contribute to the displacement of good mental health, concentration, sleep and self-esteem – all factors that we know are important for a healthy, balanced childhood.
Box 2.2. Evidence around screentime and the impact on children’s mental health
The evidence around screentime and the impact on children’s mental health is mixed, and no causal link has been found. In 2025, a consortium of academics from UK universities, led by Professor Amy Orben, conducted a review of scientific research to understand the impact of smartphones and social media on children and young people.[footnote 31] The review highlighted a lack of high-quality causal evidence linking children’s mental health and wellbeing to their use of digital technologies, specifically social media, smartphones and AI chatbots. However, the report also noted that this does not mean a causal link does not exist, and highlighted that there is associative evidence between these issues. This is aligned with the findings of the UK Chief Medical Officers’ 2019 study, which also found associations between children’s screen-based activities and negative effects such as increased risk of anxiety or depression.[footnote 32]
Taken in the round, these findings highlight the importance of ensuring that children’s online experiences are balanced and do not come at the expense of their physical, cognitive or emotional development. Ofcom’s Children’s Media Lives study, a longitudinal qualitative research project that follows a group of 21 children aged 8 to 18, highlights some of the negative feelings children in the sample can experience in relation to spending extended periods of time on devices.[footnote 33] This includes difficulties paying attention and feelings of overstimulation. The UK Chief Medical Officers suggested that screen use can displace healthy activities, such as sleep.[footnote 34] Adequate sleep can enhance memory, improve school performance and boost moods. For teenagers and younger children, sleep is essential for development. If children are on social media or using screens at night, instead of sleeping, this could be having a negative impact on their development.
Technological features and mechanisms can be especially powerful in shaping young users’ behaviour, sustaining long, repetitive patterns of use. Ofcom research suggests that these include:
- Infinite scrolling: Infinite scrolling means new content keeps loading automatically as you scroll down, so the page never really ends.
- Autoplay features: This is a system feature that automatically starts the next piece of content without requiring a user action, once the current content finishes or when it appears on screen.
- Affirmation functions: These are designed to give users positive feedback, validation, or encouragement. These include likes and reactions, comments & replies, shares and reposts, follower counts and subscribers, stories and reactions, streaks, achievements and personalised feedback (e.g. “people are loving your post”).
- Alerts and push notifications: An alert is a message that pops up inside an app or website to get your attention right away. A push notification is a message sent to your phone or device by an app, even when you’re not using it, to update you or bring you back to the app.[footnote 35]
Additionally, personalised algorithms may also play a role in causing children to spend longer online. Content recommendation systems play an important role in social media, along with many other services. They are not inherently ‘addictive’ and can be used to make online experiences more positive and engaging.
However, as the experience of Molly Russell so devastatingly demonstrated, algorithms can also be a force for immense harm when they serve children the wrong kind of content, in many cases when they are not proactively seeking it out, or where they drive compulsive use. As the DSIT Select Committee noted in their recent report, ‘Social media, misinformation and harmful algorithms’, thinking about these questions through the lens of the business model can be helpful.[footnote 36] One way of addressing this might be to age restrict services that have personalised algorithms, or to address business models which are dependent on keeping children online for longer, for example by restricting the ability of services to advertise to children or age-restricting features and functionalities that can allow children to make in-service purchases, such as shops or loot-boxes. Parents who are doing their best to support their children to have a balanced relationship with technology may feel overwhelmed by the sophistication of the features and personalisation designed to keep users – including children and young people – engaged.
Not all of these functions are inherently harmful, but this section of the consultation aims to understand what more can be done to protect teenagers from the functions online platforms use to keep their attention for longer, including:
- Restricting access to specific functions for children aged 13 to 15: This would cover the features or functionalities identified by Ofcom and be kept up to date based on ongoing research. If it is not technically feasible for platforms to remove access to certain functionalities, they may instead choose to bar children under a certain age from using their platform entirely.
- Limits on the time children can spend on certain services with these features – “curfews”: Measures here would limit the time spent on social media, set daily screentime limits for individual app usage and/or include night-time curfews. This would build on measures that some services such as TikTok or Instagram have in place at a voluntary level, but make them mandatory and prevent children from ignoring or dismissing screentime limits. Instagram already allows parents to set specific hours during which their child cannot use the platform, as well as allowing parents to make daily time limits mandatory, preventing teens from dismissing these if supervision of their account is enabled.
- Nudge techniques to discourage excessive use: Platforms can voluntarily use nudge techniques to prompt children to take breaks. A recent Danish study suggested promising results – see Box 2.3.
Box 2.3. Evidence box: A 2025 study by the Danish Competition and Consumer Authority examined the reduction in time spent online across TikTok, Snapchat and Instagram for 269 teenagers aged 13-17, using ‘nudge’ techniques.[footnote 37] These techniques included forcing a 6-second pause before accessing a platform and asking children to indicate how much time they planned to be on a platform whenever they accessed it. These techniques caused a 31% to 36% drop in usage – equivalent to around an hour a day less for a child who spent 3 hours a day on social media before the study started.
Some services have voluntarily deployed approaches to limiting the time children and young people spend online. This includes YouTube, where parents and carers can control how much time their children spend on YouTube Shorts, including setting the timer to zero. Parents can also set a custom bedtime and take-a-break reminders. The consultation will consider whether these sorts of features should be more widespread across industry.
Consultation questions (16 to 20)
16. The following design features are sometimes known as ‘persuasive’, meaning they may encourage children to stay online for longer. From the following list, please select the ones you think are particularly ‘persuasive’ to children: (Please select all that apply).
a. Infinite scrolling
b. Autoplay
c. Affirmation features (e.g. ‘likes’, comments)
d. Alerts and push notifications
e. Content recommendation algorithms (these are algorithms which provide personalised recommendations on a user’s feed)
f. None of the above
g. Don’t know/ Prefer not to answer
h. Other (please specify)
17. Which of these features do you think should be age restricted? (Please select all that apply)
a. Infinite scrolling
b. Autoplay
c. Affirmation features (e.g. ‘likes’, comments)
d. Alerts and push notifications
e. Content recommendation algorithms (these are algorithms which provide personalised recommendations on a user’s feed)
f. None of the above – they should not be age restricted
g. Don’t know / Prefer not to answer
h. Other (please specify)
18. Based on your previous answers, please specify your preferred minimum age for each of the features below:
a. Infinite scrolling
b. Autoplay
c. Affirmation features (e.g. ‘likes’, comments)
d. Alerts and push notifications
e. Content recommendation algorithms (these are algorithms which provide personalised recommendations on a user’s feed)
f. None of the above – they should not be age restricted
g. Other (please specify)
19. Would you support the following restrictions for children’s access to online services? (Please select one)
a. Daily screen time limits for individual apps
b. Restricting overnight access for individual apps
c. Both – Daily screen time limits and overnight access for individual apps
d. I would not support either of them
e. Don’t know/ Prefer not to answer
20. What do you think the impacts would be if online platforms were required to restrict specific features or functionalities, or to introduce time limits? For example, impacts on the safety and wellbeing of children, or the impact for parents and carers, as well as other users. You could also comment on the impact on all users’ privacy and data or on business costs, revenue, and innovation.
What types of services should restrictions apply to?
Drawing the scope of application for any regulation will always involve trade-offs. For example, organisations such as the NSPCC and Molly Rose Foundation have raised concerns that if minimum age restrictions only apply to a small number of the most popular user-to-user services, this could risk displacing children into other online spaces, including less regulated ones.
The scope of application has been a prominent issue of debate around the introduction of restrictions in the Australian system - further explanation is in Box 2.4.
The public debate about age limits has focused on social media but there is no single definition of what this includes. The duties in the Online Safety Act apply to search services and services that allow users to post content online or to interact with each other (‘user-to-user’ services). This includes a range of websites, apps and other services including social media services, consumer file cloud storage and sharing sites, video-sharing platforms, online forums, dating services and online instant messaging services. The scope of the Online Safety Act reflects that children can experience harm across a broader range of services than we are looking at in this consultation and relies on risk assessments to identify the appropriate level of protections.
The Online Safety Act is already broad and when considering how to build these additional protections for a specific set of risks we will take a more targeted approach. Our aim is to ensure children are protected on services with the features and functions that are most likely to increase their risk of exposure to harmful content, or that have been designed in a way to keep them online for longer. This could include sites and apps that children might not view as being traditional ‘social media’, for example where they might play, create or share games. To target the right services, we will need to set a narrower definition of which ones should be captured. Our starting point is that this should be a sub-set of ‘user-to-user’ services with a number of conditions that determine whether they are in scope.
As we consider further restrictions, whether applied to whole services or to specific functions within platforms, we need to meet children where they are. We do not intend for these measures to apply to sites and platforms that are low-use or low-risk for children, for example e-commerce platforms or closed online games that already embed strong safety-by-design protections. This is why we will also be asking about which sorts of sites and platforms that fall under the Online Safety Act should be exempt from any additional new restrictions. However, we are clear that end-to-end encrypted services will not be exempted.
Of paramount importance is ensuring that the scope of any new changes is clear to parents, children, industry and the regulator. And given the rapidly evolving nature of technology and services, it will also be key to ensure that any scope is dynamic and future-proof, to allow for the inclusion of new and evolving services.
We are consulting on whether the definition should:
- align with the Australian definition
- include additional conditions based on functionality or purpose
- capture services such as some gaming sites and messaging services that might have very similar functionality and risk profiles as those sites traditionally thought of as ‘social media’
- include exemptions to avoid capturing low risk and educational services
In addition, we consider issues specific to AI chatbots and Virtual Private Networks (VPNs) later in the consultation.
Box 2.4. How the Australian ban works and who is in scope
Since 10 December 2025, age-restricted social media platforms have been required to take ‘reasonable steps’ to prevent Australians under 16 from having accounts on their service. The restrictions apply to having an account, and Australians under 16 are not prevented from accessing content on these services in a logged-out state, where this is possible.
The Australian regulator, the eSafety Commissioner (eSafety), monitors compliance with the social media minimum age. Age-restricted social media platforms may face penalties of up to $49.5 million AUD if they don’t comply.
Generally, age restrictions apply to social media platforms that meet 4 specific conditions. This is when:
- the sole purpose, or a significant purpose, of the service is to enable online social interaction between 2 or more end-users
- the service allows end-users to link to, or interact with, other end-users
- the service allows end-users to post material on the service, and
- the service is not an excluded service under legislative rules.
Platforms that have the sole or primary purpose of enabling messaging or online gaming are among a number of types of services that have been excluded under the legislative rules.
To promote compliance with the social media minimum age, eSafety has recommended that services self-assess whether they meet the conditions. eSafety has published a list of 10 services it has formally assessed to be age-restricted social media platforms, and 10 others considered to not be in scope. Given the number of potential services in scope, eSafety has not published an exhaustive list of all services which are in scope or exempt. Ultimately, any determination of whether a service is or is not an age-restricted social media platform is a matter for the Australian courts.
Social media platforms need to use age assurance methods to comply with Australia’s social media minimum age obligation. eSafety issued regulatory guidance to help platforms understand what ‘reasonable steps’ may constitute for their service. This guidance is principles-based, rather than prescribing specific types of age assurance to be employed. However, the guidance is clear that self-attestation (also known as ‘age-gating’) will not meet the legal threshold, nor will age assurance methods that rely on disproportionate amounts of information being submitted by users.
Consultation questions (21 to 25)
21. What factors are important when determining which apps, sites or services to apply minimum age of access restrictions to? For example, user-to-user interaction, the ability to post material, persuasive design features, risky functionalities, the ability to generate non-text media such as video or images, the target age group, the size of the service.
22. Are there any types of apps, sites or services that you would want to be captured by minimum age of access restrictions?
23. What factors are important when determining which apps, sites or services should have age restrictions applied to specific features and functionalities? For example, user-to-user interaction, the ability to post material, persuasive design features, risky functionalities, the ability to generate non-text media such as video or images, the target age group, the size of the service.
24. Are there any types of apps, sites or services that you want to be captured by age restrictions on features and functionalities?
25. Some services are already exempt from the Online Safety Act. Examples include internal business services, services with limited functionalities and services provided by persons providing education or childcare. Are there additional types of service which you think would be appropriate to exempt from age restrictions? These might include services whose primary purpose is delivery of educational content, services that offer specific child or teen accounts or versions, or services which offer parental controls.
Chatbots and AI
Generative AI offers opportunities to improve children’s lives and their education by supporting personalised learning, helping every child develop the knowledge and skills they need and reducing administrative burdens for teachers. AI can also help to drive children’s creativity and imagination, for example helping to co-create stories or inventing characters or worlds. Children might use them as advanced search engines, researching hobbies or skills, particularly on niche subjects.
Box 2.5: The Department for Education recently announced a programme to develop AI tutoring tools to help drive learning and support the most disadvantaged. New Product Safety Standards for education set clear expectations for safe design, including: no anthropomorphisation (such as first-person or emotion-simulating language); no manipulative or persuasive design; no dark patterns; and robust mental health and safeguarding protocols that avoid language encouraging isolation or secrecy. These protections aim to ensure children’s interactions with AI are healthy, transparent and developmentally appropriate.
At the same time, evidence on the risks and challenges of children using generative AI is still emerging. It is essential to ensure that AI tools are safe, reliable and appropriate for children.
There is growing evidence that UK children are already using AI in their daily lives. Nominet polling in 2026 found that 58% of young people (aged 8-17) say that AI makes their lives better, 12% disagree, and almost half (48%) say AI is an important part of their everyday life.[footnote 38] Barry Laker, head of the Childline service at the NSPCC, said: “AI is becoming a regular part of children’s online experiences. When used safely and responsibly, it can offer opportunities, but it also brings risks, especially if children aren’t sure what’s real, or how AI works”. An event jointly hosted by DSIT and the NSPCC in April will explore further, including with children, the impact of AI on childhood and how it can be harnessed as a positive force.
An AI chatbot is an artificial intelligence system with a natural language interface that can replicate human-like conversations with users. They can generate images, videos, text and audio in response to ‘prompts’ from users in real time. They may also remember context and personalise responses to individual users. These services can draw from a variety of datasets including live search results and large language models. AI chatbots can be standalone services or embedded within another service.
As chatbots become more embedded in children’s lives, it is vital to understand the risks they pose and ensure children’s interactions with them remain healthy and developmentally appropriate. Internet Matters recently found that 64% of 9–17-year-olds use AI chatbots. Of those, 23% have turned to chatbots for advice, and 1 in 8 (12%) say they use them because they have no one else to talk to.[footnote 39]
The Online Safety Act provides protection for children from harmful or illegal content on chatbots. Chatbots are in scope of the Act if they either (a) enable user-to-user sharing or (b) search the live internet. Some AI chatbots only draw responses from an underlying model and don’t have sharing functionalities – these are out of scope of the Online Safety Act, except where they are able to generate pornographic content; in this case they would have to comply with the duties to use highly effective age assurance to secure that children are not normally able to access it.
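The scope rule described in the paragraph above can be sketched as a simple decision function. This is an illustrative sketch only: the function name and boolean inputs are hypothetical simplifications, and the statutory tests under the Act are considerably more nuanced than three yes/no flags.

```python
def osa_chatbot_scope(shares_user_content: bool,
                      searches_live_internet: bool,
                      can_generate_pornography: bool) -> str:
    """Sketch of the chatbot scope rule described above.

    A chatbot is in scope of the Online Safety Act if it enables
    user-to-user sharing or searches the live internet. A chatbot
    with neither feature is out of scope, unless it can generate
    pornographic content, in which case age assurance duties apply.
    All names here are hypothetical; the real legal tests are more
    nuanced than these booleans.
    """
    if shares_user_content or searches_live_internet:
        return "in scope of the Online Safety Act"
    if can_generate_pornography:
        return "duty to use highly effective age assurance"
    return "out of scope"
```

On this simplified reading, a model-only chatbot with no sharing features would fall entirely outside the Act unless it can generate pornographic content, which is the gap the new powers announced in February 2026 are intended to close.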
The Secretary of State has been clear that she would take further action, including to protect children, where it is necessary. As announced by the Prime Minister on 16 February 2026, the government will introduce new powers so that chatbots not currently in scope of the Online Safety Act can be made subject to duties to protect all users from illegal content. This will ensure the Act keeps up to date with rapidly evolving harms.
But some risks stem from interaction with features rather than content. For example, emotional dependence is an emerging harm related to chatbots, and one which children may be especially susceptible to. Research on US users suggests it is a significant and worsening risk: long sessions and persistent, anthropomorphic design can lead children to form parasocial attachments with chatbots, sometimes treating them as friends or intimate partners.[footnote 40]
Concerns and reports of children coming to serious harm have prompted some services to restrict functionality for children.
We want to understand whether the mitigation measures for children proposed in this chapter (for example minimum ages, restricting features and functionalities, or time limits) may be appropriate to apply to certain chatbots and, if so, how this subset of services is best defined.
Consultation questions (26 to 30)
26. What are the benefits to children of using AI chatbots? For example, this might include as a search function, for educational purposes, for creativity.
27. Which AI chatbot features are most risky for children? (Please select all that apply)
a. The realism of interactions, including realism of content generated
b. The personalisation of interactions
c. How they mimic relationships (friendship)
d. How they mimic relationships (romantic)
e. How they mimic empathy
f. Flattering language
g. Features to encourage more questions/ requests (e.g. asking questions back)
h. The ability to recall interactions across sessions
i. The type of content generated – a) video, b) text, c) audio, d) image
j. Allowing children to have accounts
k. Hallucination or false, misleading responses
l. Ability to engage in and generate mature content (e.g. sexual / romantic roleplay)
m. Other (please specify)
n. None of the above/AI chatbot features are not risky for children
o. Don’t know/ Prefer not to answer
28. Which functionalities of AI chatbots should minimum age restrictions apply to?
29. Should AI chatbots have minimum age restrictions?
a. Yes – minimum age requirements for AI chatbots
b. Yes – restrict access to certain features and functionalities
c. Yes – both minimum age requirements and restricting access to certain features and functionalities
d. No
e. Don’t know/ Prefer not to answer
Please explain the reasoning behind your answer
30. What do you think the impact would be of introducing age restrictions on AI chatbots or certain features and functions? For example, impacts on the safety and wellbeing of children, or the impact for parents and carers, as well as other users. You could also comment on the impact on all users’ privacy and data or on business costs, revenue, and innovation.
Chapter 3: Effective compliance and enforcement of online safety rules
Rules will only work if they are clear and properly enforced. We want to ensure that parents and carers have clear information about how enforcement measures work, helping them feel confident that the rules designed to protect children online are meaningful and robust.
Any approach that sets different online protections for children needs a reliable way to tell whether a user is a child. Age assurance methods must be accurate, trustworthy, easy to use and hard to bypass. This matters for the Online Safety Act and the UK’s data protection legislation, and for any of the new options for regulation discussed in the previous chapter. Any new approaches to enforcement will build on and complement existing approaches to keeping children safe online, including existing safety-by-design duties and the Protection of Children Codes of Practice. Improving enforcement technologies, including for age assurance, should build on the Online Safety Act’s requirements in this area, rather than be viewed as an alternative.
The ICO plays an active role in enforcing against platforms that set minimum age requirements but fail to prevent children under that age from accessing them, and are therefore processing children’s data without a lawful basis. This includes fines of over £14 million against one major online service. The ICO is also monitoring the use of self-declaration as the primary method of age assurance on services that have a minimum age of 13, and promoting more robust methods, as part of its wider Children’s Strategy.
This chapter examines the current state of play in age assurance and seeks views on improvements that can be made, especially those which might be necessary for any regime based on a specific age. It also poses questions about what types of age assurance technology are likely to be appropriate for enforcing different types of age restriction. Finally, it looks at enforcement of any new regulatory requirements, and at the guidance to schools on the use of mobile phones.
Improving age assurance
Alongside putting the right rules in place, we must make sure they are enforced effectively. Where protections rely on age limits, services need the ability to check a user’s age.
While highly effective age assurance technologies provide the most comprehensive solution to checking users’ age, they also place demands on users. In the Online Safety Act, we have judged that this is appropriate for preventing children from accessing the most unsuitable platforms or content, like pornography and suicide content. It will be important to ensure that any age assurance necessary for future measures government takes is proportionate and effective. If we were to introduce age-graduated restrictions on different functionalities, other forms of age assurance may strike the best balance between user-friendliness, effectiveness, proportionality and privacy. The implications of a 13-year-old having screentime limits designed for 15-year-olds are different to the implications of a 13-year-old accessing pornography sites, for instance.
We want to know how government can best support adoption of age assurance technologies, including making them simpler for users, and ensuring families feel confident and safe with their children using them. This could include improving how age checks work across different services, for example making them more interoperable, or enabling device-level options, so that protections are simpler to apply and easier for parents and carers to trust.
Age assurance is already used at scale in the UK. Ofcom estimates that 7.8 million UK visitors per day access adult services with age assurance.[footnote 41] Current techniques are generally effective at distinguishing adults from under-18s.
Box 3.1: What is age assurance? Age assurance refers to both age verification and age estimation, as defined in section 230 of the Online Safety Act. For these purposes, self-declaration of age is not considered to be a form of age assurance.
There are challenges to further age assurance adoption in the UK. Fewer solutions are currently available to distinguish a 14-year-old from a 16-year-old, although performance is improving. This consultation aims to establish how to ensure the technologies continue to develop and become more accurate.
This issue was considered in depth by Australia’s year-long Age Assurance Technology Trial, conducted by a UK company. The high-level results are set out in Box 3.2.
Box 3.2: The Australian Government funded UK-based company Age Check Certification Scheme (ACCS) to conduct the Age Assurance Technology Trial, to examine options to protect children from harm online, including on social media and from age-restricted content such as pornography. The headline finding from the trial is that, while there is no one-size-fits-all solution, age assurance can be done in Australia privately, efficiently and effectively. However, the trial also indicated some important nuance when using age assurance for younger users. For instance, age verification can be challenging for children, due to limited access to hard identity documents. Additionally, the report notes that the facial age estimation technologies that were tested are less accurate for younger users, and in particular have higher levels of uncertainty when determining the age of users close to the 16+ or 18+ age threshold. These findings highlight the circumstances where a more layered approach to age assurance might be required. That is, where a user may be asked to take additional steps to check their age – escalating the amount or type of information needed only when prior methods are not sufficient or are inconclusive.
There are also privacy concerns, particularly where some methods involve children sharing personal data. This is because users often must verify their age with multiple online services, via different third-party age assurance providers, as they move across the internet. Trust in the privacy practices of age assurance technologies is vital for ensuring people feel safe using these measures.
Finally, there is an issue of proportionality to consider. One way to achieve the strongest possible approach to any new age-linked restrictions would be to require every existing UK social media user to verify their age online, for example if we were to enact a ban on children from all social media. This was not the route taken in Australia – platforms are able to use a range of age assurance measures, such as age verification, estimation and inference through user data points. The Australian eSafety Commissioner’s regulatory guidance sets out the principles platforms should follow to prevent age-restricted users from having an account. The guidance states that the steps taken should be reliable, accurate, robust and effective; privacy-preserving and data-minimising; accessible, inclusive and fair; transparent; proportionate; evidence-based; and responsive to emerging technology and risk.
The eSafety guidance also encourages platforms to use ‘successive validation’, or a layered approach to age assurance. This means using multiple age assurance methods in succession to provide more certainty over the age of a user by building a picture of the user’s age from a variety of different inputs. For instance, a social media platform could first use facial age estimation to assess the age of a user. If this comes back with a low confidence score for the user’s age, the platform could then offer an age verification method for the user, such as photo-ID matching. This use of successive validation recognises that no single age assurance method works perfectly for all users, in all contexts.
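The escalation logic of successive validation can be sketched as follows. This is a minimal sketch under assumed interfaces: the `AgeCheck` callable, the confidence scores and the 0.9 threshold are all hypothetical illustrations, not part of eSafety's guidance.

```python
from typing import Callable, Optional

# Hypothetical interface: each method returns (over_threshold, confidence)
# where confidence is in [0, 1], or None if it could not produce a result.
AgeCheck = Callable[[], Optional[tuple[bool, float]]]

def successive_validation(methods: list[AgeCheck],
                          confidence_threshold: float = 0.9) -> bool:
    """Try age assurance methods in order, escalating to the next
    method only when the previous result is missing or below the
    confidence threshold. Real systems weigh many more signals."""
    for check in methods:
        result = check()
        if result is None:
            continue  # inconclusive: escalate to the next method
        over_threshold, confidence = result
        if confidence >= confidence_threshold:
            return over_threshold
        # low confidence: fall through and escalate
    # No method was sufficiently confident: fail safe by treating
    # the user as under the age threshold.
    return False
```

In the example from the guidance, facial age estimation returning a low confidence score would fall through to a photo-ID matching step, so only users near the threshold are asked for more information.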
With regards to age assurance specifically, Ofcom’s guidance on highly effective age assurance includes the principle of robustness and requires services to take appropriate steps to mitigate against methods of circumvention that are easily accessible to children. Ofcom also has a statutory duty to publish a report on how age assurance is being used to comply with the Online Safety Act by July 2026. This is likely to provide an opportunity to assess the effectiveness of age assurance methods and any circumvention techniques or solutions that threaten to reduce their effectiveness.
We welcome views through this consultation and wider national conversation, on where international approaches set appropriate standards to follow.
Consultation questions (31 to 35)
31. To what extent do you agree with this statement: “Adults should complete age checks more often, if it means children are safer online”?
a. Strongly agree
b. Somewhat agree
c. Neither agree nor disagree
d. Somewhat disagree
e. Strongly disagree
f. Don’t know/ Prefer not to answer
32. What should be considered to make minimum age restrictions effective and workable? This could mean either age restrictions for access to whole services, or for specific risky or ‘addictive’ features or functionalities.
33. What do you think the impacts might be from requiring age assurance across a greater number of online platforms? For example, impacts on the safety and wellbeing of children, or the impact for parents and carers, as well as other users. You could also comment on the impact on all users’ privacy and data or on business costs, revenue, and innovation.
34. How, if at all, could age assurance be made more effective?
35. What should be considered when assessing the effectiveness of age-verification and age-assurance technologies?
Circumvention of age limits
In shaping our new approach, we must consider how to deal with circumvention methods. Many children are technologically savvy and are likely to be aware of possible technical and non-technical ways to bypass new rules or restrictions. We need to find solutions that are robust. At the same time, our approach must be proportionate, and it should not inadvertently restrict children’s access to beneficial online content, nor restrict the legitimate and lawful use by adults of tools such as Virtual Private Networks (VPNs), or similar private network technologies.
Box 3.3: What are VPNs? VPNs are tools that hide your internet activity and location by creating a secure private connection between your device and the internet. They are used for legitimate purposes, such as for remote work and business access, protecting sensitive communications and protecting privacy.
VPN usage more than doubled in the UK following highly effective age assurance requirements becoming mandatory, rising from about 650,000 daily users before 25 July 2025 to a peak of over 1.4 million users in mid-August 2025. Usage has since gradually declined, to around 800,000 users by the end of 2025.[footnote 42]
Due to the nature of VPNs, it is impossible to know how many of these users were children. However, early evidence does not suggest the peak in VPN use was driven by children seeking to bypass age assurance processes.
There are many reasons that children may wish to use a VPN, but the main reason, as with adults, is to access the privacy and data protection benefits that VPNs offer. For example, VPNs can offer enhanced data protection when connected to public WiFi and can add an extra level of security when accessing online banking.
Box 3.4: Internet Matters found that 8% of children had used a VPN in the past twelve months, and 66% of those who did used it to protect their personal data.[footnote 43] 38% of child VPN users surveyed by Childnet reported using one to stay safe online and protect their privacy.[footnote 44]
There are other, non-technical ways that children may be able to circumvent online safety rules, for example using someone else’s account, or switching or sharing devices with siblings and adults.
While it must be acknowledged that no approach is a silver bullet – no regulatory rule will stop a sibling sharing a device – this consultation is seeking views on how to ensure that existing and new online safety provisions for children are as strong as they can be, with a view to reducing circumvention rates. This is why we are asking about how children use VPNs, as well as how else age assurance can be circumvented, and what options exist to identify and prevent it. We also ask about the broader implications were government to restrict the use of VPNs for children.
Consultation questions (36 to 40)
36. What methods to circumvent online safety rules do you think children in the UK use, beyond Virtual Private Networks (VPNs), or similar technologies?
37. Which of the options below do you think the government should prioritise to reduce circumvention of online safety rules in the UK? (Please select the most important one to you)
a. More education for children
b. Restricting children’s access to VPNs
c. None of the above
d. Other (please specify)
e. Don’t know/ Prefer not to answer
38. To what extent do you agree or disagree with the following statement: “Everyone should go through age checks to access a VPN if it would prevent children using them”
a. Strongly agree
b. Somewhat agree
c. Neither agree nor disagree
d. Somewhat disagree
e. Strongly disagree
f. Don’t know/ Prefer not to answer
39. What do you think the impacts would be if VPNs were age-restricted? For example, impacts on the safety and wellbeing of children, or the impact for parents and carers, as well as other users. You could also comment on the impact on all users’ privacy and data or on business costs, revenue, and innovation.
40. What should be considered to make age-restricting VPNs effective and workable? For example, public trust and engagement with increased age assurance requirements, accessibility of age assurance methods and variations of age assurance approaches across services, interaction with legitimate uses of VPNs.
Enforcement of mobile phone policies in schools
There is increasing evidence that mobile phones impact the school environment by contributing to distraction and disruption in lessons.[footnote 45] Simply having a mobile phone can draw pupils’ attention away from learning, and, once distracted, pupils can take up to 20 minutes to re-focus.[footnote 46] Schools that restrict or ban mobile phone use often report a range of benefits, including improved concentration, reduced opportunities for bullying – both online and offline – and, in some cases, modest gains in academic performance.[footnote 47]
However, whilst the vast majority of schools have policies on mobile phones, phones are still causing disruption in lessons. The Children’s Commissioner found that 99.8% of primary schools and 90% of secondary schools had a mobile phone policy in place that limited pupils’ use of their mobile phone during the school day.[footnote 48] Despite this, Department for Education data shows that 58% of secondary school pupils reported that mobile devices were used when they were not supposed to in at least some lessons.[footnote 49] NASUWT data shows that over 50% of secondary school teachers cite distraction from mobile phones (e.g. texting, cameras) as one of their most significant day-to-day behavioural concerns.[footnote 50]
To tackle this, we have taken action to make sure all schools are mobile phone-free environments by default and that all schools can enforce their policy. We have published strengthened guidance which is clear that all schools should be mobile phone-free environments. This means no mobile phones throughout the school day, including during lessons, the time between lessons, breaktimes and lunchtime. From April, Ofsted will also examine both schools’ mobile phone policies and how effectively they are implemented when judging behaviour during inspections. In addition, schools that need help can get one-to-one support from the Department for Education’s Attendance and Behaviour Hub lead schools, which are spread across all regions of the country and are already effectively implementing mobile phone-free approaches.
We are seeking views on whether we need to go further to support schools to be mobile phone-free, including whether we should introduce a new statutory obligation on schools to have regard to guidance on mobile phones in schools. This would mean schools have a legal duty to follow the guidance (unless they have good reasons not to).
Headteachers would continue to decide how they want to make their school mobile-phone free. This would continue to allow flexibility for children who may have additional needs, for example specific medical conditions, while also providing a new legal position for schools, parents and carers.
Consultation questions (41 to 43)
41. To what extent do you agree or disagree with the following statement: “To address some of the challenges schools face with mobile phones, the Department for Education’s (DfE) non-statutory guidance on ‘mobile phones in schools’ should be made statutory.”
This would mean schools have a legal duty to follow the guidance, which explains to individual schools and trusts how to implement a policy that prohibits the use of mobile phones throughout the school day, unless they have good reasons not to. This includes during lessons, the time between lessons, breaktimes and lunchtime.
a. Strongly agree
b. Somewhat agree
c. Neither agree nor disagree
d. Somewhat disagree
e. Strongly disagree
f. Don’t know/prefer not to answer
42. What impacts would there be if this guidance was made statutory and why? For example, on disruption in lessons, bullying or harassment, parental views on mobile phone policies, staff, etc.
43. Are there specific circumstances where you think children should be permitted to have or use a mobile phone during the school day? (Please select all that apply)
a. Medical needs
b. Special Educational Needs and Disabilities (SEND) requirements
c. Individual safeguarding concerns
d. Caring responsibilities
e. Educational or learning purposes
f. Travel to and from school
g. None of the above, children should not be permitted to have or use a mobile phone during the school day at all
h. None of the above, children should always be permitted to have or use a mobile phone during the school day
i. Don’t know/ Prefer not to answer
j. Other (please specify)
Chapter 4: Preparing children for a digital future and enriching their online experiences
As the National Youth Strategy sets out, today’s young people are the first generation to grow up fully in a digital world, which shapes every aspect of their lives.
A good childhood in a digital age must be one that includes children having positive, enriching experiences – online and offline – and one that prepares them to make the most of the opportunities of the digital future.
This chapter explores how children can best be supported to develop essential digital and media literacy skills, helping them to navigate digital spaces confidently and to build healthy habits. This will be especially critical to accompany any of the changes explored in Chapter 2, to ensure that no child faces a ‘cliff edge’ or is left unprepared to handle the risks and challenges, either when they do join social media or when they are adults.
We also explore what more can be done to ensure that content which is available to children is high quality and supports their growth and development.
Building essential digital skills and media literacy
Children use digital spaces to learn, socialise and explore the world. Good media and digital literacy help keep young people safe and help them make sense of what they see. Teaching children early how to use the internet safely and confidently is essential to ensure they are informed and can enjoy the positive parts of being online: helping them make good choices, think carefully about information, and use technology in positive and creative ways. These skills will also help them in school and in future employment.
Box 4.1: Media literacy is about understanding, questioning, and making sense of the content you see online. It helps children tell the difference between fact and opinion, check sources and assess their trustworthiness, and recognise how online content can affect thoughts, feelings and behaviour.
Digital literacy means having the practical skills to use devices and online services safely, confidently and independently. This includes knowing how to set up and use devices, recognising scams, protecting personal information, and managing everyday digital tasks.
Children need different levels of support at different stages of their development. They also need support at the appropriate time, ensuring they are equipped to navigate and respond appropriately to risks before they are exposed to them. Key moments include children receiving their first mobile phone, typically now by age 11, as well as the age at which they join social media. If minimum ages are to be introduced for social media or services with certain functionalities, this must be factored into the timing and nature of support that children are given.
As set out in Box 4.1, digital and media literacy covers a broad and diverse set of competencies. For the purposes of improving children’s experiences online, our focus is on helping them develop the skills needed to recognise and respond appropriately to illegal and harmful content, as well as misleading and divisive material. In addition, we are prioritising how best to equip children to manage and resist excessive usage of online platforms for entertainment purposes. Each of these issues will need a specific approach tailored for age and content.
Promoting and improving digital and media literacy is a complex, long-term endeavour that requires a whole-of-society and whole-of-government effort. Our approach brings together the full range of levers across central departments, local authorities, regulators, and public institutions. While each organisation (whether government departments, councils, Ofcom, schools, universities, libraries, charities, news organisations or online platforms) has a distinct role, we believe their impact is greatest when they work in partnership, as illustrated by the examples below. By coordinating insights and actions across all partners, government can deliver stronger, more consistent, and more resilient media literacy outcomes. At its heart, this approach is based on the following principles: supporting people where they are, including through schools, in communities and online; making use of the full range of levers, partnerships and delivery channels; and sharing and using what works across government, industry and regulators.
This section explores work already underway and seeks views on what more can be done to ensure that we help equip children with the skills and resilience they need to thrive in a digital world.
Ofcom’s media literacy strategy is anchored in its duty under the Communications Act 2003 to promote media literacy. The Online Safety Act 2023 (OSA) introduced new media literacy objectives, including a focus on people affected by harm (such as women and girls), protection of personal data, content of democratic importance, and misinformation and disinformation. Ofcom’s strategy addresses those requirements, recognises that a large number of actors are already working in media literacy, and maps out how Ofcom will work with them to understand and promote “what works” through research and evaluation. Ofcom also works with platforms to influence service design, and in communities to develop media literacy skills among priority groups in areas of financial disadvantage, including in over 150 primary and secondary schools in the UK.
In addition, under the Online Safety Act, Ofcom is required to publish a statement recommending ways in which services might develop, pursue and evaluate activities or initiatives relevant to media literacy. The Statement of Recommendations is designed to help a broad range of services (including online services and public service broadcasters) to take active steps to empower the public by giving them the skills and information needed to critically and safely engage with the content they see.
Supporting children at school
Schools have a critical role to play. The government has already updated the Relationships, Sex and Health Education guidance so pupils learn about online risks, including online misogyny, deepfakes and artificial intelligence, as well as how to identify and challenge misinformation. We will build on this by adding more applied knowledge and skills into the national curriculum, including digital and media literacy. We will consult later this year on changes to the national curriculum, and we are committed to working with teachers, pupils, parents, carers and experts to decide the best way to support effective teaching of media and digital literacy.
Government programmes also support teachers and pupils directly. These include the Tech Youth programme, which will help one million secondary pupils learn about technology, including AI, and explore future careers; support for the National Centre for Computing Education, boosting teacher confidence on digital and AI literacy; work with Connect Futures to create media literacy training for teachers related to counter-extremism; and previous funding to the National Literacy Trust’s Empower programme, which helps pupils engage confidently with news sources. The Department for Education published online support materials for educators and leaders in 2025 on the safe and effective use of AI, and these are currently being updated to include support staff.
This support to schools and teachers will continue to evolve. The upcoming Media Literacy Action Plan is discussed in more detail below. One of its key priorities will be strengthening the support available for children through schools, including helping teachers feel more confident in teaching students how to spot false information, understand how social media and AI work, and stay safe online. Government is also developing a digital skills pathway for educators to support the safe integration and use of technology at school.
Supporting children through community and other settings
While schools play an important role, children also need chances to build these skills outside the classroom. Community settings such as libraries and youth clubs can provide safe and familiar places for children to learn and practise these skills.
Between 2021 and 2024 the government provided almost £2 million in grant funding to community-based and third-sector organisations delivering media literacy projects. This investment has supported a range of initiatives in community settings focused on helping participants to build stronger critical-thinking skills, identify reliable information and understand how online content can affect their wellbeing.
In recent years, 13 organisations have been commissioned to deliver media literacy projects in community and local settings. These projects focused on people who were either disengaged from media literacy support or disproportionately vulnerable to online harm. These programmes are designed to improve participants’ critical-thinking skills, build resilience to mis- and disinformation, and help them understand the impact of online content on their wellbeing, with all projects subject to robust evaluation requirements.
This work, delivered with support from expert partners, has shown that fun, creative and hands-on activities, such as games and practical tasks, work particularly well, especially when children help design them.[footnote 51]
We know that equipping children with high quality and cross-disciplinary digital education is crucial to ensuring that they are flourishing online and learning the skills they will need in tomorrow’s digital economy. This is why we are asking what more children need from education both in-school and in other settings. We want to know about what children should be learning and how best to teach them.
Industry and civil society initiatives
Industry also has an important role to play, and several social media sites and technology platforms have developed media literacy initiatives. These have significant potential when content is developed in the right way, with expert input, and disseminated to the right audiences.
Box 4.2. Working with Barnardo’s and Parentzone, Google’s Be Internet Legends programme aims to empower younger children to use the web safely and wisely. The programme includes an online game that teaches children key lessons of internet safety through 4 fun challenges, a guide for families, a curriculum for teachers and a digital wellbeing pack to support educators. Primary schools can book internet safety and cyberbullying workshops delivered by Barnardo’s.
By November 2024, Be Internet Legends had trained its 10 millionth UK primary-aged pupil. That’s almost 70% of a generation reached with essential online safety and media literacy skills since 2018.[footnote 52] It is also complemented by Be Internet Citizens for older teens.
Civil society organisations also reach large and diverse audiences, of both children and parents and carers: Internet Matters provides vulnerability-specific digital safety guidance for children with SEND and mental health needs, care-experienced children and those with communication difficulties through its Inclusive Digital Safety hub. Backed by DSIT funding, Parentzone delivers nationwide programmes building family digital skills, for example ‘Everyday Digital’ which reached over 63,000 parents and showed a 45% improvement in media literacy understanding.[footnote 53] The NSPCC strengthens the picture further by providing trusted safety advice and training to schools, parents, and professionals, acting as a major national source of help for families.
Campaigns can have significant reach when sectors work together. The DISMISS campaign, created by Shout Out UK and supported by Ofcom and the Electoral Commission, reached more than 9 million young people during the 2024 General Election,[footnote 54] and global organisations such as Meta and UNESCO run international campaigns that help young people think more carefully about what they see online. Ofcom is also undertaking a project to support young people, particularly first-time voters, to understand how to identify accurate information in advance of the elections in Scotland and Wales in 2026.
Supporting adults, including parents, carers and families
Parents, carers and families play a key role in helping children learn and it is vital that they feel confident to support and empower their children online.
We know that many adults want to support children’s online safety but often feel unsure, overwhelmed, or under‑equipped. Adults who support children need both practical tools and clear guidance to fulfil their role effectively. Government has an important enabling role here. Building on existing work, we want all adults with caring responsibilities to have easy access to clear, practical support.
That’s why we are developing a Media Literacy Action Plan that empowers parents, carers and families. This plan will bring together work across government to give adults clearer, more coordinated support, helping them build the skills and confidence to spot false information, strengthen digital literacy, and ensure the children in their care can participate safely online. Informed by expert advice, it will set out how the whole of government will work together to improve children’s online safety.
As part of the Action Plan, we will create a single, trusted hub of practical advice, backed by a national campaign. We will also boost local projects that help families who may be at greater risk, improve access to reliable information on topics like health, and work with platforms, public services and local organisations so families receive clear, consistent support wherever they seek help.
Through this consultation we want to know about the sort of support that parents and carers need to equip them for conversations with their children about staying safe online. This includes asking where parents and carers already go for help and the types of content that they find particularly challenging to discuss.
Consultation questions (44 to 48)
44. Which areas of media or digital literacy do children and families most need additional help with? (Please select all that apply)
a. Managing screen time and online habits
b. Spotting adverts, sponsored posts or AI generated content
c. Keeping personal information private
d. Online behaviour and experiences (bullying, respect, comparison or peer pressure)
e. Checking if information is true
f. Understanding how social media works (for example, ‘likes’ or algorithms)
g. Staying safe online (including how to have conversations about online safety)
h. Reporting harmful or upsetting content
i. Knowing which apps or sites are right for their age
j. None of the above
k. Don’t know/ Prefer not to answer
l. Other (please specify)
45. If you are responding as a private individual, where do you go for help with online safety or media literacy skills? By online safety or media literacy skills, we mean things like staying safe online, understanding digital content and using the internet confidently and responsibly. (Please select all that apply)
a. Schools or childcare settings
b. Community or youth spaces (for example libraries, youth clubs or local charities)
c. Parent or carer groups or networks
d. Public services (such as family hubs, GP surgeries or community centres)
e. Faith or cultural groups (including places of worship)
f. Non-governmental online sources (such as websites, platforms or online communities)
g. Government websites
h. Tools and resources on online platforms
i. None of the above/I haven’t used any of these to find help
j. Don’t know/ Prefer not to answer
k. Other (please specify)
What made these places helpful? Please share any programmes, resources or activities that you have found useful.
46. Where, if anywhere, would you like to see more support available in the future? This could include places you already use that don’t currently offer support but you would like them to, or places that could offer more support with help from government or others. (Please select all that apply)
a. Schools or childcare settings
b. Community or youth spaces (for example libraries, youth clubs or local charities)
c. Parent or carer groups or networks
d. Public services (such as family hubs, GP surgeries or community centres)
e. Faith or cultural groups
f. Non-governmental online sources (such as websites, platforms or online communities)
g. Government websites
h. None of the above/I would not use these to find help
i. Don’t know/ Prefer not to answer
j. Other (please specify)
47. Outside of schools, how could the UK government better support children and young people to stay safe and feel supported online? (Please select all that apply)
a. By providing clear guidance that children can use on their own
b. By supporting parents and carers to support children online
c. By working with platforms and services that children already use
d. By supporting youth organisations and community groups to help children online
e. By making help or advice easy to access when something goes wrong online
f. By involving children and young people in designing support
g. None of the above
h. Don’t know/ Prefer not to answer
48. What types of support would help children with additional needs stay safe online and build digital skills? By ‘additional needs’, we mean children who may need extra support for a range of reasons (such as learning, communication, health or access needs). (Please select all that apply)
a. Clear, simple information using plain language
b. Content adapted for different ages, abilities or needs
c. Visual, audio or interactive formats
d. Support delivered through trusted local or community services
e. Flexible or on-demand support that can be accessed when needed
f. Support that helps parents or carers guide children online
g. None of the above
h. Don’t know/ Prefer not to answer
i. Other (please specify)
Promoting access to high quality content
We know that access to high quality content can have positive impacts on children’s learning and development. High quality children’s TV and video content is an example of this. Whether through innovative storytelling that helps ignite children’s imaginations, or educational programming that helps them learn about the world around them, children can find enormous value in such content.
As in other areas, access to internet services has rapidly expanded the range of content available for viewing by children and parents. Video-on-demand services like iPlayer and Disney+ offer dedicated children’s libraries, while video-sharing platforms such as YouTube typically offer a wide range of user-generated content designed to appeal to children, including cartoons, music videos and educational programmes. They also enable children to access broader content including news, documentaries and drama, both on handheld devices and via traditional TV sets. The largest video-sharing platform, YouTube, is now the most watched service for children aged 4-15, accounting for 28% of video viewing for this age group.[footnote 55]
While video-sharing platforms were traditionally considered alternatives to broadcast television, they now also play an increasingly important role in the digital strategies of UK broadcasters, including our public service media providers – from uploading short-form, promotional videos to entire episodes and creating digital-first content. For example, the BBC recently announced its intention to expand its YouTube offering to bring trusted public service content to more young audiences.
Nevertheless, given that more children are now watching more content online and less on traditional formats such as broadcast television, there is a concern that children are less likely to see educational programmes or content that may help with emotional and mental wellbeing development. Ofcom’s recent Public Service Media Review highlighted the impact of video-sharing platforms on the UK’s media ecosystem and made several recommendations, including that the government should consider whether action is required to ensure that public service media content (including children’s content) is sufficiently prominent on services like YouTube.
To this end, the government has urged video-sharing platforms and public service broadcasters to increase the visibility of suitable children’s material. The Department for Culture, Media and Sport has set out the government’s commitment to supporting high quality children’s TV and video content that informs, educates and entertains. Written evidence to the Culture, Media and Sport Committee Inquiry on children’s TV and video content notes that in a rapidly-evolving media landscape, it is vital that all parts of the sector work together to meet audiences where they are, ensure high quality programming is easily accessible, and continue to create content that informs, educates, and entertains generations to come. This includes considering what more can be done to ensure younger audiences can easily find such content on video-sharing platforms.
Box 4.3. International example: The EU strategy for a Better Internet for Kids (BIK+) has developed best practice guidance on Positive Online Content for Children. This guidance defines positive content online as “content that gives young internet users access to high quality online experiences which can assist and empower them to become active and participatory citizens.” It aims to promote a better digital childhood and encourages the creation of new tools and services by providing positive examples of digital content for a variety of target groups.
The guidance includes a checklist for content providers to use when developing new content and services to ensure that their products are fit for purpose, and take measures to ensure that children can go online free from risk of harm, whether this be in terms of content, contact, conduct, or commercial considerations. Parents, carers and educators can also benefit from the checklist by being better aware of the features they should look out for when choosing online experiences for younger children.
The checklist includes:
Basics - plan the benefits to the child, define objectives and decide the target age-range.
Transparency - consider the age-range for interface design purposes, child development abilities and interests for each target age, and any socio-cultural factors.
Stimulating experiences - creative, interactive, stimulating and educational products with appealing visuals, sounds and videos.
Usability - ensure easy usability across different formats.
Accessibility and inclusion - ensure accessibility (for example, assistive technologies and alternative texts/attributes).
Reliability - ensure compliance with legislation or regulations, and provide accurate and reliable content which is maintained and reviewed regularly.
Safety and privacy - safety-by-design; the privacy of the child is paramount; sensitively developed social media elements and other communication features; and responsible use of commercial elements.
We are seeking views on what type of online content can have a positive impact on children’s lives and what more we can do to incentivise and support positive online spaces for young people. We are not proposing mandatory requirements or regulatory changes. This is about what more can be done through cross-sector collaboration and voluntary efforts to highlight best practice and help children, parents and carers access more positive content.
Consultation questions (49 to 51)
49. Who would you trust to determine what is meant by ‘high quality online content’ for children aged 13-16? (Please select all that apply)
a. Government
b. Online platform trust and safety teams
c. Parents, carers or trusted adults
d. Children
e. Developmental experts
f. Educators
g. Youth workers
h. Child advocacy charities and organisations
i. None of the above
j. Don’t know/ Prefer not to answer
k. Other (please specify)
50. What further action should be prioritised to support positive online spaces for young people? (Please select all that apply)
a. Develop best practice principles for industry
b. Develop guidance for parents and carers
c. Develop guidance for children
d. Review international approaches
e. Encourage industry to voluntarily promote high quality content for children
f. None of the above
g. Don’t know/ Prefer not to answer
h. Other (please specify)
51. What should be considered when taking further action to support positive online spaces and content for young people? For example, how would this work in practice for services, taking into account existing best practice across industry, and who should feed into future guidance.
Chapter 5: Supporting families
Parents, carers, and other adults involved in children’s upbringing – including grandparents, extended family, kinship carers, and other trusted adults[footnote 56] – play key roles in shaping children’s lives. This is as true online as offline.
The government is clear that it is the responsibility of industry, not parents or carers, to make services safe and appropriate for children. This is the law and Chapter 3 sets out ways in which this might be strengthened. However, parents and carers play a key role in making decisions about what is right for their child, in setting boundaries and helping them to enjoy enriching online activities. This can be a complex and time-consuming world for parents and carers, especially those who did not grow up using social media themselves, to navigate.
This chapter asks what more can be done to support parents and carers, seeking views on guidance and resources on screentime and on how best to engage with children about the online content they are viewing.
Parental controls
Many parents and carers tell us they often feel underinformed, underconfident, or overwhelmed when trying to support children online. They frequently encounter a fast-changing digital environment, complex platform settings, and fragmented advice.
Box 5.1. Evidence box: Ofcom’s media usage and attitudes survey from 2025 found that a high proportion of parents are aware of online protection tools or controls, but many do not use them. Most parents (93%) of 3–17-year-olds are aware of at least one device-specific or network-based technical tool or control to protect their children when they go online. Actual use is notably lower, at around 76%.[footnote 57]
Industry already provides a range of useful parental control tools to allow parents and carers to oversee and place parameters around their children’s online activity, including content, time and functionality-based restrictions (examples include Google Family Link, YouTube Kids Mode), and we want to help adults feel confident to choose and use these effectively.
However, evidence shows that parents’ awareness of parental controls varies. Those who are aware do not necessarily understand all available options, and the complexity and fast-moving nature of online settings can make parents feel overwhelmed. In addition, parents who lack confidence in parental controls have reported being unsure of their children’s online activities.[footnote 58]
This consultation seeks to understand what role government can play in ensuring the controls and information for parents and carers are straightforward, easy to understand and accessible to all.
Screentime guidance for parents and carers
“Screentime” is usually defined as the time spent using a device with a digital screen such as a computer, mobile phone, television, or games console, and includes using screens for educational purposes and reading on a digital device.
We know that children’s engagement begins in early childhood. Most 3-to-5-year-olds go online (85%), some have their own mobile phone (17%), and over a third use at least one social media app or site (37%). In later childhood, from ages 10 to 17, most children own a smartphone (91%), go online (100%) and use social media (90%).[footnote 59]
We know that as children get older they are spending more time on social media and much less on other important activities, like sport and exercise. One recent poll by Public First suggests that in years 5-6, only 2% of children are doing no exercise or sport and 36% are spending no time on social media. By years 12-13, 9% are doing no exercise or sport and just 1% are spending no time on social media (compared with 47% of year 12-13 pupils spending over 6 hours a week on it).
Box 5.2. In January, we asked the Children’s Commissioner for England, Dame Rachel de Souza, and Professor Russell Viner to review the currently available advice on early years screentime and produce new government guidance for parents of under 5s. The new guidance will be published in April.
As touched on in Chapter 2, the evidence base on older children’s screentime and its impacts on mental health and wider wellbeing is mixed, but that does not mean that it does not exist. New evidence is constantly emerging and the government has commissioned a new trial co-led by Dr Lewer and Professor Orben on limiting screentime,[footnote 60] involving around 4,000 students from ten Bradford secondary schools across years 8, 9 and 10, covering ages 12-15. The trial will launch in the spring and summer of 2026 with an initial feasibility study and accompanying report, followed by a full study taking place across the autumn and winter, lasting a total of 6 weeks. Researchers are aiming to have data analyses completed by early summer 2027.
While we continue to closely monitor the emerging evidence base on the causal relationship between screentime, social media and children’s health, the Department for Education will build on the new guidance for under 5s and produce screentime guidance for parents and carers of children aged 5 to 16.[footnote 61]
We believe that screentime guidance for parents and carers for this age group would be beneficial because online risks do not diminish with age; rather, they evolve in complexity. At this stage, parental support may be less about imposing limits and more about fostering informed conversations, promoting digital resilience, and encouraging balanced habits. Clear, age-appropriate guidance will help parents and carers navigate the balance between respecting growing independence and providing appropriate safeguards, equipping families to support young people as they approach adulthood in an increasingly digital world. This includes how EdTech tools can provide valuable opportunities for children’s learning, creativity, and confidence.
This guidance will also reflect the outcomes of this consultation.
We know that children model the behaviour of the adults around them. Ofcom research has found that while a third (33%) of online 8-17s think that their screentime is too high, just over half (52%) of these children also think that their parents’ screentime is too high.[footnote 62] This is why we will be asking in the dedicated parents’ survey about parents’ phone use.
Consultation questions (52 to 54)
52. To what extent do you agree or disagree with the following statement: “Parents should have control over the online experiences of their children”
a. Strongly agree
b. Somewhat agree
c. Neither agree nor disagree
d. Somewhat disagree
e. Strongly disagree
f. Don’t know/ Prefer not to answer
Please explain the reasoning behind your answer.
53. How should this level of control change for children of different ages? For example, a 16-year-old and an 11-year-old.
54. What would help parents and carers to more effectively use parental controls? For example, more information on how to do this on purchase of a phone, help from platforms on how to set up, or greater standardisation across tools.
Pilots
While this consultation is running, we will also be piloting certain interventions across the country, giving parents, carers and children the opportunity to see for themselves how, for instance, an overnight curfew or a social media ban feels. We want our final decisions to be informed by children who’ve had experience of this sort of restriction as well as from their parents and carers.
Piloting these interventions will also give us the opportunity to better understand how practical they’ll be to implement. Are they easily circumvented? How do they affect children’s sleep, mood and physical activity? Do parents and carers understand how their children are accessing social media? Having a direct understanding of the answers to questions like these will be important in informing our decisions.
After the consultation closes: summary of next steps
The government’s response to this consultation will be published in due course following its closure on 26 May 2026. This will take all responses submitted to this consultation into account and will be based on careful consideration of the points made in responses.
We will summarise all responses and publish this summary on GOV.UK. The summary will include a list of names or organisations that responded, but not people’s personal names, addresses or other contact details.
Confidentiality and data protection
Information you provide in response to this consultation, including personal information, may be disclosed in accordance with UK legislation (the Freedom of Information Act 2000, the Data Protection Act 2018 and the Environmental Information Regulations 2004).
If you want the information that you provide to be treated as confidential please tell us, but be aware that we cannot guarantee confidentiality in all circumstances. An automatic confidentiality disclaimer generated by your IT system will not be regarded by us as a confidentiality request.
We will process your personal data in accordance with all applicable data protection laws. See our personal information charter on GOV.UK.
Quality assurance
This consultation has been carried out in accordance with the government’s consultation principles.
If you have any complaints about the way this consultation has been conducted, please email: OSA_consultation@dsit.gov.uk.
- Tablatin et al. (2023), Using Minecraft to cultivate student interest in STEM
- Ofcom (2025), Online nations report, p. 94
- Ofcom (2025), Online nations report, p. 90
- DSIT analysis of published interactive data
- Smartphone Free Childhood (2026)
- Family Online Safety Institute (2025), Children & Parents’ Perception of Social Media and Classroom Smartphone Bans in the U.S and Australia
- Children’s Data Lives report (2024)
- This is a wide definition and in practice includes most online services that are provided for commercial purposes, not just social media
- Sonia Livingstone et al (2018), An evidence review of Children’s data and privacy online, p. 17
- Ofcom (2025), Children and Parents: Media Use and Attitudes Report, p. 23
- ICO (2025), Children’s Code Strategy progress update
- UK Safer Internet Centre (2024), Sextortion Report, p. 18
- NCA (2025), Child Sexual Abuse
- Ofcom (2024), Protecting People from Illegal Harms Online, p. 71
- The U.S. Surgeon General’s Advisory (2023), Social Media and Youth Mental Health, p. 10
- CMO (2019), Commentary on screen-based activities and children and young people’s mental health and psychosocial wellbeing, p. 7
- The U.S. Surgeon General’s Advisory (2023), Social Media and Youth Mental Health, p. 10
- Top trends from our latest look at UK children’s online lives, Children’s Passive Online Measurement study (2025), each panellist completed 28 consecutive days of study, fieldwork November 2024 – March 2025. Base: 692
- Children and Parents: Media Use and Attitudes Report (2025), p. 26
- Children and Parents: Media Use and Attitudes Report (2025), p. 37
- Children and Parents: Media Use and Attitudes Report (2025), p. 37
- Feasibility Study of Methods and Data to Understand the Impact of Smartphones and Social Media on Children and Young People
- Commentary on screen-based activities and children and young people’s mental health and psychosocial wellbeing
- Children’s Media Lives (2025)
- Commentary on screen-based activities and children and young people’s mental health and psychosocial wellbeing, p. 7
- Ofcom (2025), Children’s Risk Assessment Guidance and Children’s Risk Profiles, p. 48
- Social media, misinformation and harmful algorithms (2025)
- Disrupting Social Media Habits: a Field Experiment with Young Danish Consumers (2025)
- Safer Internet Day 2026 Report: Exploring the safe and responsible use of AI
- Maples et al. (2024), Loneliness and suicide mitigation for students using GPT3-enabled chatbots
- Online nations report 2025, p. 44
- Ofcom (2025), Online nations report 2025, p. 5
- Young people’s use of VPNs (2025)
- Guidance on Communicating policies for prohibiting the use of mobile phones in schools to parents (2026)
- G Hirata (2022), Play to Learn: The Impact of Technology on Students’ Math Performance, Journal of Human Capital: Vol 16, No 3
- Mansfield et al. (2024), The Case for a Smartphone Ban in Schools
- Communicating policies for prohibiting the use of mobile phones in schools to parents (2026)
- NASUWT (2024), The Big Question Survey
- DSIT (2023), Year 3 Online Media Literacy Action Plan (2023/24)
- Parentzone (2026), Be Internet Legends: 10 million children trained
- Bergamini (2025), DISMISSing Disinformation: Lessons from a Groundbreaking Digital Media Literacy Campaign
- Ofcom (2025), Media Nations Report, p. 22
- A trusted adult is defined by Young Minds as an adult chosen by the young person as a safe figure who listens without judgment, agenda or expectation, with the sole purpose of supporting and encouraging positivity within the young person’s life. They are normally not related to the child. Defining a Trusted Adult (2024)
- DSIT analysis of published interactive data
- DSIT (2025), Parental Media Literacy, section 6: Parental Controls
- DSIT analysis of published Ofcom interactive data (2025)
- University of Cambridge (2025), Thousands of UK schoolchildren to take part in major study of social media use and teen mental health
- Ofcom (2025), Children and Parents: Media Use and Attitudes Report, p. 37