Press release

New protections for children and free speech added to internet laws

Online Safety Bill to include stronger protections for children, with platforms forced to be clearer with parents about dangers

This was published under the 2022 to 2024 Sunak Conservative government

  • Legal but harmful provisions to be replaced with new duties to boost free speech and increase accountability of tech firms
  • Tougher measures to be added to protect women and girls from controlling or coercive behaviour

New internet safety laws will go further than before to shield children and protect free speech online, thanks to improvements proposed by the government.

Any incentives for social media firms to over-remove people’s legal online content will be taken out of the Online Safety Bill. Firms will still need to protect children and remove content that is illegal or prohibited in their terms of service; however, the Bill will no longer define specific types of legal content that companies must address.

This removes any influence future governments could have on what private companies do about legal speech on their sites, or any risk that companies are motivated to take down legitimate posts to avoid sanctions.

New measures will also be added to make social media platforms more transparent and accountable to their users, as a result of amendments the government will propose.

The changes will offer users a ‘triple shield’ of protection when online: social media firms will be legally required to remove illegal content, take down material in breach of their own terms of service, and provide adults with greater choice over the content they see and engage with.

Parents and the wider public will benefit from new changes to force tech firms to publish more information about the risks their platforms pose to children so people can see what dangers sites really hold.

Firms will be made to show how they enforce their user age limits to stop children circumventing authentication methods, and they will have to publish details of when the regulator Ofcom has taken action against them.

Digital Secretary Michelle Donelan said:

Unregulated social media has damaged our children for too long and it must end.

I will bring a strengthened Online Safety Bill back to Parliament which will allow parents to see and act on the dangers sites pose to young people. It is also freed from any threat that tech firms or future governments could use the laws as a licence to censor legitimate views.

Young people will be safeguarded, criminality stamped out and adults given control over what they see and engage with online. We now have a binary choice: to get these measures into law and improve things or squabble in the status quo and leave more young lives at risk.

Today’s announcement refocuses the Online Safety Bill on its original aims: the pressing need to protect children and tackle criminal activity online while preserving free speech, ensuring tech firms are accountable to their users, and empowering adults to make more informed choices about the platforms they use.

It follows confirmation that the Bill will include new measures to make significant changes to the UK’s criminal law to increase protections for vulnerable people online by criminalising the encouragement of self-harm and the sharing of people’s intimate images without their consent.

To make sure the Bill’s protections for adults online strike the right balance with its protections for free speech, duties relating to “legal but harmful” content accessed by adults will be removed from the legislation and replaced with the consumer-friendly ‘triple shield’.

The Bill will instead give adults greater control over online posts they may not wish to see on platforms. If users are likely to encounter certain types of content - such as the glorification of eating disorders, racism, antisemitism or misogyny not meeting the criminal threshold - internet companies will have to offer adults tools to help them avoid it. These could include human moderation, blocking content flagged by other users, or sensitivity and warning screens.

The legal but harmful measures will be replaced with new duties which strengthen the Bill’s free speech requirements on major online platforms and make them more accountable for their policies. These duties will explicitly prohibit platforms from removing or restricting user-generated content, or suspending or banning users, where this does not breach their terms of service or the law. In addition, firms will need to have clear, easy-to-understand and consistently enforced terms of service.

It comes as new polling from Ipsos reveals that 83 per cent of people think social media companies should have a duty to protect children who are using their platforms (only 4 per cent disagree). Nearly eight in ten people (78 per cent) want social media companies held accountable for keeping underage children off their platforms (only 7 per cent disagree).

There is overwhelming public backing for action. More than eight in ten people (81 per cent) think the government should make sure social media companies protect children when they are online, and 77 per cent think social media companies should be punished if they don’t protect children.

Sanjay Bhandari, Chair of Kick it Out, said:

Users of social media have benefitted from a right that does not exist in the real world. Not only do they have freedom of speech but they have the freedom to force you to hear it. People who play, watch and work in football are often the victims of such vicious trolling.

We welcome the principle of extending the user empowerment provisions in the Bill to close this loophole. Social media companies will need to make available technology that enables each of us to have the online experience we desire. We shall review the amendments to the Bill in detail but encourage parliamentarians to move quickly.

The Bill is due to return to Parliament next week. The first amendments have been tabled to the Bill in the Commons for Report Stage on 5 December. Further amendments will be made at later stages of the Bill’s passage.

As well as making larger tech companies publish a summary of their risk assessments concerning the dangers their platforms pose to children, other moves to boost transparency and accountability include giving Ofcom a new power to require platforms to publish details of enforcement action it takes against them.

Another set of amendments will boost protections for women and girls online by adding the criminal offence of controlling or coercive behaviour to the list of priority offences in the Bill. This means platforms will have to take proactive steps, such as putting in place measures to allow users to manage who can interact with them or their content, instead of only responding when this illegal content is flagged to them through complaints.

In addition, the Victims’ Commissioner, Domestic Abuse Commissioner and Children’s Commissioner will be added as statutory consultees in the Bill, meaning Ofcom must consult each of them when drafting the codes tech firms must follow to comply with the Bill.

Dame Rachel De Souza, Children’s Commissioner for England, said:

I am pleased that Government is bringing back the Online Safety Bill to Parliament. This landmark legislation is a once-in-a-lifetime opportunity to protect all children online, particularly the most vulnerable.

That’s why I am glad that the Children’s Commissioner is now recognised on the face of the Online Safety Bill as a statutory consultee to Ofcom’s codes of practice. This will enable me, in my unique position as representative of children’s rights and views, to oversee the codes which tech firms must follow to comply with the Bill – ensuring that children’s views and experiences are fully understood.

We cannot allow any more children to suffer. The loss of children to suicide, after exposure to hideous self-harm and suicide content, is a tragic reminder of the powerful consequences of online material. I am determined to see this Bill pass through Parliament. I will work to ensure that children’s voices and needs underpin each stage of the legislative process. I look forward to us all getting behind such a crucial moment to protect children online.

Campaigner Lucy Alexander said:

We live in an online world where bullying is 24/7. Young people are bombarded by harmful content online and there is no room for escape. It is on their phones, in their bedrooms and with them on their way to school.

My son Felix was driven to suicide at the age of 17 due to the barrage of bullying he experienced online. This is the reason why I am on a mission to make sure no other child feels as much pain as he did. One death is one too many.

The Online Safety Bill is a step in the right direction: it will hold social media firms accountable for protecting children online. The new changes to the Bill will also see social media firms forced to publish risk assessments so that parents can see for themselves the dangers and risks that face children on these sites. It is time to prioritise the safety of our children and young people online.

ENDS

FURTHER BACKGROUND INFORMATION

‘Legal but harmful’ provisions

The Online Safety Bill as drafted caused significant concern amongst free speech groups and members of the public, who argued that changes were necessary to ensure that legal free speech was not curtailed by creating a quasi-legal category of speech online. The Digital Secretary set out to resolve these issues while protecting elements of the Bill that pertained to children or vulnerable groups.

The concerns focused on the potential for the Bill’s adult safety duties to incentivise companies to over-remove legal but potentially harmful content. Ministers will therefore remove these duties from the Bill.

The government will bring in new overarching transparency, accountability and free speech duties for Category 1 services that result in stronger action to protect children, as well as greater choice for adult consumers over the types of content they see.

Companies will not be able to remove or restrict legal content, or suspend or ban a user, unless the circumstances for doing this are clearly set out in their terms of service or are against the law. If they do remove any content or ban a user, they’ll have to offer users an effective right of appeal. This will protect against companies arbitrarily removing content or banning users, and provide due process if they do.

As private companies, tech platforms will remain free to set any terms of service they wish. Companies already have robust and, in many cases, extremely detailed and comprehensive policies in place prohibiting abuse and other harmful content, but these are not always properly enforced, and many groups have raised concerns about the impact this has on users. The changes mean companies must keep their promises to users and consistently enforce their user safety policies once and for all.

For example, if a platform says in its terms of service that it does not allow a particular type of legal content, such as racist and homophobic abuse or harmful health disinformation, then it must act to tackle it. Ofcom will be empowered to take unprecedented action, including levying fines totalling up to 10% of annual turnover.

Adults will still be protected from a wide range of criminal activity and illegal content online. This includes the most prevalent and serious online hate crime as well as fraud, assisting suicide, threats to kill, harassment and stalking, the sale of illegal drugs or weapons and revenge pornography.

The government has also confirmed it will use the Online Safety Bill to create a new criminal offence of assisting or encouraging self-harm online. This means that one of the main types of content campaigners are concerned about, following cases such as the death of Molly Russell, will now be covered and effectively tackled by platforms under their illegal content duties in the Bill.

Adults will also benefit from a range of other improvements under the Bill. It will force the largest and most popular social media sites to give users tools to tailor their online experiences - including who can communicate with them and what kind of content they see.

Adults who do not want to see certain types of content - including legal content relating to suicide, self-harm or eating disorders, or content that is abusive or incites hatred on the basis of race, ethnicity, religion, disability, sex, gender reassignment or sexual orientation - will be given access to tools which will reduce the likelihood of this appearing on their social media feeds.

This gives users greater personal choice, without affecting freedom of expression. As the sophistication of technology improves, so too will the tools that platforms must offer their users.

Adults will also benefit from the ability to block anonymous trolls - via tools to control whether they can be contacted by unverified social media users.

There will also be better reporting mechanisms for when adults see content that is illegal, harmful to children or should have been removed in line with terms of service, with platforms expected to process and resolve complaints more quickly.

Child safety duties

The strongest protections in the Bill are for children. Companies in scope will need to protect young people from content and activity posing a material risk of significant harm. This includes taking preventative measures to tackle offences such as child sexual exploitation and abuse (CSEA).

Ministers have already made changes to ensure all pornography providers prevent children’s access; to make clear platforms may have to use age assurance measures to comply with their duties; and to strengthen measures to tackle CSEA.

Today the Digital Secretary is confirming further protections for children.

An amendment will force social media platforms to publish their risk assessments on the dangers their sites pose to children. Previously, the Bill required platforms to carry out these assessments but not to proactively publish them.

Ofcom will be given the power to make companies publish details of any enforcement notices they receive from the regulator for breaching their safety duties under the Bill.

A further amendment will make platforms’ responsibilities to provide age appropriate protections for children clearer. Where platforms specify a minimum age for users, they will now have to clearly set out and explain in their terms of service the measures they use to enforce this, such as age verification technology.

Harmful communications offence

A number of parliamentarians and stakeholders raised concerns about the introduction of a harmful communications offence, which is currently in the Bill.

If introduced, the offence could capture communications where someone sends or posts a message with the intention of causing harm that amounts to at least ‘serious distress’, without a reasonable excuse. Stakeholders and members of the public have expressed concerns that this offence could have unintended consequences for freedom of expression, as they believe it would criminalise legal and legitimate speech on the basis that it has caused someone offence.

The Digital Secretary has decided to remove the harmful communications offence from the Bill to ensure the Bill’s measures are proportionate and do not unduly criminalise content that some may find offensive. Platforms can, and in most cases already do, have terms of service covering such content, so it would be captured as part of the ‘triple shield’.

To retain protections for victims of abuse, the government will no longer repeal elements of the Malicious Communications Act and Section 127 of the Communications Act offences, which means the criminal law will continue to protect people from harmful communications, including racist, sexist and misogynistic abuse.

The other new offences covering false and threatening communications will remain in the Bill. The false communications offence will protect individuals from communications where the sender intended to cause harm by sending something knowingly false, while the threatening communications offence will capture communications which convey a threat of serious harm, such as grievous bodily harm or rape.

NOTES TO EDITORS

  • Ipsos interviewed a representative sample of 1,032 adults aged 16-75 across the United Kingdom. Interviews were conducted online from 4th to 8th November 2022. Quotas were set and data weighted using demographic variables to match the profile of the population.

Published 28 November 2022