UK Safety Tech Sector: 2025 analysis
Published 10 April 2025
Ministerial Foreword
The Online Safety Act, which received Royal Assent in 2023, has now moved into its enforcement phase with Ofcom implementing the regulatory framework to ensure safer online spaces. This marks a significant milestone. We must also recognise the important role of technology and, in particular, the role the UK’s world-leading safety tech sector plays in facilitating safer online experiences, and protecting users from harmful online content, contact and conduct.
Our analysis has been tracking the UK’s safety tech sector annually since the publication of the first Safer Technology, Safer Users report in 2020. This set out the UK’s role in developing solutions that are being used worldwide to protect users and to detect and remove illegal content and reduce online harms.
Since that time, the sector has experienced ongoing growth driven by UK firms committed to making online environments safer. Our latest research reveals that there are now 145 safety tech businesses operating in the UK, with revenues having reached £704 million in 2024.
The UK is home to a world leading safety tech sector that offers expertise in age assurance, brand and platform safety, digital forensics, content moderation, filtering, and combating issues such as fraud and disinformation. This range of technologies solidifies the UK’s position at the forefront of tackling online harms and equips our digital economy and tech platforms with the right tools to ensure user safety.
As technology evolves at an unprecedented pace, it is crucial that we remain agile in protecting UK citizens. The UK safety tech sector has risen to this challenge, establishing itself as one of the fastest growing and socially impactful tech sectors. The sector is also adapting to emerging challenges, particularly in developing trustworthy AI systems that prioritise user safety and tackle new forms of harms.
The 2025 Safety Tech analysis continues to demonstrate confidence in the UK’s safety tech sector and its potential for continued growth and innovation, which the government is committed to supporting by taking forward a broad range of initiatives to foster innovation across the sector. We will also continue to work closely with international partners to promote coordinated action. The safety tech sector is key to ensuring that the UK remains both the safest place to be online and a leading location for developing innovative solutions to keep users safe. By working together, we will build a safer digital world for all.
Minister Narayan
Parliamentary Under-Secretary of State (Minister for AI and Online Safety)
Key Findings
This section sets out the key findings from this year’s Safety Tech Sectoral Analysis.
Sectoral Growth
- The UK’s safety tech sector has continued to grow, with 145 (+2) dedicated safety tech organisations identified, offering a wide range of products and services across the safety tech taxonomy.
- The sector has demonstrated continued revenue growth, with estimated total revenue reaching £704 million in 2024, marking a 13% increase (+£81 million, nominal) from the previous year’s figure of £623 million.
- While growth has moderated from previous years’ rates of approximately 30%, the sector has more than trebled in size since 2019 (from £226 million to £704 million), representing a compound annual growth rate of approximately 26% over this 5-year period.
- The sector currently employs approximately 3,900 people in the UK, unchanged from the previous year (3,900 FTEs).
- The sector demonstrates strong potential for further growth and innovation, with the majority (59%) of UK safety tech providers estimated to currently export products and services internationally.
A Maturing Ecosystem
- The sector shows signs of maturation, with increasing consolidation through acquisitions and some exits, balanced by new market entrants.
- The composition of the sector is evolving, with 27 medium-sized firms in 2025 (nearly double the 15 in 2023) and an increase in large enterprises from 1 to 3 over the same period.
- Many providers are expanding their offerings across multiple taxonomy categories, with most now operating in 2 or 3 different areas, reflecting a broadening of services, growing partnerships, and a response to customer demand for integrated solutions.
A challenging investment and funding landscape
- The investment climate remains challenging. £24 million was raised across 15 deals in 2024, a decrease on the previous year of 43% in deal value (£42 million in 2023) and 17% in deal volume (18 deals in 2023).
- Despite lower investment values, the smaller decline in deal volume suggests continued investor interest, albeit with more cautious deal sizes.
- Notable early-stage investments in 2024 in companies such as OpenOrigins, Refute, Nisien.ai, Egregious, and Arwen.ai indicate ongoing support for innovative safety tech solutions.
Emerging trends in business models and routes to market
- The implementation and enforcement of the Online Safety Act continue to drive demand for safety tech solutions, with Ofcom’s enforcement programme now assessing industry compliance.
- The rise of generative AI is creating both new challenges and opportunities, with increased demand for solutions to combat AI-generated harmful content such as deepfakes and misinformation.
- Data access challenges persist, with changes to platform Application Programming Interfaces (APIs) and increased restrictions affecting safety tech providers’ ability to develop and train effective solutions.
- The emergence of open-source tools and increased in-house development by platforms may impact demand for commercial third-party solutions.
- Public sector procurement of safety tech solutions shows growth, with £48 million spent across 122 contracts in 2024, indicating increased government recognition of the sector’s importance. This reflects public spend on safety tech solutions approximately doubling between 2023 and 2024 (from £24 million to £48 million).
- Collaboration and partnerships are driving growth in the sector, with over 800 partnerships mapped between safety tech providers and their clients, highlighting the increasing adoption of safety tech solutions across various industries. Safety tech is increasingly integrated across diverse sectors, including banking, retail, government, education, and media, underlining its importance throughout the digital economy.
1. Introduction
The Department for Science, Innovation and Technology (DSIT) has tracked the growth of the UK’s safety tech sector since May 2020, through the annual ‘Safety Tech Sectoral Analysis’ research project.
This research provides an overview of the UK’s capabilities in its online safety technology (‘safety tech’) sector, and has identified significant growth in recent years, driven by a range of highly innovative companies focused on tackling online harms through technical solutions.
This report marks the sixth annual review of the sector’s performance and finds further evidence of a high-growth and high-potential sector.
Definition and Scope:
To identify relevant companies within the sector, the following definition is used:
“Safety Tech providers develop technologies or solutions to facilitate safer online experiences, and protect users from harmful content, contact or conduct.”
This focuses on firms that:
- Often work closely with law enforcement to help trace, locate and facilitate the removal of illegal content online
- Work with social media, gaming, and content providers to identify criminal, harmful or toxic behaviour on their platforms
- Monitor, detect and share online harm threats with industry and law enforcement in real time
- Develop trusted online platforms that are age-appropriate and provide parental reassurance when children are online
- Use technology to identify and prevent real-world harmful incidents, mitigate their impact, or respond to events
- Detect, disrupt and protect users from fraudulent advertisements
- Verify and assure the age of users
- Actively identify and respond to instances of online harm, bullying, harassment and abuse
- Help organisations to filter, block and flag harmful or illegal content at a network or device level
- Detect and disrupt false, misleading or harmful narratives (mis- and disinformation)
- Advise and support a community of moderators to identify and remove harmful content
Perspective Economics has been commissioned by DSIT to conduct an updated safety tech sectoral analysis exercise.
This report explores the number of businesses offering safety tech products and services and provides an updated market estimate of the size of the sector in the UK, measured through revenue, employment, and external investment.
It remains consistent with the updated taxonomy set out within the 2024 Safety Tech Sectoral Analysis, to ensure there is a consistent longitudinal evidence base to help track the long-term growth of the UK safety tech sector.
Team and Acknowledgements
DSIT and Perspective Economics would like to acknowledge the consultees, nationally and internationally, who contributed to the development of this report through participation in consultations and survey activity with the research team.
The safety tech sector has consistently demonstrated significant growth each year in the UK. Further, it continues to support government, civil society and industry in tackling and mitigating the impact of harmful and illegal activity online and developing new solutions to novel and emergent harms.
Scope
This research seeks to identify providers of safety tech products or services with a clear presence in the UK market (UK registered) that are active and undertake commercial activity. For research purposes, the following criteria are applied; however, we recognise the broader contribution of many organisations involved within the wider online safety ecosystem.
For the purposes of this research, safety tech providers are defined as organisations which:
- have a clear presence in the UK market (registered and active status)
- demonstrate an active provision of commercial activity related to safety tech (e.g., through the presence of a website or social media)
- provide safety tech products or services to the market (i.e., sell or enable the selling of solutions to customers)
- have identifiable revenue or employment within the UK
Section 2 of this report sets out the types of organisations within scope and the products and services typically offered by market providers.
Methodology
The methodology for this research is consistent with that set out in Appendix B of the ‘Safer Technology, Safer Users’ report (2024).
The research team uses the existing safety tech sector taxonomy and has applied additional markers to identify the range of products and services provided by safety tech companies. The research has identified 145 ‘dedicated’ companies using web data, financial data, procurement data, and through direct consultation with industry. These companies have been reviewed by the research team and confirmed as relevant to the safety tech sector.
All firms have been reviewed and enriched using:
- Company Accounts: The research team has identified relevant financial metrics for each provider through the most recent UK financial accounts.
- Web Data: The research team has reviewed company websites to identify key product and service offerings, locations and markets served, customers (where mentioned), and relevant staff and team sizes (relating to safety technology).
- Survey and Consultation Activity: All providers were invited by DSIT, PUBLIC and Perspective Economics to take part in a short online survey in February 2025. This explored company performance and growth expectations. It also asked providers about data access, and the use of AI in their solutions.
- Wider Datasets: Perspective Economics has established data partnerships with proprietary data providers to support the enrichment of data used for this study. This includes Beauhurst (a platform that tracks external fundraising and high-growth companies across the UK) and Tussell (a procurement database, which tracks contracts awarded by the public sector).
2. Defining the Safety Tech Sector
Background
Safety tech providers develop technologies or solutions to facilitate safer online experiences, and to protect users from harmful content, contact or conduct.
This definition was established in the baseline Safety Tech Sectoral Analysis (2020) report and expanded through a market taxonomy that provided scope to identify a range of products and services used to help make users safer online.
This ultimately distinguishes the safety tech sector from adjacent sectors. For example, there may arguably be some overlap between safety tech and fields such as cyber security, FinTech, and RegTech; however, we seek to identify firms that are distinct in terms of focus on online safety, compared to areas such as data security.
In 2023, we updated the ‘safety tech taxonomy’ to reflect the evolving breadth and depth of the sector, with minor changes to allow for time-series analysis. The taxonomy has been developed to illustrate the scope of the safety tech sector and has primarily been used to support DSIT mapping and tracking of safety tech firms. The research also recognises diverse use cases and technologies within the safety tech sector and acknowledges that many firms within the sector can provide products or services across the different levels of the taxonomy.
It is also recognised that several of the larger tech companies are actively involved in the production or development of safety tech solutions (e.g., Microsoft’s PhotoDNA, and AWS’ Rekognition Image Moderation API). However, as with previous iterations, these providers are not measured within this sectoral analysis, which is focused on dedicated third-party safety tech providers. The UK also has a particularly active community of charitable and representative organisations involved in tackling issues relating to online safety, that are also excluded from this sectoral analysis.
The following sections set out the market profile, number of providers, location, commercial revenue and employment activity, and investment in the UK safety tech sector.
3. Market Profile
Number of Safety Tech Providers
Using the safety tech definition and taxonomy, we have identified 145 active organisations dedicated to providing relevant safety tech products and services which are registered within the UK. However, we recognise that many firms in adjacent sectors such as AI and cyber security are increasingly adopting areas aligned to safety technology, e.g. threat intelligence, responsible advertising and brand safety, and fraud response.
Within the previous study (2024), we identified 143 active organisations. This means the net increase (+2) in the number of active safety tech companies in the UK has been modest over the last twelve months. However, a review of the previous dataset suggests that:
- 130 safety tech organisations (91%) remain active and in scope
- 13 safety tech organisations have been acquired, ceased trading, or no longer provide safety tech solutions to the UK market
- an additional 15 organisations have been identified as relevant, including both firms that have increased their focus on online safety provision and new firms to market
While the overall number of companies has remained relatively stable, the sector appears to be experiencing both consolidation (through acquisitions and dissolutions) and renewal (through new entrants and pivots) across different use cases. There are several market drivers underpinning this, such as mergers and acquisitions and demand for content moderation, in addition to tackling new and emerging online harms. This is explored further in Section 7.
Products and Services
For the 145 organisations identified for commercial analysis, we have identified what each company offers through web data and direct consultation. The nature of this sector means that some organisations provide diverse products and services - for example, content moderation, brand safety, and advisory services.
We have identified the best fit of each of the commercial organisations against the taxonomy categories to illustrate the overall sector composition. We have also applied multiple markers to each firm to enable analysis of firm products and services across multiple categories. This is set out within the final column in the table below.
Table 1: Taxonomy Classifications of the UK Safety Tech Sector
| Taxonomy Classification | Short Definition | Number of Firms (Best Fit) | Number of Firms (with some offering) |
|---|---|---|---|
| System-Wide Governance | Automated identification and removal of illegal content: use of technology to identify and enable the removal of illegal child sexual exploitation and abuse (CSEA) material, and terrorist content including imagery and video. | 16 (11%) | 21 (14%) |
| Platform Level | This includes organisations that support content moderation through identifying and flagging potentially illegal content or conduct, such as grooming, hate crime, harassment or suicide ideation or harmful content or conduct which breaches site T&Cs, such as cyberbullying, extremism or advocacy of self-harm. They also support identification and response to fraudulent activity or behaviour. Further, they may assist in reducing moderators’ own exposure to harmful content. | 37 (26%) | 79 (54%) |
| Age Orientated Online Safety | Enabling age-appropriate online experiences through use of age assurance and age verification services to limit children’s exposure to harmful content, or development of child-safe content. | 18 (12%) | 60 (41%) |
| User Protection | User, parental or device-based products that can be installed on devices to help protect the user from harm. | 22 (15%) | 42 (30%) |
| Network Filtering | Products or services that actively filter content, through blacklisting or blocking content perceived to be harmful. This can include solutions provided to schools, businesses, or homes to filter content for users. | 14 (10%) | 20 (14%) |
| Information Environment | Flagging of content with false, misleading and/or harmful narratives, through the provision of fact-checking and disruption of disinformation (e.g. with trusted sources). | 21 (14%) | 36 (25%) |
| Online Safety Professional Services | Advisory support with implementing technical solutions, enabling the development of safer online communities and embedding safety-by-design. | 17 (12%) | 47 (32%) |
Since the previous study, we note growth of provision in several taxonomy areas, with the highest increases in firms offering some products or services aligned to User Protection (+9pp) or the Information Environment (+6pp). This suggests increased focus on both device-level safety solutions (e.g. on-device filtering, browser extensions etc), and technologies addressing disinformation and supporting content provenance.
We also observe a small reduction in the number of firms offering some form of age orientated online safety (from 66 firms to 60, driven by the dissolution of a small number of micro entities). Whilst small, this is the first reduction noted across taxonomy areas in any study to date, and highlights some evolution within the commercial market.
A key finding from this analysis is that many providers are expanding their offerings across multiple taxonomy categories, with the majority (77%) now offering solutions in at least 2 different taxonomy areas. This represents a maturation of the market, where providers are responding to customer demand for integrated solutions by developing comprehensive offerings.
This trend toward diversification suggests safety tech providers are broadening their capabilities through both internal development and strategic partnerships, enabling them to deliver more comprehensive ‘managed solutions’ that address multiple aspects of online safety. Through expansion or formal partnerships, firms can increasingly position themselves as full-service providers, responding to preferences for integrated solutions that can tackle the multifaceted nature of online harms.

The Safety Tech as a Service (STaaS) model continues to evolve, with providers offering orchestration and integration of multiple safety technologies. Rather than simply outsourcing individual functions like content moderation, platforms can now access comprehensive safety ecosystems that adapt to their specific risk profiles, user demographics, and regulatory environments. This may enable more dynamic scaling of safety resources, allow smaller platforms to implement enterprise-grade safety measures that would otherwise be prohibitively complex to develop internally, and enable these measures to be applied across multiple jurisdictions in a compliant way.
The Online Harms and Threat Landscape:
The original Online Safety Technology Sector Taxonomy (2020) identified several areas of online harm that safety tech providers can help address. Several of these focused upon tackling illegal harms, such as abuse and exploitation, and ensuring appropriate age-related safeguards for internet use. Whilst the regulatory and legal landscape has sought to address several of these harms, there are new and novel areas that providers are considering in response to an increasing harms surface. Whilst some harm areas are long-established, the proliferation of new technologies among ‘bad actors’ means that areas such as fraud, scams, and online hate may persist across online domains. Review of safety tech provider products and services highlights some new and sustained areas of focus among providers, including:
- Brand Safety and Reputation: Areas such as marketing, social media, and brand awareness are highly valuable markets for a wide range of sectors. It may take only one incident online, such as harmful content on a social media feed or a concerted campaign against a certain brand or company, to cause significant reputational or financial damage. As such, protecting brands from appearing alongside inappropriate or harmful content, and supporting companies to understand their digital footprint, is a significant area of growth for safety tech firms.
- Fraud detection: Aligned to this is the need to help both platforms and individuals identify and prevent various forms of online fraud. This can include fraudulent behaviour, counterfeit goods, IP infringement, or customers being exposed to bad actors (e.g. online marketplaces where transactions are not as expected). This means there is considerable market demand for behavioural identification of, and response to, bad actors (as-a-service) to help detect and mitigate the impact of malicious users and their activities. This can also require the use of multiple safety technology domains such as identity and age verification, multi-modal content moderation, and content provenance.
- AI for fact-checking and content provenance: The increasing spread of misinformation, deepfakes, and manipulated content online has created growing market demand for AI-powered solutions that can verify the accuracy and authenticity of content. Safety tech businesses that develop tools leveraging machine learning, computer vision, and natural language processing to detect disinformation, manipulated media, and synthetic content may be well-positioned for market growth. Further, there is evidence of some safety tech firms engaging with areas such as LLM safety and scoring, reflecting the need for independent review of model safety, as well as the safety of associated inputs and outputs.
- Widening safety provision: Within this year’s study, we also find some evidence of safety technologies being used for wider groups. For example, Altia has recently worked with the RSPCA to improve its digital evidence technology to help secure court-ready evidence to rescue animals from cruelty; Phylax announced a partnership with the Alzheimer’s Society to help protect people from making repeated or misguided purchases when shopping online; and in early 2025, Egregious secured $1 million in seed funding to help counter AI-powered deception, disinformation, and polarisation.
4. Location
This section sets out the location of safety tech providers, suggesting that nearly half (47%) have a registered or trading presence outside of London and the South East.
As set out in previous studies, there are also identifiable safety tech clusters in areas such as Leeds, Edinburgh and Cambridge.
Figure 2: Location Analysis of the UK Safety Tech Sector
Source: Perspective Economics, n = 145 dedicated providers
5. Estimated Revenue and Employment
This section outlines how the safety tech sector has grown in recent years, including estimates for annual revenue and employment.
- We estimate that total UK safety tech sector revenues for the last financial year (modal: FY ending 2023/2024) have reached £704 million. This marks an increase of £81 million (+13%) on last year’s estimated revenue figure of £623 million.
- We note that many organisations are at ‘pre-revenue’ or micro stage and therefore do not provide full annual accounts. We therefore estimate total sectoral revenue through a mix of known company accounts, direct consultations, and company-level estimation as appropriate.
- Further, the sector’s revenue has more than trebled since 2019 (from £226 million to £704 million), representing a compound annual growth rate of approximately 26% over this 5-year period. This growth, even amid global economic uncertainties, underscores the ongoing demand for safety tech solutions and their integration into the wider digital economy.
- The revenue growth also suggests a maturing market that is transitioning from rapid early growth to more sustainable expansion. This suggests a move beyond initial adoption phases and a greater focus on consolidation, innovation and product enhancement, and wider operational efficiency.
- We also estimate that across the UK safety tech sector there are currently c. 3,900 Full-Time Equivalent (FTE) employees. This reflects no notable change since last year’s report (3,900 FTEs), and is consistent with global employment estimates for safety tech, whereby several platforms have frozen or reduced headcount in the last twelve to eighteen months.
However, limited FTE employment growth despite revenue increases may suggest some productivity gains within the sector. Companies appear to be generating more revenue per employee, driven by factors such as increased automation and AI implementation in safety tech solutions, economies of scale as companies expand their customer bases, maturing business models, and a greater strategic focus on higher-value contracts. This sustained growth, even during a period of wider change, continues to position safety tech as a strategic sector within the UK’s wider digital economy.
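As a simple arithmetic check on the headline figures above (a sketch only, using the report's own published estimates), the compound annual growth rate, year-on-year growth, and implied revenue per employee can be recomputed:

```python
# Sanity-check the headline growth figures (values in £ millions, from this report).
revenue_2019 = 226
revenue_2023 = 623
revenue_2024 = 704
employees_2024 = 3_900  # FTE estimate

# Compound annual growth rate over the 5 years from 2019 to 2024.
cagr = (revenue_2024 / revenue_2019) ** (1 / 5) - 1
print(f"CAGR 2019-2024: {cagr:.1%}")        # ~25.5%, i.e. approximately 26%

# Year-on-year nominal growth for 2024.
yoy = revenue_2024 / revenue_2023 - 1
print(f"Growth 2023-2024: {yoy:.1%}")       # ~13%

# Implied revenue per full-time equivalent employee.
per_fte = revenue_2024 * 1_000_000 / employees_2024
print(f"Revenue per FTE: £{per_fte:,.0f}")  # roughly £180,000
```

These derived values are consistent with the report's stated figures of approximately 26% CAGR and 13% annual growth; the revenue-per-FTE figure is an inference from the published totals, not a statistic reported in the source.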
The following subsections set out estimated company size, revenue, and employment.
Estimated Company Size
Of the 145 organisations identified for commercial analysis, the majority are micro (10 or fewer employees) or small (11 to 50 employees) firms, representing 54% and 26% of firms respectively.
However, when analysing size composition, we find an ongoing trend that more firms are demonstrating evidence of maturity year on year.
The number of medium-sized firms has nearly doubled over the past 2 years, growing from 15 in 2023 to 27 in 2025, while large enterprises have increased from just 1 to 3 over the same period.
However, we note that the overall percentage of non-micro providers has decreased from 53% to 46% this year, which may suggest a challenging landscape among smaller providers in the UK. This likely reflects the challenging commercial landscape for smaller providers who must navigate increasing competition, regulatory and data challenges, and potential funding constraints.
Table 2: Estimated Company Size of UK Safety Tech Firms (UK presence)
| Category | Definition | Number of Firms |
|---|---|---|
| Large | Employees > 250, and turnover > £36 million or balance sheet total > £18 million | 3 (2%) |
| Medium | Employees > 50 and ≤ 250, and turnover > £10.2 million and ≤ £36 million or balance sheet total ≤ £18 million | 27 (19%) |
| Small | Employees > 10 and ≤ 50, and turnover > £632,000 and ≤ £10.2 million or balance sheet total ≤ £5.1 million | 37 (26%) |
| Micro | Employees ≤ 10, and turnover ≤ £632,000 or balance sheet total ≤ £316,000 | 78 (54%) |
| Total | | 145 |
Source: Perspective Economics analysis
Estimated Revenue
We estimate that in total, the safety tech sector generated £704 million in annual revenues in 2024. Figure 3 highlights how the sector has grown at a compound rate of approximately 26% per annum since the baseline study, meaning the sector has more than trebled in size since 2019.
However, the rate of growth has been smaller in 2024 than in previous years (+£81 million, +13%), suggesting the overall growth rate may taper as the sector reaches a level of operational maturity.
Figure 3: Safety Tech Sectoral Revenue (2019 – 2024, and projected to 2027)
| Year | Revenue |
|---|---|
| 2019 | £226 million |
| 2020 | £314 million |
| 2021 | £381 million |
| 2022 | £456 million |
| 2023 | £623 million |
| 2024 | £704 million |
| 2025* | £796 million |
| 2026* | £899 million |
| 2027* | £1,016 million |
Source: Perspective Economics sector revenue estimates (2019-24) and forecast
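The projected figures in Figure 3 appear to assume a broadly constant nominal growth rate. As a quick check (a sketch using the table's own values; the ~13% rate is inferred from the series, not stated in the source):

```python
# Year-on-year growth implied by the Figure 3 series (values in £ millions).
# Years marked * in the table (2025-2027) are forecasts.
revenue = {2023: 623, 2024: 704, 2025: 796, 2026: 899, 2027: 1016}

years = sorted(revenue)
for prev, curr in zip(years, years[1:]):
    growth = revenue[curr] / revenue[prev] - 1
    print(f"{prev} -> {curr}: {growth:.1%}")
# Each step, including the projected years, implies growth of roughly 13% per annum,
# i.e. the forecast extends 2024's growth rate rather than the ~26% historical CAGR.
```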
Estimated Employment
We estimate that there are approximately 3,900 people working within dedicated safety tech firms based in the UK. This has been estimated through company accounts, web data and consultation data.
This reflects no significant net change in headcount since last year’s report.
The global ‘International State of Safety Tech’ research (2024) also suggests there are currently 15,700 employees working directly (excluding freelancers) for safety tech firms globally, and that this figure reflects a reduction of 4% on 2023 levels. This also suggests, as with previous estimates, that the UK remains home to 1 in 4 (25%) of the global safety tech workforce.
6. Investment in Safety Tech Providers
This section sets out an overview of the investment landscape for the firms identified within this sectoral analysis, using the Beauhurst platform as an evidence source.
Beauhurst tracks announced and private investments, along with the performance of high-growth companies in the UK. It also monitors UK business participation within well-known business accelerator and incubator initiatives, and tracks where businesses have secured funding from public bodies such as Innovate UK.
Investment Activity to Date
Between 2018 and 2022, the volume and value of external investment in safety tech companies grew significantly. However, the previous analysis also suggested that the investment landscape has become more challenging, both for safety tech providers and for tech platforms more generally.
In 2024, total external investment fell from £42 million to £24 million (a reduction of 43%). However, the volume of deals (15) remains broadly similar to the past 2 years, suggesting ongoing investor appetite for safety tech firms, albeit at reduced deal values.
Whilst this is a challenging finding, it is consistent with wider investment trends within technology sectors: the UK cyber security sector, for example, also experienced a 24% reduction in investment values between 2023 and 2024. The International State of Safety Tech (2024) research also suggests an expected decline in global safety tech investment, with deal volumes expected to reduce by a third, and deal values by two-thirds, globally in 2024. As noted in these research reports, this is consistent with wider VC investment trends in a context of higher interest rates and more uncertain macroeconomic conditions compared to pre-2023.
Consultees highlighted several sector-specific factors influencing investment, including investor awareness of the sector, firms’ ability to achieve the right product-market fit, the policy landscape, and the infrastructure available to support the sector in raising investment (e.g. business support and accelerators relevant to safety tech firms).
As this data covers the previous full year (2024), it is retrospective in nature and suggests that some of the challenges cited by safety tech providers have been borne out in the investment figures.
Figure 4: Safety Tech Investment Raised (2018 – 2024)
Source: Perspective Economics, Beauhurst
Example Deals in 2024:
-
OpenOrigins develops technology to ensure the authenticity of content and combat the spread of deepfakes. They raised $4.5 million in a seed round in November 2024, led by Galaxy Interactive[footnote 1]. The investment will support OpenOrigins’ global expansion as it aims to scale its media authenticity platform. Ensuring the authenticity of digital content can help sectors such as insurance to secure and verify the provenance of claims and to reduce fraud within the claims process. It can also support use cases such as archives, watchdogs, marketplaces, and live video calls.
-
Refute focuses on combating misinformation and deepfakes through a platform that verifies content in real-time. They raised £2.3 million in a pre-seed round co-led by Playfair Capital and Episode1 Ventures in December 2024[footnote 2].
-
Nisien.ai develops advanced solutions for real-time online harm detection and content moderation. In 2024, Nisien.ai secured investment from the British Business Bank’s £130 million Investment Fund for Wales[footnote 3]. This investment, facilitated by Foresight Group and the Development Bank of Wales, will be used to scale operations, hire key talent, and accelerate R&D. Their flagship ‘HERO Detect’ platform is designed to identify and classify online harms, supporting the maintenance of healthy online environments.
-
Egregious helps organisations counter AI-powered deception, disinformation, misinformation, polarisation and harmful content online. In 2024, Egregious raised $1 million in a pre-seed round led by Fuel Ventures and Oxford Capital[footnote 4].
-
Arwen.ai offers AI-driven solutions for social media and content moderation, aimed at creating safer and more engaging online communities. Arwen.ai was the first UK startup to secure investment from Creative UK’s new Creative Growth Finance II Fund in early 2024. This will help Arwen scale their operations, enhance their product offerings, and expand their customer base, further developing tools that support positive community engagement. Arwen has worked with a wide range of sectors, including supporting dating apps such as Hinge[footnote 5], and sports clubs such as Fulham[footnote 6] with content moderation.
7. Supporting Growth in the Safety Tech Sector
The UK’s safety tech ecosystem is shaped by a complex and wide range of factors that will determine its growth trajectory.
Growth Drivers in Safety Tech:
Whilst the annual growth rate is lower than that of previous studies, several growth drivers continue to underpin the UK safety tech sector. These include:
- Regulatory requirements to address online harm: The implementation of the Online Safety Act 2023 was viewed as a watershed moment for online harms legislation. Ofcom first set out its implementation roadmap in October 2023, then updated it in October 2024. This covers 3 main phases, relating to illegal harms; child safety, pornography and the protection of women and girls; and transparency, categorisation and additional duties for categorised services.
In early 2025, Ofcom launched its enforcement programme to assess industry compliance, including protecting children from encountering pornographic content through the use of age assurance, and in March 2025 key duties came into force under the Online Safety Act. Ofcom has also set out a practical timeline of duties and actions for platforms to ensure online safety compliance.
This means that platforms must proactively manage harmful content through formal risk assessments, moderation measures, and appropriate age checks. Ofcom has also worked with industry and platforms through direct engagement, such as raising issues and working with platforms to improve safety measures, in addition to setting out guidance, access statements, and advice for a range of online services.
The combination of regulatory requirements, the risk of fines or action for non-compliance, and the wider support to help enable compliance and improve online safety on platforms will collectively drive additional demand for online safety solutions.
- The rise of generative AI, bringing new challenges and solutions: Generative AI is creating both new risks and opportunities for online safety provision. The rapid proliferation of AI-generated harmful content, such as deepfakes and misinformation, has led to platforms increasingly relying on specialised safety tech solutions. For example, Hive has created an AI-Generated Image and Video Detection API, which classifies images as human-created or AI-generated to help organisations detect manipulated media at scale. Firms such as ActiveFence and Spectrum Labs have also expanded their offerings specifically to combat generative AI harms, including review of content generated by LLMs and chatbots.
Ofcom has also recognised the urgency of addressing generative AI harms through initiatives like their recent Red Teaming for GenAI Harms paper, published in July 2024. This guidance promotes best practices for proactively identifying vulnerabilities in generative AI models and demonstrates Ofcom’s support and collaboration with safety tech providers on emerging AI threats.
However, generative AI is expected to continue to lower the barriers to creating harmful, manipulated, or deceptive content, which has already led to a significant increase in deepfakes, disinformation, and automated harassment at scale. As such, initiatives such as the Deepfake Detection Challenge, initiated by the Home Office, the Department for Science, Innovation and Technology, the Accelerated Capability Environment (ACE) and the Alan Turing Institute, will play a crucial role in knowledge sharing, benchmarking, and testing responses to detect, mitigate, and respond to illegal and harmful content that is synthetic or AI-generated. Several safety tech firms have taken part in these challenges. Partnership working will be required to tackle deepfakes across a range of use cases and mediums, recognising that there is a continual ‘arms race’ in tackling illegal and harmful AI-generated content, and that AI is also a benefit in helping to identify and tackle this content at scale.
-
Compliance, costs and complexity: As regulatory demands grow, platforms are expected to face increasing compliance costs and complexity. The ongoing trend towards “Safety Tech as a Service” (STaaS) may enable accessible moderation, compliance frameworks, and certification services tailored to particular platforms or SMEs. The availability of affordable, easy-to-integrate compliance solutions is anticipated to expand adoption, especially among smaller enterprises and platforms that require external support with moderation and compliance.
-
The role of government and public investment: Policy and the role of public buyers and investment will continue to remain a key driver in shaping the safety tech sector. The UK government has been actively enhancing its approach to AI safety and security. In February 2025, the AI Safety Institute was rebranded as the AI Security Institute to better reflect its expanded focus on addressing AI-related risks to national security, fraud, and the misuse of AI. It will work closely with government partners, including the Defence Science and Technology Laboratory (DSTL), the Laboratory for AI Security Research (LASR), and will launch a ‘new criminal misuse team which will work jointly with the Home Office to conduct research on a range of crime and security issues which threaten to harm British citizens.’
Factors slowing recent growth:
-
A maturing market: As Section 5 demonstrates, the UK safety tech market shows signs of maturation, with annual growth rates moderating from approximately 29% (the compound growth rate between 2020 and 2024) to 13% this year. This shift reflects a natural evolution rather than decline: as markets mature, growth rates typically stabilise while absolute growth continues. The sector now exhibits characteristics of maturity, including increased competition, greater company scale, and more established market reach. Additionally, larger firms in the sector are increasingly influenced by the steady demands of major customers, which is expected to contribute to softer growth patterns, as seen this year.
-
Regulatory uncertainty and delayed spending: While the Online Safety Act has passed, enforcement will be rolled out in phases, which may cause some platforms to ‘wait and see’ or delay their investments until they have full clarity on compliance requirements. Ofcom recognises that there is a ‘job of work to do’ to ensure firms are compliant. Firms will be expected to demonstrate sufficient compliance; however, some smaller sites and forums may determine that the costs or liabilities of compliance are too high. For example, Microcosm (a non-profit forum platform) announced it would shut down in March 2025.
Further, some major platforms are reviewing their approaches to trust and safety in different ways. Whilst large platforms are expected to comply with the OSA, they may proactively invest in some areas but reduce expenditure in others. For example, areas such as age assurance and protecting children online have become increasingly embedded (e.g. Yoti’s partnership with Meta for age verification). However, platforms such as Meta have also revised their approach to fact-checking, ending third-party fact-checking in the United States initially and moving to a Community Notes model. This may mean that some platforms wait to see how legislation and standards align or vary across jurisdictions, and deploy safety technologies either consistently or contextually as a result.
- AI and Automation in Online Safety Processes: The shift towards AI-driven content moderation is reshaping the landscape. For example, in October 2024, TikTok announced it would enhance moderation operations through AI, resulting in hundreds of layoffs among human moderators. According to ByteDance, the company “expects to invest $2 billion globally in trust and safety this year and will continue to improve efficiency, with 80% of guidelines-violating content now removed by automated technologies.”
This represents a significant shift towards algorithmic content management whilst reducing human oversight. Multiple other major platforms are following similar trajectories, implementing more sophisticated AI moderation systems that require less human intervention. This is being accelerated by both efficiency requirements and the need to process content volumes that would be challenging for human moderation teams.
This may also place downward pressure on staffing levels within outsourced trust and safety teams, as reflected in Section 5. Safety tech firms may also re-examine pricing and go-to-market approaches, to ensure that their technologies (e.g. moderation APIs) are competitive, accessible, efficient – but also are accurate, explainable, and impactful.
- Data access challenges and restrictions: Access to relevant data remains a challenge for safety tech providers. For example, many large tech platforms (such as X) have restricted access to their APIs, which were previously used extensively by safety tech firms such as Block Party and by academic researchers. This can impact the training and performance of moderation algorithms dependent on novel data streams from major social platforms.
Further, social media platform data is increasingly viewed as a competitive advantage, particularly when used for LLM training. Content may therefore become available at a higher cost for LLM training (e.g. OpenAI and Reddit’s partnership), which may exclude smaller entities from access.
Beyond social media, access to high-quality biometric datasets, such as diverse facial images and voice recordings, poses another challenge for firms. These datasets are essential for developing technologies aimed at identity verification and deepfake detection. However, access is often limited to entities like government or large businesses that may collect user data under strict privacy terms. For instance, facial recognition databases might be available to law enforcement but not to independent researchers or small startups without the resources to negotiate access.
- Open-source and in-house solutions mitigating third-party demand: The emergence of open-source tools may also shape market dynamics for commercial providers. For example, in February 2025, ROOST (Robust Open Online Safety Tools) was announced, supported by Google, OpenAI, Discord, Roblox and others, to provide freely available, open-source building blocks to safeguard global users and communities. This will enable small platforms and startups to implement moderation solutions without incurring substantial costs.
Projects like the OECD’s Tools and Metrics for Trustworthy AI catalogue hundreds of open-source tools to help tackle risks and harms. Open access to safety technologies through open-source initiatives may help to reduce implementation costs for platforms. However, given the complexity and range of online harms, there remains a strong role for commercial and managed support.
Trust and Safety Procurement
A keyword search of trust and safety language on the Tussell procurement platform has been used to provide an indicative view of public sector demand for safety tech products and services across government, alongside commissioned research projects.
We find increased demand for safety tech solutions across the public sector, as shown below. We note that annual data may lag, where authorities report awards retrospectively, and that individual years may show variance where a ‘one-off’ large contract is tendered. However, 2024 data suggest an uptick in demand for safety tech solutions in both value and volume (£48 million across 122 contracts).
A review of 2024 data suggests ongoing demand for digital forensics within police forces (14 contracts), user research and population surveys exploring online safety and media literacy (18 contracts issued by Ofcom), and support with web filtering and social media moderation within education, NHS and local authorities.
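The keyword-search approach used here can be sketched as a simple filter over contract records. The keyword list, record fields, and contract values below are illustrative assumptions, not the actual Tussell query or data.

```python
# Illustrative sketch of a keyword search over procurement records.
# The keywords and record fields are assumptions for illustration;
# they are not the actual Tussell query used in the analysis.

SAFETY_TECH_KEYWORDS = [
    "content moderation", "online safety", "digital forensics",
    "age assurance", "web filtering", "trust and safety",
]

def match_safety_tech(record: dict) -> bool:
    """Return True if any keyword appears in the contract title or description."""
    text = (record.get("title", "") + " " + record.get("description", "")).lower()
    return any(keyword in text for keyword in SAFETY_TECH_KEYWORDS)

def summarise(records: list[dict]) -> tuple[int, float]:
    """Count matching contracts and sum their awarded values (in GBP)."""
    matched = [r for r in records if match_safety_tech(r)]
    return len(matched), sum(r.get("value_gbp", 0.0) for r in matched)

# Hypothetical contract records for illustration only
contracts = [
    {"title": "Digital forensics services for police force", "value_gbp": 250_000.0},
    {"title": "Office stationery supplies", "value_gbp": 12_000.0},
    {"title": "Web filtering for schools", "value_gbp": 80_000.0},
]
volume, value = summarise(contracts)
print(volume, value)  # 2 330000.0
```

In practice such a search would also need manual review of matches, since generic keywords can pick up unrelated contracts.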
Figure 5: Safety Tech Public Procurement Spend (2019 – 2024)
Source: Perspective Economics analysis of Tussell data (2019 - 2024)
Key Buyers and Partnerships
Within this research, we have reviewed company web data (websites and press releases) to explore how the safety tech sector engages with wider industry and partners. This is also explored in DSIT’s recent research exploring Technology and Trust and Safety, which suggests that external adoption of safety tech solutions is growing, driven by regulation, increasing standards and transparency.
In March 2025, we identified 102 safety tech providers that mentioned 872 customers or partnerships with other businesses or organisations. Overall, this suggests that approximately three-quarters of providers (with a known website) publicly mention indicative customers or partnerships in the market.
The remaining safety tech businesses without an identified partnership online were typically focused on the domains of detecting and responding to illegal content, digital forensics, and OSINT. Many of these businesses mention working with law enforcement or government clients, but do not disclose these partnerships due to commercial sensitivities.
For the safety tech businesses that do mention partnerships or customers directly, we find an average of 8 partnerships per business (median of 7). We classify the count of these partnerships by safety tech taxonomy category (best-fit), and the sector of the buyer or partner (classified based on core offering).
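The average and median figures above can be reproduced with a short calculation; the per-provider counts used here are hypothetical and chosen only to illustrate the summary statistics.

```python
# Sketch of the per-provider partnership summary statistics.
# The counts below are hypothetical, for illustration only.
import statistics

# partnerships identified per provider (illustrative data)
partnership_counts = [2, 4, 7, 7, 7, 13, 16]

mean_partnerships = statistics.mean(partnership_counts)
median_partnerships = statistics.median(partnership_counts)
print(mean_partnerships, median_partnerships)  # 8 7
```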
Table 3: Partnerships Identified between Safety Tech providers and wider markets
| Taxonomy | Count | Percentage | Key Findings |
|---|---|---|---|
| Platform level | 262 | 30% | The data continues to highlight that safety tech providers focused on areas such as content moderation or brand safety are more likely to highlight these partnerships and how their solutions can help vendors achieve a positive ROI through investing in trust and safety. A review of web data shows that several providers use testimonials from vendors to articulate these benefits - for example, addressing toxicity at scale, or maximising user engagement on platforms. Customers typically include a mix of social media and tech firms, but the data also highlights the reach and usage of platform-level solutions by recognised brands across consumer-facing sectors. |
| Information governance | 108 | 12% | The data highlights a range of partnerships between safety tech firms focused on tackling disinformation, and organisations focused on advertising, marketing, media and branding e.g. to help verify content and provide trust and assurance. |
| Age orientated online safety | 132 | 15% | Ensuring adequate provision for age-checking and user verification is a key component of online safety, as well as regulatory compliance in areas such as retail and finance. The data finds significant evidence of safety tech providers supporting a wide range of firms to comply with privacy regulation, as well as innovative forms of user verification e.g. Yoti’s partnership with Meta to verify age on Instagram. |
| Online safety professional services | 94 | 11% | As the adoption of trust and safety has grown, this has also led to an increase in associated professional advisory support. These organisations can advise on how to build platforms with safety by design, or how they can use safety technology within their existing platform or enhance user safeguarding. |
| System-wide governance | 77 | 9% | Online platforms have an obligation to ensure that they can detect, mitigate, report, and respond to illegal content or behaviour. The review of UK safety tech providers highlights strong customer relationships between UK providers and law enforcement bodies. However, it also highlights major retailers and brands working with providers, particularly to verify transactions and counter fraud. |
| User protection | 115 | 13% | This category includes providers typically focusing on endpoint protection on a Business to Consumer (B2C) basis e.g. safety software that can be installed on a device. The research finds safety tech providers working with schools and education settings, in addition to some partnerships with Internet Service Providers (ISPs) and search engines. |
| Network filtering | 84 | 10% | We find some partnerships between filtering providers and schools, and with brands with open WiFi (e.g. retailers, airports, and coffee shops). |
| Total | 872 | 100% | |
Figure 6 highlights the count of partnerships identified between providers and wider sectors. A higher figure is denoted in a darker colour, and indicates where there appears to be market demand for particular solutions. For example, there is a strong overlap between:
-
Advertising and marketing firms engaging with information governance safety tech firms to help ensure brand safety, and prevent inadvertent sharing of harmful or hateful content.
-
Banking and finance firms engaging with age assurance providers (aligned to digital identity) and platform level firms (to support with moderation in customer processes)
-
Large retail, technology, and food and drink brands engaging with platform level providers to support brand safety operations
-
Education providers engaging with network filtering and endpoint protection products and services
-
Government and telecommunications providers working closely with system-wide providers to help identify and respond to illegal harms.
Figure 6: Mapping Customers between Safety Tech Providers and the wider economy
| Classification (Summary) | Platform Level | User Protection | Online Safety Professional Services | System-wide Governance | Information Governance | Age Orientated Online Safety | Network Filtering |
|---|---|---|---|---|---|---|---|
| Advertising and Marketing | 1 | 5 | 1 | 1 | 0 | 0 | 0 |
| Banking and Finance | 27 | 3 | 4 | 6 | 13 | 32 | 0 |
| Business Services | 4 | 9 | 11 | 11 | 7 | 8 | 1 |
| Charity | 12 | 3 | 6 | 1 | 6 | 1 | 1 |
| Online Safety | 0 | 0 | 3 | 1 | 0 | 1 | 0 |
| Government | 16 | 2 | 21 | 20 | 8 | 5 | 4 |
| Education | 3 | 64 | 20 | 3 | 3 | 9 | 61 |
| Energy and Utilities | 7 | 1 | 0 | 1 | 7 | 3 | 0 |
| Food and Drink | 22 | 2 | 2 | 2 | 8 | 1 | 3 |
| Gaming | 12 | 2 | 0 | 1 | 2 | 11 | 0 |
| Health | 19 | 4 | 4 | 5 | 8 | 3 | 2 |
| Media | 22 | 1 | 1 | 0 | 7 | 2 | 0 |
| Technology | 29 | 3 | 7 | 15 | 6 | 15 | 2 |
| Retail | 30 | 4 | 2 | 2 | 12 | 16 | 2 |
| Sports | 11 | 0 | 8 | 2 | 10 | 7 | 2 |
| Telecoms | 10 | 1 | 2 | 1 | 0 | 5 | 4 |
| Other | 33 | 7 | 2 | 5 | 8 | 8 | 2 |
n = 856 customers identified between safety tech providers and wider organisations via web data
Appendix A: methodology
This section sets out our research methodology for identifying safety tech businesses, and capturing their respective economic contribution. This methodology is consistent with the previous reports.
Stage 1: data review
The research team recognise that safety tech is a challenging sector to define, and that it contains businesses which overlap with a number of other sub-domains and classifications. To support the development of a sector definition, the research team used the initial definition and sector taxonomy from the previous studies. This was subsequently subject to minor revisions, and tested through stakeholder workshops to inform a 2023 definition and market taxonomy. This definition has been used in this report.
This helped to:
-
consider any new language or business models used by safety tech providers (e.g. such as public safety, content provenance, brand safety)
-
Identify any companies not previously identified, or recent start-ups in safety tech
-
Identify where any platforms are integrating trust and safety solutions, which can have market implications
The project team also reviewed all known dedicated safety tech firms, excluding any that have ceased trading, as well as over 700 global safety tech businesses in order to explore any international firms registering in the UK. The purpose of this step was to identify any new keywords, companies, or organisations relevant to the safety tech domain, prior to the revised taxonomy and firm identification process.
At this initial stage, the project team identified as many potential input sources or pre-existing lists as possible of firms classified as relevant to a number of agreed high-level terms, e.g. content moderation, digital forensics, Violence Against Women and Girls (VAWG), brand safety. This supported the project team in building upon the previous year’s list, as well as including additional providers that fit the revised taxonomy. Perspective Economics holds a number of datasets and licence agreements to help identify relevant businesses. Analysis was undertaken with Companies House data, Beauhurst, Tussell, web data, and FDI Markets to help identify firms for potential inclusion in the sectoral modelling. The long-list was subsequently reviewed to:
-
Classify safety tech firms into market categories based on web descriptions (undertaken using a bespoke classifier, and subject to manual review).
-
Assign markers to each of the companies identified to date, to highlight broader validation for inclusion within the final safety tech sector list. For example, if a firm has a clear description showing that it develops safety tech products, has a team of engineers and data scientists, and has secured external funding or validation, it would be a strong candidate for inclusion. If a firm is only identified in a small number of sources, and/or has a more limited aligned definition, it would be subject to a manual review.
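As a rough illustration of the classification step, a keyword-based best-fit classifier over web descriptions might look as follows. The category keyword lists are assumptions for illustration; the bespoke classifier used in the study, together with its manual review stage, is more sophisticated.

```python
# Minimal sketch of keyword-based classification of firms into
# taxonomy categories from web descriptions. The keyword lists are
# illustrative assumptions, not the study's actual classifier.

CATEGORY_KEYWORDS = {
    "Platform level": ["content moderation", "brand safety", "toxicity"],
    "Age orientated online safety": ["age assurance", "age verification"],
    "Network filtering": ["web filtering", "dns filtering"],
    "Information governance": ["disinformation", "misinformation", "deepfake"],
}

def classify(description: str) -> list[str]:
    """Return best-fit categories whose keywords appear in the description."""
    text = description.lower()
    return [
        category
        for category, keywords in CATEGORY_KEYWORDS.items()
        if any(keyword in text for keyword in keywords)
    ]

print(classify("We provide real-time content moderation and brand safety tools."))
# ['Platform level']
```

A firm matching no category, or several, would be flagged for the manual review described above.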
Stage 2: taxonomy development
In 2019, Perspective Economics designed the first safety tech sector taxonomy, in collaboration with DCMS and industry stakeholders. This recognised a need to set out the core products and services offered, as well as the technology, harms addressed, approach, and benefit to end users. In 2023, Perspective Economics engaged with OSTIA and DSIT to provide an update to the existing taxonomy (see Appendix B).
Stage 3: identification of firms and initial modelling
Using the updated taxonomy and keywords, the project team searched for additional firms to include within this study. All company data was enhanced using a blend of proprietary sources, web data (using identification algorithms), and Companies House. Sector modelling included an assessment of company accounts, investment data, and estimated team sizes to identify an initial assessment of how the safety tech sector has grown, and helped to identify any gaps for subsequent primary research (consultations).
Stage 4: primary research
Perspective Economics and PUBLIC engaged with over 15 consultees via qualitative interviews and surveys. Consultations included industry, investors, and wider stakeholders in the UK and internationally. Themes consisted of:
-
Discussion of the organisation’s engagement with online safety / safety tech / trust and safety or AI
-
Exploration of relevant themes (growth of the sector, challenges for the adoption of safety tech, where intervention or support might be most appropriate, drivers for growth, potential impact of regulation on the safety tech industry etc). Firms were also asked thematically about AI safety, generative AI, and metaverse (VR/AR technologies) in 2025.
-
Key barriers to business development and growth.
-
Identification of business partnerships or supply chain activities, or engagement with R&D.
Appendix B: taxonomy update
Definition context and considerations
The Department for Digital, Culture, Media and Sport (DCMS) (now DSIT) published the Safer Technology Safer Users report in May 2020. This research piece was conducted by Perspective Economics with advisory input from Professor Mary Aiken and Professor Julia Davidson and provides an outline of the online safety technology or safety tech sector profile in the UK.
A number of steps were taken within this original study to define the scope of the safety tech sector. Factors considered include:
-
the technical response utilised to reduce harm (such as detection and removal of illegal and harmful content, age-appropriate design or age-based safeguarding, detection of disinformation, and content filtering);
-
the type of harm that solutions seek to address (such as illegal video and image-based content, hate speech, child exploitation, sexual material, personal harm, violence, bullying and harassment);
-
the type and extent of the risk involved: for example, identifying solutions that can detect and notify platforms and law enforcement about high-risk behaviours online;
-
those at most risk of harm, such as children and young people, those vulnerable to grooming or radicalisation, or those who may not be aware they are exposed to harm (e.g. disinformation or deepfakes); and
-
the technologies and approaches deployed to counter the harm (for example, risk detection and response through artificial intelligence (AI) or machine learning (ML) approaches).
For the purposes of this research, we retain the following definition of the safety tech sector:
Any organisation involved in developing technology or solutions to facilitate safer online experiences, and to protect users from harmful content, contact or conduct.
When considering ‘online safety’ in the broadest sense, the risks can include:
-
Content: being exposed to illegal, inappropriate, or harmful material;
-
Contact: being subjected to harmful online interaction with other users; and
-
Conduct: personal online behaviour that increases the likelihood of, or causes, harm.
As such, this research seeks to identify organisations that provide or implement technical products or solutions that either help to:
-
Protect users from social harms when using technology and online platforms or services (typically through filtering or controls, or through detection and removal of potentially harmful content); or
-
Provide mechanisms to flag, moderate, or intervene in the event of illegal or harmful incidents when using online platforms or services.
For avoidance of doubt, this research seeks to identify and understand organisations that:
-
Help trace, locate and facilitate the removal of illegal content online;
-
Work with social media, gaming, and content providers to identify harmful behaviour within their platforms;
-
Monitor, detect and share online harm threats with industry and law enforcement in real-time;
-
Develop trusted online platforms that are age-appropriate and provide parental reassurance for when children are online;
-
Identify and counter fraudulent advertising and scams online;
-
Support individuals in maintaining and limiting access to personal data, or in responding to incidents of doxxing or revenge porn;
-
Verify and assure the age of users;
-
Actively identify and respond to instances of online harm, bullying, harassment and abuse;
-
Filter, block and flag harmful content at a network or device level;
-
Detect and disrupt false, misleading, or harmful narratives; and
-
Advise and support a community of moderators to identify and remove harmful content.
As reflected in this list, given the scale of online harms that exist, there are a wide range of technical and applied solutions that can help counter these, and our research seeks to identify the scale and breadth of approaches that exist in the market.
Taxonomy
System-wide level factors
This refers to organisations involved at the highest levels, to trace, locate and remove (or help to facilitate the removal of) illegal content online.
These organisations help to identify and tackle some of the internet’s most harmful content e.g., child sexual abuse and exploitation, terrorist content. This can be achieved through:
-
Working closely with law enforcement to assist with investigative capabilities e.g., use of forensic science tools to scan and detect known illegal content using MD5 hashes, or use of AI and tools to identify patterns of harmful behaviour online;
-
Maintaining and providing access to technology aimed at preventing the upload, and facilitating the removal of illegal content e.g., the IWF’s Hash List (with Microsoft PhotoDNA); and
-
Combating abuse or threats with automated content analysis and artificial intelligence e.g., automated detection of terrorist content, including previously unseen material.
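The hash-matching approach described above can be sketched as an exact-match lookup of a file’s digest against a list of known hashes. The hash set below is a placeholder; operational systems use curated lists such as the IWF Hash List, and often perceptual hashing (e.g. Microsoft PhotoDNA) that can match altered copies, which plain MD5 cannot.

```python
# Sketch of hash-list matching against known illegal content.
# The hash set is a placeholder; real systems use curated lists
# (e.g. the IWF Hash List) and perceptual hashing such as PhotoDNA,
# which also matches slightly altered copies, unlike exact MD5.
import hashlib

KNOWN_HASHES = {
    "9e107d9d372bb6826bd81d3542a419d6",  # placeholder digest, not real data
}

def md5_digest(data: bytes) -> str:
    """Compute the hexadecimal MD5 digest of the uploaded bytes."""
    return hashlib.md5(data).hexdigest()

def is_known_content(data: bytes) -> bool:
    """Exact-match check of the upload's MD5 digest against the hash list."""
    return md5_digest(data) in KNOWN_HASHES

sample = b"The quick brown fox jumps over the lazy dog"
print(is_known_content(sample))  # True
```

An upload pipeline would run such a check before content is published, blocking and reporting any match.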
Please note that the proposed updates at the system level also consider the harms associated with fraudulent online advertisements and commerce.
Platform level factors
This refers to organisations that are involved in making online services safer and typically work at the platform level i.e., work alongside social media, gaming, and content providers to improve safety and behaviour within their platforms. These have been segmented into the sub-categories outlined in the subsections below.
Platform governance
These organisations are focused upon helping providers of online content to govern their offering with respect to illegal content. Whilst there is some overlap with ‘System Governance’, this is more focused on organisations that help to tackle issues such as:
- Embedding prevention mechanisms e.g., using machine learning to prevent the production of indecent underage imagery on social media platforms;
- Identifying and blocking harmful images and videos in real-time; and
- Identifying child abuse or grooming in conversations.
Platform moderation and monitoring
These organisations also help providers of online content to monitor and moderate behaviour and content posted within their platforms. This is typically focused on reducing harmful content or behaviour e.g., offensive language, bullying, toxic content, and fraudulent online advertisements and commerce. This can include:
- Moderation and monitoring of content e.g., pre-moderation or post-moderation of content, undertaken by automated content analysis and / or human moderators;
- Chat moderation e.g., identifying and removing users based on the language or words used; and
- Behavioural monitoring e.g., identifying good and bad behaviour, typically using Natural Language Processing, within online communities.
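A minimal sketch of the keyword-based end of chat moderation is shown below. The block list is hypothetical and chosen purely for illustration; production systems layer machine-learning classifiers and human review on top of word lists, precisely because word lists alone are easy to evade and prone to false positives.

```python
import re

# Hypothetical block list for illustration only; real moderation systems
# combine curated lists, ML classifiers, and human reviewers.
BLOCKED_TERMS = {"idiot", "loser"}

def moderate(text: str) -> dict:
    """Pre-moderation check: flag a message containing a blocked term.

    Word-boundary matching avoids the 'Scunthorpe problem' of matching
    substrings inside innocent words.
    """
    words = {w.lower() for w in re.findall(r"[a-z']+", text, re.IGNORECASE)}
    hits = sorted(words & BLOCKED_TERMS)
    return {"allowed": not hits, "matched_terms": hits}

print(moderate("You absolute loser"))  # {'allowed': False, 'matched_terms': ['loser']}
print(moderate("Nice goal!"))          # {'allowed': True, 'matched_terms': []}
```

The choice between pre-moderation (checking before publication, as here) and post-moderation (reviewing after publication) is a trade-off between user friction and speed of harm removal.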
Age orientated online safety
These organisations seek to support online content providers in 2 ways:
- Making platforms age-appropriate and increasing the privacy of children online (e.g., compliant with the UK GDPR, or with content and access requirements suitable for websites or apps targeted at under-18s), thereby ensuring ‘safety by design’; or
- Providing age assurance services (i.e., helping companies to validate and confirm that only particular age groups can access particular content).
Endpoint level factors
This refers to organisations that provide products or services that help to ensure that the device used by the end-user is suitably secure with respect to online safety. This focuses on online safety solutions (i.e., ensuring that the user’s risks with respect to content, conduct, and contact are reduced). It does not include endpoint protection from viruses, malware, or adware, which is covered by ‘cyber security’. User protection at the endpoint level can be segmented into 2 main categories.
User initiated protection (user, parental and device-based)
This includes organisations that provide products or services that can be installed on devices to help protect the end-user from online harms (typically installed by a parent or guardian on behalf of a child). The underlying ambition is to create a safer online experience for the user e.g., through safeguarding assistants, oversight of social media content, or monitoring of a child’s digital or online behaviour and interaction with other users. Where deployed, these solutions can help to prevent issues relating to sexting, grooming, bullying, harassment, abuse, or aggression. Solutions may also support users by promoting awareness of personal data rights and / or preventing public access to private information (where consent is not provided) e.g., supporting those affected by ‘revenge porn’.
Network filtering
This includes organisations involved in providing products or services that actively filter content (e.g., through white-listing or black-listing, or through actively blocking content perceived to be harmful or illegal). Such solutions are often provided to schools or home users to filter content.
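The white-listing / black-listing approach can be sketched as a host-based lookup: each URL’s host is checked against policy lists, with subdomains inheriting the decision of a listed parent domain. The domains below are hypothetical placeholders; real filtering products rely on large, vendor-maintained categorised databases rather than hand-written lists.

```python
from urllib.parse import urlparse

# Hypothetical policy lists for illustration; commercial filters use
# categorised domain databases maintained by the vendor.
BLOCK_LIST = {"gambling.example", "adult.example"}
ALLOW_LIST = {"bbc.co.uk", "wikipedia.org"}

def filter_decision(url: str, default: str = "allow") -> str:
    """Return 'allow' or 'block' for a URL based on its host.

    Checks the host and each parent domain, so subdomains inherit the
    decision of a listed domain.
    """
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    for i in range(len(parts)):
        domain = ".".join(parts[i:])
        if domain in BLOCK_LIST:
            return "block"
        if domain in ALLOW_LIST:
            return "allow"
    return default

print(filter_decision("https://www.gambling.example/play"))  # block
print(filter_decision("https://en.wikipedia.org/wiki/UK"))   # allow
```

Whether the `default` for unlisted domains is "allow" (black-listing) or "block" (white-listing) is the key policy choice: school deployments typically default to blocking, home deployments to allowing.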
Information environment level factors
This refers to organisations that actively detect and disrupt false, misleading and / or harmful narratives.
Information governance
This includes tackling misinformation and disinformation through the provision of fact checking, disinformation research, and disruption. Organisations within this space seek to ensure the accuracy of information available to citizens and to facilitate trust in the information environment and wider society.
Online safety professional services
This includes organisations typically involved in supporting the design, implementation and testing of online safety through the provision of compliance services, research, and frameworks and methodologies for auditing, evaluating or mitigating potential harms, helping to enable the development of safer online communities.
Further, this analysis has also sought to identify organisations that support the development and scaling of online safety products and services but do so in an advocacy capacity e.g., civil society organisations.
- OpenOrigins secures $4.5 million to combat AI deepfakes with blockchain tech
- Refute secures £2.3 million to protect businesses from coordinated disinformation campaigns
- Welsh AI start-up, Nisien.AI to accelerate mission to make online world safer after Investment Fund for Wales funding
- Egregious lands $1 million pre-seed funding to tackle AI-driven disinformation
- Hinge - Moderating toxic ad comments decreases customer acquisition costs by 19%
- Fulham FC - Moderating spam and hate comments increases engagement by up to 27%