Case study

Keeping police one step ahead of criminals using AI

ACE developed a portfolio of in-depth reports to increase understanding of how AI products in four key areas could be used by criminals.

The Public Safety Group (PSG), part of the Home Office, commissioned the Accelerated Capability Environment (ACE) to increase understanding across government of AI products, especially those based on Generative AI (GenAI), that are being released onto the market and how these could be used by criminals. 

GenAI is already accelerating crime types including fraud, child sexual abuse material, abuse through non-consensual intimate images (NCII) and disinformation. In future, these threats could grow and new ones could emerge. AI's potential to provide instructions and assistance for carrying out a wide range of crimes is also a cause for significant concern.

PSG wanted to better understand the GenAI sector and the specific crime risks it poses, both now and in the future, and to approach this problem by analysing AI products and the risks they present rather than waiting to observe criminal activity. ACE's expert analysts were ideally suited to this work.

ACE started by creating a capability map and a baseline understanding of publicly available products and markets in four areas. These were:

  • image and video generators, including so-called “nudification” apps, which are used to create synthetic and deepfake NCII

  • chatbots based on large language models, which can be misused for criminal or malicious purposes

  • voice cloners, with a focus on how they are being used in areas such as fraud

  • data and predictive analytics tools, which can be misused to identify potential victims in large datasets, as well as for personalisation and social engineering

ACE rapidly researched and delivered an initial baseline report, as well as individual deeper dives into each of the above areas. This pace was important because it enabled PSG to get ahead of a fast-moving area of government policy.

This portfolio provided a comprehensive and up-to-date understanding of AI products relevant to criminal activities.

It identified key AI products on the market (from both large and small providers), their associated risks and threats, and advances in best-practice safety measures implemented by companies to prevent crime, with the aim of strengthening situational awareness and better informing policy-making.

A second request was for ongoing horizon scanning: a monthly newsletter exploring newly released AI products and how they might be used by criminals, with content and analysis specific to policing, crime and AI. This is circulated to over 350 people across the policing and law enforcement community.

As the UK develops an AI safety approach based on regulation, testing and voluntary regimes, and policing is confronted with increasing levels of AI-enabled crime, maintaining an understanding of developments in the AI sector will be critical to mitigating this growing threat.  

Published 7 October 2025