Guidance

AI Security Institute – Frontier AI Trends report factsheet

Published 18 December 2025

What is the AI Security Institute’s Frontier AI Trends Report? 

The Frontier AI Trends Report is the AI Security Institute’s first public overview of how the most advanced AI systems are evolving. It brings together 2 years of government-led testing of leading AI models, identifying major trends in areas such as cyber security, biology and chemistry assistance, autonomous behaviour, safeguards, and human influence. It is an accessible, evidence‑based summary designed to cut through hype and misinformation by providing a clear, authoritative picture of what frontier AI systems can and cannot currently do. 

Why is the government publishing this report?

The government is publishing this report to strengthen transparency, improve public understanding and shape responsible debate about fast‑moving AI capabilities. By sharing tested evidence, the government aims to build trust, promote openness and highlight the world‑leading technical work carried out by the UK’s AI Security Institute. 

How will the government use this report?

The report provides a shared evidence base for government, industry and researchers. It will be used to inform policy decisions, guide collaboration with AI developers and help ensure the UK stays ahead of fast‑changing risks. Findings from AISI’s evaluations feed into day‑to‑day engagement with AI companies, national security partners and international counterparts. This work strengthens safeguards in real systems, helps companies make risk‑aware deployment decisions, and provides the public with visible proof of the UK’s commitment to responsible AI development. 

How is the government addressing possible risks associated with advancing capabilities?

The government is taking a long‑term, science‑led approach to managing emerging AI risks. This includes preparing for the possibility that AI systems will have rapid, transformative impacts on society and national security. 

Key actions include:  

  • Continuous evaluation: AISI conducts independent testing of leading AI systems, before and after deployment, to identify vulnerabilities in safeguards and highlight where models may introduce security risks. 

  • Close collaboration with developers: Findings from AISI testing are used to strengthen model safeguards in partnership with AI companies, improving safety in areas such as cyber tasks and agent‑based behaviour. 

  • Strong national security coordination: Government experts, including the National Cyber Security Centre (NCSC) and defence science agencies, work closely with AISI to monitor developments, address cyber threats and build resilience. 

  • Targeted research investment: Through programmes such as the Alignment Project, the government funds research to understand and mitigate novel risks such as unintended autonomous behaviour or potential loss of control. 

  • Context‑based regulatory approach: Regulators are empowered to respond flexibly to new capabilities, focusing on how AI is used in real‑world contexts rather than adopting rigid, one‑size‑fits‑all rules.  

  • International leadership: The UK works with global partners to raise security standards, share scientific insights and shape responsible norms for frontier AI. 

Together, these measures ensure the UK can unlock AI’s benefits while maintaining a clear-eyed, evidence‑driven approach to security and public safety.