AI action plan for justice
Published 31 July 2025
Foreword by Lord Timpson
The Prime Minister, the Lord Chancellor, and I are committed to creating a more productive and agile state - one in which AI and technology drive better, faster, and more efficient public services.
That is why I am delighted to introduce the AI Action Plan for Justice - a first-of-its-kind document outlining how we will harness the power of AI to transform the public’s experience, making their interactions with the justice system simpler, faster, and more tailored to their needs.
This plan focuses on three priorities: strengthening our foundations, embedding AI across justice services, and investing in the people who will deliver this transformation. It aligns with the Prime Minister’s vision to build digital and AI capability across government and supports our departmental priority of delivering swift access to justice.
Since joining the Ministry of Justice (MOJ) in July 2024, I have seen real opportunities for AI to improve the working lives of our frontline staff and colleagues - and clear evidence of where it is already making a difference.
I am proud to represent a department that is fundamentally rethinking its use of technology to improve outcomes for the public and contribute to wider economic growth.
I will continue to champion our ambition for the MOJ to lead the way in responsible and impactful AI adoption across government.
This plan marks a crucial first step in delivering that ambition.
James Timpson
Minister for Prisons, Probation and Reducing Reoffending
Lead Ministry of Justice (MOJ) Minister for AI

Executive summary
Artificial Intelligence (AI) has the potential to transform our justice system in England & Wales and deliver our ministerial priorities. It can help deliver swifter, fairer, and more accessible justice for all - reducing court backlogs, increasing prison capacity, improving rehabilitation outcomes and strengthening victim services. But this opportunity must be seized responsibly, ensuring that public trust, human rights, and the rule of law remain central and that AI risks are carefully managed.
This AI Action Plan for Justice sets out the Ministry of Justice’s approach to responsible and proportionate AI adoption across courts, tribunals, prisons, probation and supporting services (referred to here as the justice system). It has been developed in consultation with the independent judiciary and legal services regulators and we will implement it in collaboration with our wider justice sector partners such as the Home Office, the Crown Prosecution Service and our trade unions. It complements wider government efforts to safely modernise public services and builds on the UK’s global strengths in legal services, data science, and AI innovation.
We will focus on three strategic priorities:
1. Strengthen our foundations
We will enhance AI leadership, governance, ethics, data, digital infrastructure and commercial frameworks. A dedicated Justice AI Unit led by our Chief AI Officer will coordinate the delivery of the Plan, with critical input from our Data Science, Digital and Transformation teams. A cross-departmental AI Steering Group provides oversight, and an AI and Data Ethics Framework and communications plan will promote transparency and engagement.
2. Embed AI across the justice system
We will deliver more effective services across citizen-facing, operational and enabling functions alike. By applying a “Scan, Pilot, Scale” approach, we will target high-impact use cases. These include:
- Reducing administrative burden with secure AI productivity tools including search, speech and document processing (e.g. transcription tools that allow probation officers to focus on higher-value work).
- Increasing capacity through better scheduling (e.g. prison capacity).
- Improving access to justice with citizen-facing assistants (e.g. enhancing case handling and service delivery in our call centres).
- Enabling personalised education and rehabilitation (e.g. tailored training for our workforce and offenders).
- Supporting better decisions through predictive and risk-assessment models (e.g. predicting the risk of violence in custody).
3. Invest in our people and partners
We will invest in talent, training and proactive workforce planning to accelerate AI adoption and transform how we work. We will also strengthen our partnerships with legal service providers and regulators to support AI-driven legal innovation and with our criminal justice partners on our collective response to AI-enabled criminality.
We are ready to deliver. AI rollout is already underway with encouraging early results. Initial funding is secured, with additional backing anticipated as we demonstrate impact. As AI technologies mature, we will refine our approach and plan based on real-world outcomes, evaluation, and feedback from staff, trade unions, partners, and the public. We are committed to acting boldly, learning rapidly, and ensuring AI adoption delivers real improvements. Together, these priorities will ensure AI is embedded in our services and transformation programmes, supported by the right foundations, and driven by a productive and agile workforce.
Image description: the image summarises the AI Action Plan for Justice, setting out the goal, benefits, three strategic priorities, cross-cutting principles and core AI products.
The opportunity
AI holds tremendous promise for addressing longstanding challenges in the justice system. Despite being a cornerstone of democracy, the English & Welsh justice system faces mounting pressure from limited access to legal services, courts struggling with high caseloads, and persistent issues in our prisons. AI’s capacity to learn from data, handle complexity, scale effortlessly, and converse naturally presents an unprecedented opportunity to enhance human judgement and tackle these deep-rooted challenges. Through the thoughtful integration of AI solutions, we have the potential to:
- Free up professional time by automating routine tasks and paperwork, allowing staff to focus on meaningful interactions and deliver justice with greater empathy and care.
- Increase system capacity through smarter scheduling of people, spaces, and resources to support prison capacity planning and court backlog reduction.
- Enable a more personalised and preventative justice system by supporting tailored rehabilitation plans or personalising victim services.
- Enable more equitable legal pathways through citizen-facing AI tools that help people navigate legal processes, resolve disputes without going to court or tribunal, or claim compensation for criminal injuries.
- Improve decision making with advanced predictive tools for risk assessments, case prioritisation, and operational planning to protect the public.
- Support economic growth by supporting the UK’s world-leading legal and LawTech sectors and creating high-value, future-ready jobs.
- Build system resilience by helping justice agencies, regulators and wider stakeholders respond to emerging threats such as AI-enabled criminality.
We envision a justice system that personalises citizen engagement, equips humans to make timely, well-informed decisions for each individual, streamlines operations to match demand, and supports a workforce engaged in meaningful work. It requires strong principles and proactive risk management in relation to ethics, data and model quality, security, privacy, supplier oversight and unforeseen operational impact, but done well, AI has transformative potential for the sector.
The Ministry of Justice (MOJ) is particularly well-placed to lead this transformation, with promising building blocks for success already in place:
- Maturing data capabilities: Our data teams have advanced significantly, delivering open-source tools like Splink and launching programmes such as BOLD and Data First, which link cross-government data to provide a fuller picture of individual journeys and richer sources for AI use.
- A thriving legal services sector: The UK is home to Europe’s largest legal services market, with 44% of Europe’s LawTech startups and a £37 billion contribution to the economy. This provides an opportunity to position us as a leader in AI-driven legal innovation.
- A forward-thinking legal framework: The Online Procedure Rule Committee (OPRC), the MOJ and the judiciary are working to improve access to justice for all by harnessing the power of modern digital technology, such as AI, in the pre-action space, across the civil, family and tribunal jurisdictions.
- Access to world-leading AI expertise: World-class AI organisations operate in the UK, including Google DeepMind, OpenAI, Anthropic, Microsoft, Scale AI and Meta AI, supported by a growing ecosystem of AI and legal technology companies. This allows the MOJ to collaborate with top-tier experts and resources to drive forward sector growth.
Our AI Action Plan for Justice
To fully realise AI’s potential, this plan is structured around three strategic priorities, which we aim to deliver over a 3-year period subject to funding:
- Strengthen our foundations: We will strengthen AI leadership, governance & ethics, data & digital infrastructure and procurement to support AI adoption and proactively manage risks.
- Embed AI across the justice system: We will apply AI to deliver our priority outcomes: protect the public, reduce reoffending, deliver swift access to justice and provide efficient enabling functions through a clear “Scan, Pilot, Scale” approach.
- Invest in our people and partners: We will invest in talent, training and workforce planning to accelerate AI adoption and transform how we work. We also plan to strengthen our partnerships to support AI-driven legal innovation, economic growth and our collective response to AI-enabled crime.
Across these priorities, we will align to cross-cutting principles:
- Put safety and fairness first: AI in justice must work within the law, protect individual rights, and maintain public trust. This requires rigorous testing, clear accountability, and careful oversight, especially where decisions affect liberty, safety, or individual rights.
- Protect independence: AI should support, not substitute, human judgment. We will preserve the independence of judges, prosecutors, and oversight bodies, ensuring AI works within the law and reinforces public confidence.
- Start with the people who use the system: We will design AI tools around the needs of users, e.g. victims, offenders, staff, judges and citizens. That means solving real problems, co-developing solutions with users, and localising services to reflect the diverse realities of justice.
- Build or buy once, use many times: Build common solutions that can be used across the system where possible, reducing cost and duplicated effort.
This Action Plan builds on the excellent work produced by government teams and partners in recent years. It complements and aligns with:
- The AI Opportunities Action Plan and the government response to it which lay out a pro-innovation and pro-safety roadmap for AI across government.
- The AI Playbook (previously Generative AI Framework for Government) and existing guidelines on safe, responsible AI use in the public sector.
- The State of Digital Government Review and Blueprint for Digital Government, which acknowledge the technological and cultural foundations needed to modernise public services.
- Collaborations led by Incubator for Artificial Intelligence, 10DS (Number 10 Data Science), GDS, AI Security Institute, Sovereign AI Unit and other specialist teams exploring AI applications for public good.
It is important to define what we mean by AI to ensure a shared understanding of its role. Throughout this Action Plan, “AI” is used as an umbrella term for tools including machine learning, large language models and emerging agentic capabilities; tools that enable machines to process data, make inferences, learn and provide recommendations in ways that traditionally required human intelligence.
1. Strengthen our foundations
To transform our services and become the best state partner for AI-driven innovation, we must lay strong foundations. This requires strategic focus on leadership, governance, ethics, data & digital infrastructure and procurement. Specifically, we will:
1.1 Strengthen AI leadership and establish a dedicated Justice AI Unit
We have appointed a Chief AI Officer to provide strategic leadership and accelerate responsible AI adoption across the justice system. Bringing extensive experience from both private-sector AI development and government innovation, the Chief AI Officer will help attract top talent, build high-performing interdisciplinary teams, and ensure ethical, effective, and accountable use of AI.
To support this, we have established the Justice AI Unit, an interdisciplinary team comprising experts in AI, ethics, policy, design, operations, and change management. It will coordinate the delivery of the Plan, with critical input from our Data Science, Digital and Transformation teams. We will embed AI solutions securely and collaboratively across our department and agencies, ensuring we maximise AI’s potential while maintaining public trust and transparency.
As part of our AI leadership approach, we are working closely with the independent judiciary, which has published its own guidance on AI. HM Courts & Tribunals Service (HMCTS) and the Judicial Office are key partners in this collaboration, ensuring that judicial needs and values are reflected in the responsible deployment of AI. We are also working closely with the Online Procedure Rule Committee (OPRC) to ensure that AI-enabled digital justice services are coherent, user-centred and legally robust.
More broadly, we will actively address AI-enabled criminality by collaborating across the justice sector through joint research and working groups focused on emerging threats.
1.2 Implement robust governance to steer and track AI progress
Robust governance is essential to ensure AI solutions are safe, transparent, lawful and aligned with our ethical commitments and strategic priorities. We have established an AI Steering Group that brings together senior leaders from across the Ministry of Justice, including policy, data, digital, security, people, legal, HM Prison & Probation Service (HMPPS), HMCTS, risk, and communications, to oversee AI initiatives and manage risks.
The Steering Group sets cross-cutting ethical, security, and operational standards; manages the departmental AI Risk Register; and provides a regular forum for reviewing progress and resolving issues. It works closely with internal and cross-government bodies, including the MOJ’s Information Security and Risk Board, the Executive Committee, and the Government AI Security Committee, to embed safe and effective AI use across the justice system.
In addition, HMCTS has established tailored governance and assurance processes to ensure any use of AI in courts and tribunals is appropriate, safe and controlled, and upholds the independence of the judiciary. This includes a Responsible AI Framework with clear principles to prevent unfair bias, protect legal rights, and support fair outcomes, while enabling transparency and accountability in how AI is used.
Recognising the rapid pace of innovation, our governance approach will remain agile. We will continuously update guidance, policies, and risk protocols in line with emerging technologies, legal requirements, and public expectations. Compliance will be monitored through existing digital and data governance processes, enhanced by DSIT guidance on AI assurance, including the upcoming AI Governance Framework from the Office of Artificial Intelligence.
1.3 Make transparency a cornerstone of our AI approach
Transparency of how AI is used is fundamental to earning public trust and ensuring the responsible use of AI. We will deliver a dedicated AI Communications Plan to proactively engage staff, partners, and the public, providing clear, accessible updates on our AI initiatives, priorities, and impacts.
Our new online hub at ai.justice.gov.uk will serve as a central point of engagement, publishing regular updates on what we are exploring, piloting, and scaling. Where appropriate, we will open source our tools and solutions to promote reuse and collaboration.
We will meet government transparency requirements by publishing relevant use cases through the Algorithmic Transparency Recording Standard (ATRS) Hub, enabling public scrutiny, supporting accountability, and encouraging responsible innovation across sectors.
1.4 Scale an AI & Data Ethics Framework to build and buy responsibly
To ensure AI is developed and deployed in a way that is safe, fair, and accountable, we will embed ethics at the heart of our approach. In partnership with the Alan Turing Institute, we have developed a publicly accessible AI and Data Science Ethics Framework. This is a practical toolkit to guide developers, policymakers, and decision-makers from inception through to deployment.
The Framework is built around five core SAFE-D principles: Sustainability, Accountability, Fairness, Explainability, and Data Responsibility. These principles underpin our broader AI adoption approach, and we will now scale up the use of this Framework, ensuring it is consistently applied by all internal teams working with AI.
Given the sensitive nature of justice data, we place a high priority on privacy and security, working with our data protection and cyber security specialists throughout our product lifecycle. We will continue to meet and exceed legal and regulatory standards, including compliance with UK GDPR, government security requirements, regular privacy audits, robust access controls, and staff training.
External AI suppliers will be required to meet our high standards. We will expect appropriate access to models and services to ensure responsible practices throughout our supply chain. Tackling algorithmic bias is a key part of this commitment. Bias can arise from data inputs, model design, or unintended use. We will monitor outputs, apply de-biasing techniques, and use representative datasets to ensure AI augments, not replaces, human judgment.
1.5 Improve data as a prerequisite for AI use cases
High-quality, joined-up data is essential for AI to deliver value across the justice system. Currently, fragmented and inconsistent records across agencies limit our ability to personalise services, manage risk, and make timely decisions. Manual processes and siloed systems can also impact victim engagement and frontline efficiency.
To fix this, we are planning a set of targeted data initiatives to improve quality, governance, interoperability and infrastructure. These include:
- Linking offender data across systems through our BOLD and Data First programmes to improve public safety, rehabilitation, youth justice, prevention and victim services.
- Continuing work with the Home Office and CPS to support seamless criminal justice system information flow and enable the Safer Streets Mission, as part of the CJS Data Improvement Programme.
- Strengthening data fundamentals by improving governance, engineering, and analytics pipelines across MOJ agencies. This will deliver better insight, faster decisions, and consistent standards across justice domains.
- Defining data standards with the Online Procedure Rule Committee (OPRC) to ensure interoperability between dispute resolution providers and HMCTS, supporting innovation and faster case resolution.
Longer term, this work lays the foundation for AI-driven transformation. In the criminal justice sector, a single, secure identity for each individual would allow AI to generate dynamic risk scores, create personalised rehabilitation plans, and recommend tailored support based on health, education, or employment history.
In the civil and family courts and tribunals, consistent, accurate and sharable data could eventually enable AI to triage, advise and nudge parties toward dispute resolution tools and fair settlements where appropriate. This could resolve legal problems much sooner and prevent them from escalating. By embedding AI into the civil justice process, access to justice is widened by making verified expert advice and triage available on demand, while still preserving the right to have any final determination made by a human judge.
Case study: Using machine learning to create a single offender identity
Having a single, consistent ID for each offender is critical to making better-informed decisions across the justice journey. Fragmented and inconsistent data across different services has made it challenging to track an individual’s journey.
We are building a real-time system linking offender data across agencies to provide a single, consistent view. This tackles longstanding issues with duplication and missing data that can compromise sentencing and rehabilitation.
The system uses Splink, open-source data linking software developed by MOJ data scientists. It applies explainable machine learning to deduplicate records and ensure accuracy. This single view will reduce admin burden, support better decision-making, and enable more advanced AI tools to enhance public safety and rehabilitation outcomes.
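Splink scores candidate record pairs using the Fellegi-Sunter model, in which each field comparison contributes an interpretable match weight - the source of its explainability. The sketch below illustrates that scoring idea in plain Python; the field names and m/u probabilities are illustrative placeholders, not Splink’s actual API or estimated parameters.

```python
import math

# Hypothetical parameters per field: m = P(fields agree | same person),
# u = P(fields agree | different people). Splink estimates these from data.
FIELD_PARAMS = {
    "surname":  (0.95, 0.01),
    "dob":      (0.90, 0.005),
    "postcode": (0.80, 0.02),
}

def match_weight(record_a, record_b):
    """Sum of log2 likelihood ratios across fields (Fellegi-Sunter model)."""
    weight = 0.0
    for field, (m, u) in FIELD_PARAMS.items():
        if record_a.get(field) == record_b.get(field):
            weight += math.log2(m / u)              # agreement adds evidence
        else:
            weight += math.log2((1 - m) / (1 - u))  # disagreement subtracts it
    return weight

a = {"surname": "Smith", "dob": "1990-01-01", "postcode": "LS1 4AP"}
b = {"surname": "Smith", "dob": "1990-01-01", "postcode": "LS2 7EY"}
print(match_weight(a, b))
```

Because each field’s contribution to the total weight is visible, analysts can see exactly why two records were (or were not) linked before the match feeds into a single offender view.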
1.6 Address infrastructure challenges to enable AI readiness
Reliable digital infrastructure is critical to effective AI adoption across the justice system. Despite progress, including improved Wi-Fi connectivity in courts and prisons, modernised IT systems, and expanded cloud capabilities, significant infrastructure gaps remain. Without addressing these, AI solutions cannot achieve their full potential.
We will build on the progress made through programmes such as HMCTS Reform and HMPPS Digital initiatives to strengthen connectivity, security, and computing capabilities across the estate. The underpinning infrastructure needs to comply with best practice for cyber security. By addressing these foundational challenges, we will ensure AI is not merely an isolated solution that doesn’t get past pilot stage, but a core enabler of efficiency and better outcomes in courts, prisons, and probation.
An example of this is the replacement of the legacy system used by probation officers to assess individuals’ risks, needs, and strengths. The new Assessing Risks Needs and Strengths (ARNS) tool will significantly improve the quality and consistency of offender management, while also creating the digital infrastructure needed to support future AI-enabled services.
1.7 Address financial and commercial barriers to procuring AI capabilities
As per ‘a blueprint for modern digital government’, there is a clear need for sustainable funding models that support run costs of digital services over several years. AI is not a one-off investment. Models must be monitored, re-trained, and improved over time. Consequently, long-term, sustained funding for AI and digital transformation is critical to realising the benefits of emerging technologies.
AI adoption must therefore be supported by a clear, multi-year funding model that ensures pilots can transition to scalable solutions. We will work closely with HM Treasury and DSIT to explore new funding mechanisms, including investment from cross-government digital transformation programmes. Without this dynamic funding approach, AI risks becoming an underutilised tool that does not move out of pilot phase, rather than a core enabler of justice reform.
As per the government’s AI Opportunities Plan, we will also focus on procuring smartly from the AI ecosystem. Innovative AI companies and startups can still face challenges in working with government, for example if they are not listed on established procurement frameworks. We will work closely with MOJ Commercial, DSIT and HM Treasury to explore flexible procurement models, engage suppliers early, and expand targeted innovation funding.
For AI, we will adopt a “test-and-learn” commercial model. Rather than commit all funding upfront, we plan to:
- Run short proof-of-value pilots of internally or externally built solutions (including partnerships with AI suppliers and SMEs), so we can validate performance in real-world conditions quickly.
- Evaluate results using clear and measurable metrics (e.g., accuracy, user satisfaction, cost-benefit, time savings).
- Scale or pivot based on evidence, extending contracts only when solutions demonstrate tangible value.
This also means streamlining procurement cycles, adopting proactive supplier engagement, and ensuring commercial frameworks are fit for AI’s unique needs. We aim to align AI procurement with the broader MOJ commercial strategy, remove unnecessary barriers, and maximise value for money.
Case study: Reverse Pitch events to scan for innovative UK SMEs[footnote 1]
To address key justice system challenges, MOJ and DWP ran a Dragons’ Den-style ‘Reverse Pitch’ event where SMEs pitched ideas to solve problems across government. Over three days in Leeds, London, and Manchester, 34 businesses, 203 stakeholders, and 57 subject matter experts came together to co-create solutions. One central challenge, ‘Reducing the Learning Curve’, aimed to streamline training for operational staff in our prisons and probation offices by balancing on-the-job demands with effective skill development.
Following the pitches, Taught by Humans, a homegrown UK company was awarded a three-month contract to develop and pilot an AI-driven interactive digital learning platform. This collaboration highlights how flexible procurement models can unlock innovative solutions from SMEs that may otherwise struggle to secure government contracts.
More broadly, government is not just a consumer of AI; it is a market shaper. Through how we fund, procure, and collaborate, we can influence the development of AI solutions that align with public values, safety, and social impact.
2. Embed AI across the justice system
AI has the potential to deliver significantly faster, fairer, and more effective services across citizen-facing, operational and enabling functions alike. It does so by freeing up time, increasing capacity, enabling more personalised interventions, facilitating access to justice, and improving strategic as well as operational decision-making. Critically, it can also improve support for victims and users of agencies like the Criminal Injuries Compensation Authority (CICA) and the Office of the Public Guardian (OPG).
More specifically, AI will play a role as we take forward the reforms following the Independent Sentencing Review. Consideration of the role of new technologies, including AI, is also part of the terms of reference for the Independent Review of the Criminal Courts.
However, this requires a strategic, phased approach to build confidence and capability over time. In line with the AI Opportunities Action Plan, we will adopt a clear “Scan, Pilot, Scale” methodology to guide responsible AI adoption:
- Scan the market continuously to identify promising AI technologies and prioritise areas where they can deliver the greatest benefit.
- Pilot targeted AI solutions to rigorously test their practicality, effectiveness, ethical implications and alignment with user needs. We will also review affordability and value for money and stop where appropriate.
- Scale proven solutions that really matter to ensure system-wide benefits.
To embed the use of AI, we are considering AI solutions as part of all our major transformation programmes and are developing a suite of core AI products that address common cross-cutting challenges across justice. These include AI products for general purpose productivity, search & knowledge management, speech & audio processing, image & visual analysis, agentic capabilities, learning, redaction and coding. AI will change how we build digital products, and we intend to follow a “build or buy once and use many times” approach for these products where appropriate. In practical terms, this means we will scan for solutions, champion interoperability, and not unnecessarily reinvent the wheel.
To embed AI across the justice system, we plan to:
2.1 Scale AI productivity tools across the system to free up time
AI-powered productivity tools have the potential to transform the day-to-day work of over 90,000 staff across the justice system, from probation officers and prison staff to policy, HR, and court operations teams. These tools can dramatically reduce time spent on routine, manual tasks like drafting, searching, note-taking, redacting, and form-filling, freeing staff to focus on higher-value work that requires empathy, expertise, and judgement.
Through extensive pilots and user testing, we have identified a suite of common, high-impact solutions with the potential to deliver substantial time savings and return on investment. These fall into five categories: Chat, Search, Speech, Automation and Coding.
2.1.1 Equip every staff member with a secure AI assistant
General-purpose AI chat assistants, such as Microsoft 365 Copilot, ChatGPT Enterprise, and government-built alternatives, can support staff in a wide range of everyday tasks: from drafting emails and summarising documents to managing inboxes, redacting information, and generating reports.
Following successful pilots across HR, policy, communications, probation and HMCTS enabling functions, we observed significant time savings and improved user experience for our workforce, particularly for staff with neurodiverse conditions, accessibility needs, or low digital confidence.
We will now launch an “AI for All” campaign, providing every MOJ staff member with secure, enterprise-grade AI assistants by December 2025, accompanied by tailored training and support (detailed in strategic priority 3). These tools will enable staff to work more efficiently, collaborate more effectively, and spend more time on human-centred tasks.
The judiciary is also committed to carefully exploring the opportunities that can be afforded by AI, while preserving the integrity of the administration of justice and maintaining public confidence. For example, AI-driven productivity tools could assist judicial office holders by streamlining tasks, including bundle summarisation, establishing chronologies, and facilitating age-appropriate drafting. Following a successful judicial Copilot trial for everyday administrative tasks, Microsoft 365 Copilot is being rolled out to leadership judges.
Case study: Adoption of safe and secure AI assistants for staff[footnote 1]
The MOJ is leading the public sector in adopting safe, secure AI assistants. Every member of staff now has access to a MOJ-secure version of Copilot Chat, with tools that help with tasks like drafting, summarising, and analysis. We are also rolling out premium versions of Copilot and are the first department to pilot ChatGPT Enterprise. This shift is already helping staff save around 30 minutes a day on average, freeing up time for higher-value work. Qualitative feedback has also been very positive, including:
- “I have ADHD, so long reports can be overwhelming. Summaries are a lifesaver. I love it!”
- “What used to take me half a day now takes 20 minutes. I’ve clawed back hours each week just by getting help with the first draft, the structure, or even just thinking through a problem.”
- “We have daily cross-team calls, and planning the rota used to take hours. I gave it the rules, team availability, and out came a full month’s schedule in minutes. Even included the reasoning behind each choice.”
- “These AI tools have given me back my headspace. I finish the day with more energy and less clutter in my brain.”
We are planning a full rollout of safe and secure AI assistants by the second half of 2025. In a department with the largest workforce in government, this is key to building leaner enabling services so that more resources can go to our stretched frontline services.
2.1.2 Accelerate insight with AI-powered search and knowledge retrieval
Justice system staff often rely on large volumes of unstructured information, including operational procedures, policy documents, case records or legal precedents, but traditional search tools struggle to surface what’s most relevant. AI-powered semantic or hybrid search understands meaning, context, and language variation, helping users quickly find the critical information they need.
Unlike standard keyword searches, semantic search understands context, meaning, and relationships between concepts. For example, prison officers could quickly review offender notes to identify risk indicators or rehabilitation opportunities, and caseworkers could swiftly locate relevant guidance or evidence, saving valuable time and improving decision-making.
This will ultimately reduce the time spent manually searching through lengthy documents, so staff can spend more time making informed decisions and providing better support to those they serve.
As resources allow, we will scale these tools across justice services to reduce inefficiencies, speed up decision-making, and improve frontline service delivery.
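To illustrate the underlying idea (not the production system, which uses large language model embeddings), the sketch below uses a hand-written synonym map and word-count vectors to show how meaning-aware matching differs from exact keyword search. All names, synonyms, and documents are invented for this example:

```python
from collections import Counter
from math import sqrt

# Toy synonym map standing in for what an LLM embedding learns automatically.
SYNONYMS = {"lawyer": "solicitor", "jail": "prison", "inmate": "prisoner"}

def normalise(text: str) -> Counter:
    """Map each word to a canonical form and count occurrences."""
    return Counter(SYNONYMS.get(w, w) for w in text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def search(query: str, documents: list[str]) -> list[str]:
    """Rank documents by similarity to the query, best match first."""
    q = normalise(query)
    return sorted(documents, key=lambda d: cosine(q, normalise(d)), reverse=True)

docs = ["prison visiting rules and guidance",
        "solicitor contact records for the case",
        "probation appointment schedule"]
# "jail" never appears verbatim in any document, but still matches via normalisation.
print(search("jail visiting rules", docs)[0])  # prints: prison visiting rules and guidance
```

A keyword search for “jail” would return nothing here; the normalisation step is a stand-in for the contextual matching that embedding models provide at scale.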
Case study: Smarter searches for probation staff
Finding the right information quickly is critical in probation work, yet traditional keyword searches often fall short. Staff were spending time experimenting with different keywords to locate case details, leading to frustrating “search flurries.”
To solve this, MOJ Data Science introduced semantic search in the Probation Digital System (launched June 2025), powered by a Large Language Model (LLM). This AI-driven tool understands context, meaning, and variations in language, recognising synonyms, misspellings, abbreviations, and acronyms, so staff now receive more relevant results from their very first search.
This improvement reduces search time, enhances decision-making, and allows probation officers to spend more time focusing on offender rehabilitation, risk management, and community safety.
Case study: Quicker access to knowledge for courts and tribunals staff
In a busy court environment, finding the right information quickly and easily is critical to ensuring efficient and compliant operations. However, frontline staff have access to hundreds of operational procedures and guidance documents, meaning it can take tens of minutes to find relevant content, with staff often resorting to asking colleagues for advice, which impacts the productivity of the whole team.
To solve this, we developed and piloted a generative AI knowledge retrieval assistant. Staff can ask questions in natural language and the tool interrogates over 300 unstructured documents before returning a simple summary, accompanied by a citation to the source document. Evaluation showed that court staff could find and digest the key information they need more quickly, ultimately increasing the pace of case administration in the justice system. Following a successful pilot, we are now exploring ways to scale the solution.
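As a simplified illustration of how such an assistant pairs an answer with a citation, the sketch below scores documents by naive word overlap; the document names and contents are invented, and the piloted tool uses generative AI rather than this toy scoring:

```python
def answer_with_citation(question: str, documents: dict[str, str]) -> tuple[str, str]:
    """Return the most relevant passage and the name of its source document.
    Relevance here is naive word overlap; a real assistant would use an LLM."""
    q_words = set(question.lower().split())

    def overlap(text: str) -> int:
        return len(q_words & set(text.lower().split()))

    source = max(documents, key=lambda name: overlap(documents[name]))
    return documents[source], source

docs = {
    "Fee guidance v2": "court fees must be paid before a hearing is listed",
    "Listing procedure": "hearings are listed once all parties have confirmed",
}
passage, source = answer_with_citation("when are court fees paid", docs)
print(f"{passage} (source: {source})")
```

The key design point shown here is that the answer is never returned without its source, so staff can always verify the summary against the original guidance.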
2.1.3 Use speech and translation AI to free frontline staff from admin burdens
Frontline staff spend significant time transcribing meetings and documenting interactions. This is time that could be better spent engaging with people and managing risk. We are piloting AI transcription and summarisation tools in probation services in Kent, Surrey, Sussex, and Wales to reduce this administrative load and improve the quality of recorded interactions.
Case study: Scanning for solutions to support probation officers with notetaking
In probation services, officers dedicate valuable time to writing case notes, which impacts their capacity to focus on rehabilitative work with offenders. High caseloads mean Probation Officers sometimes struggle to find the time to evidence and analyse complex conversations with People on Probation in a sufficiently detailed and consistent format. We are piloting AI-powered transcription and summarisation tools across probation services in Kent, Surrey, Sussex, and Wales.
Early results are very encouraging, with the tool reducing note-taking time by 50% and earning a 4.5/5 satisfaction score from officers. Freed-up time is being used for more meaningful work, better engagement, analysis, and decision-making while improving job satisfaction and reducing stress. In short, the tool is already boosting capacity, service quality, and staff morale.
Having our own validated AI transcription and translation tools also has huge potential in other areas of the justice system, such as in the courts and tribunals, minimising delays, improving access to justice and reducing reliance on costly external suppliers. For example, automated speech recognition (ASR) can speed up transcript production and lower costs where typing is still done by hand. It can also create transcripts where none currently exist, supporting the judiciary in formulating and writing decisions. The technology can also increase the accessibility and timeliness of court transcripts.
Case study: Exploring novel transcription capabilities to support the administration of justice across the courts and tribunals
HMCTS has conducted a 12-week pilot to assess how AI transcription of hearings and oral judgments could assist judges in preparing and writing decisions in the Immigration and Asylum Chamber. The technology could speed up manual transcription, enable transcripts where none currently exist, and improve public access to court proceedings. Early findings have been encouraging, and HMCTS is continuing to explore options to expand this pilot subject to funding. With a scaled transcription capability in place, there is significant opportunity for HMCTS to explore additional applications of AI, including summarisation, to support the effective and efficient administration of justice.
Current examples of AI translation pilots include AI-assisted interpretation for non-legal prison conversations and the translation of external publications such as this very document, which was translated into Welsh using an AI engine and quality-assured by the HMCTS Language Services Team.
2.1.4 Automate repetitive admin tasks with AI-powered agents
Staff across the justice system often work with multiple legacy systems, leading to time-consuming administrative tasks that take them away from frontline delivery. Navigating these fragmented platforms frequently requires copying and pasting information between systems, slowing down processes and increasing frustration. This contributes to low morale, high turnover rates, and inefficiencies that impact frontline services. Outdated technology costs the UK public sector £45 billion annually in lost productivity.
As AI matures, large language models and other systems will evolve into “agents” capable of taking multi-step actions rather than simply generating text. This opens doors to deeper automation (e.g., populating forms, cross-checking records, scheduling tasks, coding and product development).
We will take a test-and-learn approach: starting with well-defined, low-risk tasks in a sandbox environment using synthetic data. All agentic AI will operate under strict identity, permission, and accountability controls to ensure safe, auditable use.
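A minimal sketch of the control pattern described above: a hypothetical agent whose every action is checked against an allow-list and written to an audit log. The identities, tools, and case references are invented for illustration:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    """Minimal agent wrapper: every tool call is permission-checked and audited."""
    identity: str
    permissions: set[str]
    tools: dict[str, Callable[..., str]]
    audit_log: list[str] = field(default_factory=list)

    def act(self, tool_name: str, *args) -> str:
        if tool_name not in self.permissions:
            self.audit_log.append(f"{self.identity} DENIED {tool_name}")
            raise PermissionError(f"{self.identity} may not call {tool_name}")
        self.audit_log.append(f"{self.identity} CALLED {tool_name}{args}")
        return self.tools[tool_name](*args)

# Synthetic, low-risk tools of the kind a sandbox trial might expose.
tools = {
    "lookup_form": lambda ref: f"form {ref} found",
    "delete_record": lambda ref: f"record {ref} deleted",
}

agent = Agent(identity="pilot-agent", permissions={"lookup_form"}, tools=tools)
print(agent.act("lookup_form", "C100"))  # allowed, and logged
try:
    agent.act("delete_record", "C100")   # outside the allow-list: refused, and logged
except PermissionError:
    pass
print(agent.audit_log)
```

The point of the pattern is that the agent cannot take an action its identity has not been granted, and every attempt, permitted or refused, leaves an auditable trace.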
Case study: Using intelligent document processing to speed up processing of paper forms
HMCTS processes over eight million paper forms each year, with staff spending significant time manually uploading and reviewing them. This slows down case processing and takes time away from higher-value work.
We piloted machine learning and computer vision technology to extract and analyse information from paper-based forms. The technology enabled faster and more accurate processing, reduced the number of exceptions, and cut configuration costs. Subject to funding, we plan to scale this capability across our bulk scanning service.
2.1.5 Support the use of AI coding assistants to accelerate development and modernise legacy systems
AI coding assistants are already in widespread use across the software industry, with the 2023 Stack Overflow Developer Survey finding that 80% of developers were using them in their personal projects. For government teams, delaying adoption risks falling behind on productivity, developer satisfaction, and codebase maintainability.
We will support the use of trusted tools such as GitHub Copilot, Cursor, Codex CLI, and open-source options like StarCoder2, with strong guardrails in place. These assistants can help developers generate boilerplate code, explain unfamiliar codebases, and reduce time spent on repetitive tasks. They are particularly valuable for working with legacy systems where documentation is poor, or domain knowledge is thin.
We also see potential in tools like Lovable and v0 by Vercel that enable product managers and designers to turn ideas into functioning front-end prototypes, shortening the path from concept to testable product. In time, this could help multidisciplinary teams experiment and iterate faster on digital services.
In line with DSIT guidance, we will encourage responsible experimentation within safe environments and share good practice across teams. Used correctly, coding assistants can reduce technical debt, enhance developer experience, and help us deliver faster, more resilient public services.
2.2 Explore how AI can improve scheduling and make better use of resources
AI tools can quickly create the best schedules by analysing real-time data about staff, capacity, risks, and priorities. This technology has already helped the NHS schedule surgical teams more efficiently and could also help manage capacity in the justice system. Because AI can update schedules automatically as things change, like cancellations or transfers, it reduces the need for manual rescheduling, avoids wasted time, and maintains safety and service quality. We will explore how this could help with managing prison capacity and other scheduling challenges such as listing in courts and tribunals.
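The core optimisation idea can be illustrated with a deliberately simple example: a greedy algorithm fitting as many hearings as possible into a single courtroom. Real scheduling systems use far richer constraint solvers over live data; the cases and times below are invented:

```python
def schedule(requests: list[tuple[str, int, int]]) -> list[str]:
    """Greedy interval scheduling: repeatedly pick the hearing that finishes
    earliest among those that do not clash, maximising the number of hearings
    held in one courtroom. Each request is (case_id, start_hour, end_hour)."""
    chosen, last_end = [], 0
    for case, start, end in sorted(requests, key=lambda r: r[2]):
        if start >= last_end:  # no overlap with the last chosen hearing
            chosen.append(case)
            last_end = end
    return chosen

requests = [("A", 9, 12), ("B", 9, 10), ("C", 10, 11), ("D", 11, 14)]
print(schedule(requests))  # prints ['B', 'C', 'D']: three hearings fit where case A alone would block two
```

Because the whole schedule is recomputed from data in milliseconds, a cancellation or transfer simply becomes a changed input and a fresh run, rather than a manual rebooking exercise.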
2.3 Explore AI solutions for personalised education and learning
We will explore how AI-powered learning platforms can adapt educational content for staff as well as offenders. Prison education statistics show that many offenders enter the justice system with low levels of education, making reintegration and employment more difficult. It is well evidenced that poor educational attainment is a key driver of reoffending, yet access to teacher time and high-quality learning opportunities in prisons remains inconsistent.
A more personalised and scalable approach is needed to support rehabilitation and employment. AI-powered learning tools have already demonstrated their ability to provide tailored support in mainstream education. Platforms like Khan Academy’s AI tutor, Khanmigo, and Oak National Academy’s lesson planning assistant, Aila, can help to make personalised learning available at scale.
Case study: Generative AI to help prisoners learn in new ways
HMPPS has explored how generative AI could improve prisoner education through the Digital Education Content Pilots, which ran until Spring 2025. One supplier used AI to create content frameworks, which were then adapted and delivered to prisoners via our virtual learning platform. Topics ranged from geography and history to digital skills and employability. We are now looking at how similar AI tools could support tailored learning for people in prison and on probation.
2.4 Explore how public facing AI assistants could improve access to justice
Public facing assistants have the potential to enhance customer experience and improve access to justice. We are working with DSIT as part of HMT’s Transformation Fund to test and deploy AI applications in MOJ call centres with the aim of streamlining case handling and enhancing service delivery.
Chatbots could also help citizens understand their needs or navigate civil or family law to resolve their dispute without going to court, receive criminal injuries compensation guidance, or understand prison visitor rules. This would improve awareness, accessibility, and alignment with key elements of the Victims’ Code. We are developing internal products and working with the Government Digital Service (GDS) on the emerging GOV.UK Chat solution to explore how our teams can adapt it for justice use cases.
Case study: Digital Justice System AI assistant
As part of the Digital Justice System initiative, we are developing an AI chatbot which supports users in resolving their child arrangement disputes, a policy area with a vast and confusing landscape of information. Responses provide guidance on alternative routes to dispute resolution, which could support a reduction in unnecessary court applications. The chatbot is trained on GOV.UK content and is tested at scale using AI personas.
2.5 Improve policy and strategic decision making through AI
We will explore and pilot AI tools that support smarter, faster and more inclusive decision making across policy and strategy functions.
Machine-learning models can help identify patterns in complex evidence bases, including social, economic and environmental data, to flag emerging risks or highlight groups likely to be affected by proposals. Generative AI can assist in surfacing common themes from stakeholder feedback and consultation, while scenario-planning algorithms allow teams to test “what-if” options and assess potential fiscal, operational or equity impacts.
These tools can reduce the burden of synthesis and forecasting, allowing civil servants to focus on judgment, values and coalition-building, while grounding decisions in evidence and insight.
2.6 Improve decision making through non-generative AI
While generative AI is capturing much of the spotlight, there are many applications where other forms of AI are better suited. Non-generative AI, including statistical models, deep learning, and other predictive machine learning methods, remains a cornerstone of the justice system’s digital transformation.
Our longstanding capabilities in data-driven risk assessment and resource prioritisation continue to deliver tangible benefits. For example, actuarial tools maintained by MOJ Data Science routinely support human-led assessments of reoffending risk and rehabilitation programme allocation. Other deep learning models help staff estimate violence risk in custody or extract key evidence from illicit communications. We continuously monitor and evaluate these tools, while researching new methods to improve them.
2.7 Evaluate success regularly and adjust our approach as we go
The success of all AI solutions will be evaluated using clear, quantifiable metrics, ensuring alignment with department goals. For each project, metrics will be defined at the outset and success will be continually assessed with the help of AI Operations, enabling us to stop, pivot, or scale projects as necessary. Key cross-cutting metrics include:
- Departmental priority outcomes: what direct or proxy metrics can we use to estimate the solution’s impact on the outcomes we care about?
- Accuracy: how well is the AI solution performing its intended task?
- Satisfaction: will the solution improve job satisfaction for staff or customer satisfaction for service users?
- Reach and adoption: how many end users currently perform the activity, and how many will benefit from the solution?
- Time gained: how much time can be saved by augmenting the activity?
- Return on investment: does the solution provide value for money?
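By way of illustration, the reach, time-gained and return-on-investment metrics combine with simple arithmetic. The figures below are invented for the example, not targets or measurements:

```python
def annual_hours_saved(users: int, minutes_per_day: float, working_days: int = 220) -> float:
    """Reach multiplied by time gained: scale daily minutes saved to annual hours.
    The 220 working days per year is an illustrative assumption."""
    return users * minutes_per_day * working_days / 60

def roi(annual_benefit: float, annual_cost: float) -> float:
    """Simple return on investment: benefit delivered per pound spent."""
    return annual_benefit / annual_cost

# Illustrative only: 1,000 users each saving 30 minutes a day.
hours = annual_hours_saved(1000, 30)
print(f"{hours:,.0f} hours saved per year")  # prints: 110,000 hours saved per year
```

The `roi` calculation would then compare the cash value of that time against licence, development and support costs, giving AI Operations a common basis for stop, pivot, or scale decisions.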
3. Invest in our people and partners
Realising the full potential of AI in the justice system is not just a technological challenge. It demands a fundamental shift in how we work. AI has the power to transform services, build a more productive and agile civil service, and drive economic growth. But without proactive engagement with our people, trade unions, and partners, and without tackling the non-technical barriers to change, adoption will stall.
To truly make the most of the AI opportunity, we will take a more agile, collaborative, and innovative approach to ensure AI delivers real impact across the justice system. We plan to:
3.1 Attract, develop and retain technical talent
To position the Justice AI Unit as a hub for world-class expertise, we have launched the Justice AI Fellowship, a flagship programme designed to attract top AI talent into the Ministry of Justice. Inspired by the No. 10 Innovation Fellowship, this initiative will bring senior-level AI and transformation specialists into government to ensure our services benefit from the latest research, innovation and knowledge transfer. More broadly, we will also recruit specialist AI, digital and data science skillsets for productionising capability, such as DevOps and MLOps talent, to support the transition from pilot to scale.
But hiring top AI talent into the Civil Service remains a significant challenge, as noted in the government’s AI Opportunities Action Plan. AI specialists in the private sector command higher salaries, and the lengthy government hiring process makes recruitment harder. We will continue our work with HR, DSIT and Government People Group (GPG) in Cabinet Office to create faster processes, competitive pay, and clearer career pathways, ensuring we can recruit and retain the AI professionals needed in both technical and non-technical roles, such as product managers and AI policy experts.
There is also a growing shortage of skilled AI professionals, and the UK faces a particular challenge in meeting this demand. To address this, the AI Opportunities Action Plan set out a plan for the Government to work with universities to develop new courses co-designed with industry. However, building academic expertise is only part of the solution. We also need to ensure that graduates have clear pathways to enter the workforce and apply their skills in critical areas like the justice sector.
To meet this need, we are launching the Justice AI Academy internship programme. This initiative will attract top graduates from the nation’s leading AI and data science programmes, offering them real-world opportunities to tackle complex challenges in justice. By working alongside MOJ staff and applying their skills in practical settings, the next generation of AI experts will be empowered to shape the future of justice services and apply for permanent positions where available.
3.2 Strengthen AI awareness and skills across all staff
As AI rapidly evolves, it is essential that our existing digital and data workforce, including software engineers and data scientists, keeps pace with new advancements. Meaningful human control over AI requires algorithmic literacy, insight into risk and ethics, sound judgement to balance AI advice with context, and clear communication skills to escalate concerns and guide users. To support this, we will scale an AI Talent Accelerator, a hands-on learning course designed to develop machine learning expertise and accelerate general AI adoption.
Led by the Justice AI Unit in collaboration with the Government Digital Service (GDS), the course will take a practical, hackathon-style approach, enabling existing digital staff to apply AI techniques to real justice challenges. This initiative will strengthen AI capabilities and help fill critical roles across the justice system.
AI also presents an opportunity to close the digital skills gap, which remains a significant challenge for our department. Some 36% of MOJ staff report not having confidence in using digital tools, and even fewer have AI knowledge. Without action, the justice system risks lagging in AI adoption. We will therefore invest in training for leaders, policymakers, lawyers and frontline staff, focusing on both the benefits and risks of AI, and ensuring its responsible and legal application.
Additionally, we recognise that staff may be resistant to AI if they lack clarity on its purpose or fear negative impacts on their roles. To address this, we plan targeted AI awareness campaigns, engage with trade unions, involve frontline users early in solution development, and offer transparent pathways for feedback. A network of AI Champions will also be established to provide peer-to-peer support, guide effective tool usage, and escalate challenges to the Justice AI Unit, strengthening trust and encouraging domain-specific AI exploration across our 95,000-strong workforce.
Effective AI adoption requires closer collaboration between policy, digital, data, AI, and operational teams. Embedding policymakers and operational experts alongside AI and digital teams will help align technology initiatives with policy requirements and practical implementation needs.
3.3 Take a proactive, employee-led approach to the future of work
AI adoption often fails not because of technological limitations, but because people and processes are not supported to adapt. With AI evolving rapidly, we recognise the concerns that come with such change. To ensure a smooth transition, we will work closely with our Director General for People and Capability, as well as with our trade unions, to support staff whose roles are evolving and to open up the new opportunities that AI will create. This will include training initiatives, proactive job redesign, and incentives that promote a people-centred approach to technology adoption. AI should enable human talent, skill and judgement, not replace it.
To encourage engagement and a two-way approach, we will provide multiple channels for staff to share ideas, concerns, and feedback on AI. In addition to regular engagement with our recognised trade unions, these include routine staff forums, a dedicated email mailbox for direct contact with the Justice AI Unit, and engagement through our AI Champions Network. By offering these forums for engagement, we empower staff to suggest improvements, flag issues promptly, and help shape the effective and responsible use of AI in our organisation.
3.4 Support sector growth by becoming the best state partner for AI
The UK’s legal sector is a global leader, contributing £37 billion to our economy and hosting 44% of Europe’s legal tech startups. The Ministry of Justice is committed to further enhancing this strength by actively supporting AI-driven innovation.
Through community engagement events, hackathons, and reverse pitch sessions, we aim to build ever stronger partnerships between government and innovative startups. We will also continue to build on our close working relationships across the legal sector.
Unlocking justice data is key to advancing the UK’s AI ambition. Building on current efforts that link datasets for research and policy, we plan to work with DSIT to prioritise and release high-value data through the National Data Library. Guided by clear standards, this will create an attractive environment for researchers and businesses, accelerating innovation and improving justice outcomes.
We aim to play our part in fostering an environment where AI companies in our sector can scale here in the UK rather than relocating or selling to foreign buyers prematurely. By providing a stable policy framework, facilitating access to public sector data where appropriate, and running flexible procurement pilots, we will help foster homegrown AI solutions that can stand on the global stage.
Case study: UK AI firm, Luminance, is leading the way on legal services AI[footnote 1]
AI can quickly analyse large volumes of evidence, such as documents, video footage and digital data, identifying relevant information faster than human review. This reduces delays in case preparation. For instance, using Luminance’s AI-powered platform, the criminal defence team “The 36 Group” reviewed 10,000 documents to prepare the defence argument for a high-profile criminal trial, in the first instance of AI being used at the Old Bailey. The defence team estimated they saved £50,000 in costs during the disclosure phase and reduced their review time by four weeks.
3.5 Work with regulators to support responsible AI adoption in the legal sector
Rapid advances in AI have the potential to outpace existing laws and regulation, creating uncertainty for legal professionals. In line with this Government’s pro-growth regulatory approach, we will continue working closely with key regulators, including the Legal Services Board (LSB), Solicitors Regulation Authority (SRA), CILEX Regulation, and the Bar Standards Board, to proactively test, understand, and guide responsible AI use. We will also extend our training on AI to regulators where appropriate.
We are keen to ensure that regulation remains proportionate, evidence-based, and flexible enough to evolve alongside emerging AI technologies. This will help to safeguard public trust, uphold the rule of law, and provide the regulatory stability and clarity that allows innovation to flourish.
Case study: The world’s first AI-driven law firm[footnote 1]
The SRA has recently approved the world’s first AI-driven law firm, Garfield AI, which helps businesses recover small debts of up to £10,000, both pre-action and by bringing appropriate court action, using the county court’s existing digital data standard for issuing claims. The example demonstrates the potential of AI to enhance innovation in the LawTech sector, the potential for pre-action AI to reduce demands on court capacity, the importance of data standards for digital interoperability, and more broadly the power of AI to enhance access to justice.
Our ambition is to become the best state partner for innovators anywhere in the world. By following through on this commitment, we will maintain safety, foster public confidence, and unlock new possibilities for the future of legal services in the UK and around the world.
Our roadmap for delivery
The three strategic priorities will be delivered concurrently over a 3-year period subject to funding, aligning with the “Scan, Pilot, Scale” approach. We will develop and maintain an integrated AI roadmap to keep the system aligned and refine it iteratively, including as we take forward our response to the Independent Sentencing Review and the Independent Review of the Criminal Courts. During implementation, we will collaborate with wider justice-sector partners such as the Home Office, the Crown Prosecution Service, and legal services regulators.
- Year 1 (from April 2025): We will establish strong foundations, build capability, and deliver early wins. This includes rolling out enterprise-wide productivity tools to reduce administrative burden and piloting domain-specific AI applications in areas such as chat, search, and transcription.
- Year 2: We will scale what works and embed AI more deeply into our transformation programmes, reinvesting the time and insight gained into better frontline delivery and decision making.
- Year 3: Building on early success, we will deliver scaled, interoperable solutions across the system. AI will be integral to our way of working, shaping frontline delivery, operations and enabling functions alike and propelling us toward our long-term vision.
This will then enable us to take bigger steps towards a future where accurate data can be linked to individuals and across the justice system, fully exploiting the predictive and decision-aid opportunities generative AI has to offer.
Conclusion
Realising this vision demands bold leadership, sustained investment, and meaningful collaboration. By strengthening our foundations, embedding AI strategically, and transforming our way of working, we can build a justice system that is not only fit for today but truly equitable and prepared for the challenges of the future.