Guidance

Ofsted's use of AI

Updated 21 October 2025

Applies to England

Introduction

We are committed to ensuring that our use of artificial intelligence (AI), including generative AI:

  • complies with legal obligations
  • mitigates risk to the organisation, providers, the public and employees
  • is responsible and in accordance with all relevant government and Civil Service guidance

We acknowledge the recent rapid advances in AI technology and their impact on our work. AI has the potential to help us work more effectively to deliver our strategic priorities. We are guided by our values of professionalism, courtesy, empathy and respect.

Our published approach to AI explains that AI can bring benefits to our work, including in assessing risk, automating tasks, and generating new insights.

This document sets out how we apply that approach through our work. It covers our own use of AI, making clear what we consider to ensure we use AI lawfully, ethically and responsibly.

The principles set out apply to:

  • all Ofsted staff and contracted suppliers
  • all aspects of AI use by Ofsted or on behalf of Ofsted
  • all Ofsted systems and assets

This statement relates to our own use of AI. We have separately published a statement on how we look at AI during inspection.

Definitions

AI is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.

Generative AI is a type of AI that can interpret and generate outputs, including text and images.

Machine learning is a subset of AI and refers to the development of digital systems that can improve their performance on a given task over time through experience.

Guiding principles

Our approach to AI is consistent with the government’s framework, in particular the 10 principles that guide the use of AI in government organisations (currently the AI Playbook for the UK Government).

This means we make sure we understand which AI systems are being used and what their limitations are.

We ensure there is meaningful human control of AI processes, and appropriate human oversight of the use of AI. Human oversight ensures that AI use does not undermine human autonomy or directly cause other adverse effects.

We take responsibility for managing complete AI lifecycles. This includes scrutinising and testing AI in development before we consider implementing it, and testing its accuracy and efficacy before deployment and throughout the AI lifecycle.

Before approving the use of any AI, we record the intended benefits in terms of time, cost and quality. We scrutinise security and information management controls, workforce training plans and examples of permitted and prohibited use of the AI.

We work openly and collaboratively internally (for example, involving legal and technology expertise from the outset) and externally (including working across government as appropriate).

Lawful, ethical and responsible use of AI

We use AI lawfully, ethically and responsibly.

Our use of AI complies with our existing legal obligations and follows His Majesty’s Government’s guidance.  

We follow data protection requirements and obligations at all stages. This means that whenever we propose to use AI to process personal data, we initiate a data protection impact assessment and review this appropriately. Accordingly, we follow the standards set out in our Personal information charter when collecting, holding or using personal information.

This extends to any AI system into which information is entered. We assure ourselves of what happens to the information once it has left our possession, including whether it could be compromised or shared with third parties.

We consider our obligations under the Human Rights Act 1998 and Equality Act 2010 when developing and using AI. We also consider the Public Sector Equality Duty (PSED) and initiate and review equality impact assessments appropriately.

When we use AI to help us with our statutory powers (for example, maintaining registers, inspecting, reporting), we are particularly mindful of any regulatory or public law standards that apply. These include procedural fairness, proper exercise of discretion and freedom from bias.

We never allow AI systems to impede our ability to inspect fairly and impartially. It is vital that AI does not undermine or compromise inspectors’ judgements, or our ability to respond flexibly and empathetically to the concerns of the public or providers.

We are aware that complex ethical considerations can arise through the use or proposed use of AI. Ethical considerations can often overlap with legal and data protection obligations. Our use of AI aligns with the ethical themes set out in the government framework (currently the AI Playbook for the UK Government), which are:

  • safety, security and robustness
  • transparency and explainability
  • fairness, bias and discrimination
  • accountability and responsibility
  • contestability and redress
  • societal wellbeing and public good

We consider the environmental and sustainability impacts of the use of AI tools. We aim to use AI models, solutions and processes that have a lower impact on the environment. This may involve a preference for smaller models or reuse of models that have already been trained.

Keeping data private and generative AI tools secure

We follow the standards set out in our Personal information charter when collecting, holding or using personal information. We follow all applicable UK and EU data protection laws in how we treat personal information.

We minimise our use of real personal data when developing and testing AI solutions. We only use personal data once the appropriate data protection impact assessments, equality impact assessments and transparency statements are in place.

Before using an AI tool, we make sure these datasets are appropriately secured. This includes processing sensitive or personal data in accordance with data protection legislation. Where necessary, we also store the algorithms, code and prompts used in AI tools securely.

We don’t use insecure or public tools to process personal or sensitive information. We don’t input classified or sensitive information, or information that reveals the intent of government (that may not be in the public domain), into public tools. This involves maintaining a register of approved and unapproved uses of AI (described below) and, in accordance with our security and acceptable use policy, blocking staff access to websites and software.

Transparency and register of Ofsted’s use of AI

We understand what AI we are using and are appropriately transparent when we use AI in a way that affects individuals and organisations. Through our governance and oversight of the use of AI, we:

  • make it clear when we are using AI in ways that affect individuals and organisations, for example by using data protection impact assessments and privacy notices
  • directly communicate and explain our intention to use AI at the point when we collect data from the public
  • make clear what data we are using, along with the AI algorithms and models applied (unless transparency could undermine our inspection and regulatory processes and create unintended consequences)
  • make information available on the benefits of the AI being used
  • describe how we monitor and evaluate the AI we use
  • describe how AI outputs affect our decision-making processes

We maintain an AI Register. We also apply the Algorithmic Transparency Recording Standard, publishing details about our use of AI where appropriate.

Where possible, we specifically disclose when we have used AI substantially in producing a published document.

Roles and responsibilities

We recognise that we are responsible for the way we use AI. Accordingly, we have set out clear roles and responsibilities in relation to AI.

Our AI Committee is responsible for implementing an AI strategy. It reviews and approves AI use cases from a strategic, legal, security, data protection and ethical perspective.

Each tool or process using AI has an internal, senior AI owner, who is responsible for ensuring that it is developed, implemented and used according to the approved use case.

Staff are responsible for ensuring that they use AI according to the approved use case and that the output of the AI is sufficiently accurate for the intended task.

We ensure our workforce is appropriately skilled and trained in using AI.