How Ofsted looks at AI during inspection and regulation
Published 27 June 2025
Applies to England
What is AI?
‘Artificial intelligence’ (AI) is an umbrella term for a range of technologies based on algorithms (instructions that tell a computer what to do and how to do it) that are designed to complete tasks that previously required human thinking.
The UK government has made AI a core part of delivering its missions, as set out in its AI Opportunities Action Plan. In line with this, as we explained in our policy on our approach to AI, we are already exploring how AI can help us to make better decisions and work more efficiently, while ensuring we use it responsibly and fairly.
This statement builds on the position set out in our policy, and sets out the broad principles for how we will consider uses of AI encountered on inspection. This document does not represent a final position. Ofsted’s approach to AI will be updated as our understanding of the use of AI in education and care settings continues to develop.
Statement on how Ofsted looks at AI during inspection and regulation
Ofsted supports innovation and the use of AI where it improves the education and care of children.
We do not look at AI as a stand-alone part of our inspections, and we do not directly evaluate the use of AI or any specific AI tool. However, inspectors can consider the impact that the use of AI has on the outcomes and experiences of children and learners. This includes both how providers themselves use AI and how they respond to the use of AI by others – for example, parents, children and learners.
Similarly, we do not consider the use of AI in and of itself in our regulation practice. However, our evaluation of the impact of AI use in the settings we regulate may help inform any enforcement action.
We do not require our inspectors to actively look for AI use, and their inspection reports will not refer to AI unless it is crucial to their broader inspection and regulatory decisions. If inspectors find that a provider’s use of, and response to, AI has a significant impact on children and learners, they will record it in the same way as any other evidence gathered during inspection.
We do not expect inspectors to understand how AI software has been designed – any more than, for example, they need to understand the programming of a provider’s firewall software to evaluate their data security arrangements.
Expectations regarding the use of AI in the settings we inspect and regulate
We do not expect or require providers to use AI in a certain way, or to use it at all. At this stage, we do not have the evidence we would need to define good use of AI for the purposes of inspection or regulation.
However, providers should be aware that some children or staff members are likely to be using AI in connection with the education or care they receive or provide – for example, pupils using it to help complete homework, or support staff using it to help take case notes.
Any evaluation of the use of AI will ask whether the provider has made sensible decisions – the same as we would with any other tool. Inspectors may ask leaders how they make sure that any use of AI supports the best interests of children and learners. For example:
- if a school uses AI to identify causes of absence, inspectors may consider how it forms a part of the approach to tackling absence
- if a local authority uses AI-generated summaries of child protection conferences, inspectors may want to understand how it makes sure these are accurate
Use of AI by children and learners
Where relevant, inspectors will explore how leaders make sure that any use of AI by children and learners, whether at their setting or at home, is in the children’s and learners’ best interests.
For example, inspectors may look at the school’s or provider’s policy on pupils’ use of AI in lessons, or at home, to complete homework and coursework assignments. In social care settings, they may look at how staff make sure that children’s use of AI on personal devices is in their best interests. If children or young people are using AI inappropriately, inspectors may evaluate how the provider has responded and addressed the impact of this.
Evaluating the specific risks of AI
Like any new technology, AI presents both opportunities and risks in education and social care settings. While the risks associated with AI will not be evaluated separately in our inspections, they will be addressed when they have implications for areas that are already considered. For example:
- Data protection: many applications of AI use large amounts of data, which can include personal data. When appropriate, inspectors may ask questions similar to those they ask about any other instances of the collection, storage and processing of personal data.
- Safeguarding: AI can pose new and unique safeguarding risks. Inspectors will give appropriate weight to this as part of their more general consideration of a provider’s safeguarding culture.
- Bias and discrimination: a defining characteristic of AI is that it can make decisions or create outputs based on data without direct human intervention. This introduces a risk that the AI will perpetuate bias and discrimination present in the data that it processes or has been trained on. Despite the new context, the risk of bias and discrimination, and the measures a provider takes to mitigate it, are already part of what inspectors consider when collecting evidence.
Inspectors may ask what steps the provider has taken to make sure their use of AI properly considers these risks. Were those risks considered when introducing the AI application? Do they have any assurance processes to identify emerging risks? Any evaluation that Ofsted makes is about the provider’s decision-making, what they have considered, and the impacts on children and learners, not about the tool itself.
Inspectors only need to consider AI when it is relevant to something specific in their evaluations, such as the examples above. Some uses of AI will not be relevant to the inspection and will not be considered by inspectors – for example, the fact that Google Maps uses AI to give users directions. And many providers, children and learners will not use AI at all.