‘The biggest risk is doing nothing’: insights from early adopters of artificial intelligence in schools and further education colleges
Published 27 June 2025
Applies to England
Executive summary
The launch of ChatGPT in November 2022 dramatically increased public access to generative artificial intelligence (AI) and brought it into the mainstream. The availability of publicly available AI tools such as Microsoft Copilot and ChatGPT has also sparked widespread interest and discussion about the role that generative AI can play in education.
The UK government is ambitious for AI and views it as a fundamental part of its mission to break down barriers to opportunity for children and young people.[footnote 1] The Department for Education (DfE) has stated that:
If used safely, effectively and with the right infrastructure in place, AI can ensure that every child and young person, regardless of their background, is able to achieve at school or college and develop the knowledge and skills they need for life.[footnote 2]
The UK government’s AI Opportunities Action Plan sets out expectations from the Department for Science, Innovation and Technology for AI to improve education and ensure that regulation supports innovation. Regulators, including Ofsted, will be required to publish annually how they have enabled AI innovation in their sector.[footnote 3]
Generative AI can be used to streamline administrative tasks, plan lessons and support assessment.[footnote 4] This has made it particularly attractive for reducing teacher workload, so that teachers can focus on delivering high-quality teaching and on working directly with pupils. Although the UK government has identified education as an area that can benefit from AI, adopting it in schools and further education (FE) colleges faces considerable challenges. Half of teachers in England responding to a DfE survey now use generative AI tools.[footnote 5] However, of those who do not use them, 64% say they do not know enough about AI to use it in their role and 35% are concerned about the risks, particularly around data privacy, bias, safeguarding, and the ethical and responsible use of AI.[footnote 6]
The use of AI in education is still new and developing, and largely experimental.[footnote 7] AI can certainly save teachers time, and there is increasing evidence of its impact on the process of teaching and learning.[footnote 8] However, there is no conclusive, reliable evidence about its benefits and limitations, particularly its ability to lead to gains in knowledge.[footnote 9] A recent analysis of 143 literature reviews of AI in education concluded that:
The effectiveness of AIED to improve learning outcomes remains far from conclusive, especially in the long-term. As we currently stand, most studies are explorative, short-term and in limited domains [for example] language and academic writing.[footnote 10]
There is also no clear guidance yet about how to measure the impact of AI in education to show whether it is effective at improving educational outcomes or what the best measures of success would be.[footnote 11]
The DfE, therefore, commissioned Ofsted to carry out a study on AI in education to investigate how ‘early adopter’ schools and FE colleges are embedding AI to manage risks, support teaching and learning, and streamline administrative tasks and processes. By ‘early adopter’, we mean ‘someone who is one of the first people to start using a new product, especially a new piece of technology’.[footnote 12]
Our study was carried out in 2 stages. The first stage involved conversations with experts and a review of the emerging evidence on AI use in education. This helped inform the research questions for the second stage, which involved online interviews with senior leaders and those leading AI adoption from 21 schools and FE colleges in England already invested in using AI.[footnote 13] The evidence we collected from the interviews provides insights into leaders’ reasons for adopting AI and shows how they have navigated and overcome some of the obstacles. This information may help other schools and FE colleges when considering their own approaches to using AI.
Our findings show that adoption of AI is complex and has many aspects to it. We can see that, in deciding to adopt AI, providers take different factors into account, depending on the ways they want to use it. These can range from using AI to streamline administrative tasks to allowing pupils to use it in a direct and interactive way. Other factors such as differences in adoption readiness, existing experience of digital technology and the availability of resources also affect AI use. For example, schools already committed to using digital technology as part of teaching and learning will have staff expertise and experience of using EdTech and the hardware needed to support the integration of AI.[footnote 14]
It is important to distinguish between these different aspects of AI and how, when and why they are used. Doing so helps us understand how AI is used in different educational settings. It can also suggest how future policies and support mechanisms might be tailored to address the unique challenges and opportunities these different aspects of AI offer.
All the leaders we spoke to were curious and cautious in their adoption of AI. The AI landscape is still changing rapidly, and they need to balance innovation and risk. Typically, leaders looked beyond AI as a shiny new product, and none viewed it as a cure-all for education. Leaders had found ways to integrate AI into processes that they felt were likely to be beneficial for staff, learners and pupils in their college or school.
Most schools and colleges had an AI champion who was instrumental in getting senior leaders to embrace AI and bringing staff on board. In most cases, the champion was a teacher who had previous relevant experience or expertise in technology and AI. In other cases, it was someone who simply had a keen interest that was further fuelled by the release of publicly-available AI tools such as ChatGPT. AI champions typically created a ‘buzz’ around AI and played a vital role in demystifying it so that staff began to understand what it was and how they could use it. They used their expertise to address staff anxieties and build confidence so that staff did not feel overwhelmed. AI champions could determine what help teachers needed and show how AI could be used for this purpose, rather than using AI more generically. In larger schools and FE colleges, and multi-academy trusts (MATs), AI leadership brought together their data management teams, IT systems managers and curriculum leads.[footnote 15] This kind of structure recognised that, unlike other forms of technology, AI requires skills and knowledge across more than one department.
Senior leaders made sure there was a clear vision for AI which prioritised safe and ethical use by staff and pupils. They tended to take initial small steps to explore the potential of AI before adopting it across their school, college or MAT. Leaders frequently made sure that teachers had the space and time to experiment and learn how AI could support and enhance their own practice. This created a culture of openness and trust that encouraged innovation.
The use of AI tools in these providers was divided between those who told us the initial reason for introducing AI was to reduce workload, and those who wanted to use it to directly support pupil and student learning. However, we also found that the use of AI often shifted with time. Leaders were rarely prescriptive about the tools teachers could use, although many maintained a list of approved AI tools. Interestingly, a few leaders had already developed, and were testing, their own AI chatbot, while others were in the process of doing so.[footnote 16] Several leaders also highlighted how AI allowed teachers to personalise and adapt resources, activities and teaching for different groups of pupils including, in a couple of instances, young carers and refugee children with English as an additional language.
Commonly, the leaders we spoke to were clear about the risks of AI around bias, personal data, misinformation and safety. They had different mechanisms and procedures to address these. Some had a separate AI policy, while others had added AI to relevant existing policies including those for safeguarding, data protection, staff conduct, and teaching and learning. The pace of change meant that many leaders were updating their AI policies as often as monthly. Importantly, these leaders encouraged regular and open discussion about AI between staff and with pupils to mitigate some of the risks associated with AI. This included developing their curriculum to teach pupils about the advantages and disadvantages of AI and how to use it safely.
Despite some of the perceived strengths of AI, leaders were also clear about several areas that they were looking to develop further. For instance, leaders regularly described their vision for AI to enhance learning or reduce teacher workload, but most were at the early stages of developing a longer-term strategy for AI that set out how it was integrated into their curriculum. Leaders had not yet thought systematically about integrating AI with pedagogy because of the rapid pace of change and because there are still not many AI tools tailored to individual school or college contexts. Some leaders had yet to think strategically about what success with AI looked like or how to evaluate its impact.
These findings provide insights into how early adopters have navigated the challenges of AI and what they see as the benefits of using it. They have helped to inform Ofsted’s own position on AI during inspection. Our statement ‘How Ofsted looks at AI during inspection and regulation’ makes it clear that AI is not a stand-alone part of Ofsted’s inspection and regulation practice. Inspectors will not directly evaluate the use of AI, nor any AI tool. However, when they come across the use of AI, inspectors will consider not only how it is used by providers, but also how it is used in a provider’s setting by others (including staff, parents, pupils and learners). We will also use the findings of this research to develop training for inspectors, to ensure that they are able to account for the use of AI when considering experiences and outcomes in this way.
This small-scale research has been an important first step to enable AI innovation in the sector. However, the study does not reflect the majority of schools, MATs and FE colleges that are yet to adopt AI. We only spoke to leaders and do not know the views and experiences of teachers, pupils and learners. Further research with a wider group of stakeholders could provide more in-depth understanding of the way AI is used, the impact it can make, and the implications for our inspection and regulation practices.
Introduction
Since generative AI became publicly available, its adoption in schools and colleges has been met with both enthusiasm and concern by education experts, policymakers and practitioners. Generative AI has been perceived as having the potential to enhance learning and reduce administrative burdens.[footnote 17] However, it is also seen as a risk to academic integrity and cybersecurity.[footnote 18] Consequently, the DfE’s call for evidence on the use of generative AI highlighted some of the ways in which schools and FE colleges currently use AI, the challenges they face and the impact its use has on them. The DfE has subsequently provided guidance on AI use in education. The guidance emphasises the importance of curriculum and responsible AI integration. It also lays out the relevant legal responsibilities.[footnote 19]
The UK government’s AI Opportunities Action Plan has identified education as an area where AI could have a positive impact. Recent initiatives have further highlighted the growing importance of AI in education. In August 2024, the DfE announced a £4 million investment to develop a set of AI tools for different ages and subjects to help manage the burden on teachers for marking and assessment.[footnote 20] The DfE has also funded development of Oak National Academy’s ‘Aila’ (AI lesson assistant) and developed training materials on AI with the Chiltern Learning Trust and the Chartered College of Teaching. These are in addition to a new EdTech Evidence Board project funded by the DfE and led by the Chartered College of Teaching. This will evaluate available evidence submitted by AI developers about the effectiveness and impact of their products, as well as whether the products are ethical and comply with data protection laws.
Despite the hype surrounding it, AI adoption in schools is still at an early stage and adopters remain in the minority. For instance, 69% of UK teachers who responded to a 2024 Bett survey said their school had not implemented AI, and 32% of school and college leaders in England are not considering any changes to account for AI.[footnote 21] However, a survey by the National Literacy Trust found that the proportion of 13- to 18-year-olds who said they had used generative AI rose from 37% in 2023 to 77% in 2024. The survey also found that, although nearly half (47.7%) of pupils using AI usually added their own thoughts to what AI showed them, most (79%) did not do this when they used it for homework.[footnote 22] These surveys suggest that, although there is enthusiasm for AI among individual teachers and students, its adoption by school and college leaders is not keeping pace with teacher, pupil and learner use.
As the use of AI in education increases, we need to better understand how schools and colleges are using this technology and managing the risks it poses for pupils, learners and staff. We have to know what impact it is having on a range of outcomes. The DfE, therefore, commissioned Ofsted to carry out research on AI in education. This is intended to highlight the practice of early adopter schools and FE colleges in embedding safe, ethical and responsible AI use, which other educational settings can refer to, if and when they choose to develop their own AI practice.
Ofsted is gathering evidence about how AI is being used in schools and FE colleges. This research will add to Ofsted’s understanding of the mechanisms that support the adoption of AI. It will fill gaps in our knowledge around the use of AI and its impact on pupils and staff. Ultimately, it will inform our inspection and regulation practice, as well as the guidance and training we develop for inspectors in this rapidly evolving field.
The terms of reference provide further details on the aims and objectives of the study. The research covers the following areas:
- How are schools and colleges using AI in administrative processes and to support teaching and learning?
- What role do leaders play in embedding and supporting the use of AI?
- How have schools and colleges approached introducing and using AI, and what have been the challenges, barriers, successes and benefits?
- How are schools and colleges monitoring the intended and unintended impacts of AI?
- How are schools and colleges governing the use of AI and managing the risks to staff, pupils and learners?
Research methods
This was a small-scale qualitative study in 2 phases. In the first phase, we carried out a rapid review of the emerging literature on generative AI to identify its potential benefits in education and the main barriers and challenges schools and colleges may need to overcome. We also reviewed international legislation, policies and guidance related to AI in education, and spoke to international inspectorates and academics with knowledge of AI in education.
The literature review and conversations with experts helped inform the questions for the second stage of the study. This involved online interviews, in the spring term 2025, with leaders of ‘early adopter’ MATs, schools and FE colleges who had responsibility for the adoption of AI. These included leaders from maintained schools, academies and independent schools.
The leaders we interviewed were all enthusiastic about the benefits of AI and had decided to pilot generative AI in their school or college soon after ChatGPT was made publicly available. The interviews allowed us to capture their AI journey and, informed by the literature review, develop our understanding of the following:
- leadership of AI: the role of leaders in enabling and embedding AI use
- governance of AI: how leaders make sure staff, pupils and learners use AI ethically, safely and responsibly
- use of AI: how staff use AI inside and outside the classroom
Please see Appendix B for more details on the research methods and literature review.
Research literature
This section of the report outlines what we know from the literature about the potential benefits and disadvantages of AI in education, as well as the barriers and challenges to embedding and using AI tools. It also reports on how other countries are approaching AI and responding to some of these challenges.
What is artificial intelligence?
AI is not a new concept. The term was first used in the 1950s to describe machines capable of performing tasks that require human intelligence.[footnote 23] What is new is the introduction of generative AI and the public availability of generative AI tools such as Microsoft Copilot and ChatGPT.
AI is not a single thing. It is an umbrella term that describes a range of technologies and methods such as machine learning, natural language processing, data mining, neural networks and algorithms.[footnote 24] The UK government defines AI systems as ‘products and services that are “adaptable” and “autonomous”’.[footnote 25]
Generative AI refers to AI platforms designed to create new, unique responses to users’ requests. It uses a vast collection of works and other sources to create text, images and videos that can be indistinguishable from those created by humans.[footnote 26] However, generative AI does not understand anything. Unlike a traditional search engine such as Google, it is a predictive tool. It generates an answer based on patterns in the data it has learned from, rather than by simply retrieving existing information. AI’s ability to create new responses makes it susceptible to generating inaccurate or misleading outputs presented as fact. These ‘hallucinations’ can happen when AI misinterprets data, has difficulty responding to ambiguous prompts, lacks sufficient context or uses biased data.[footnote 27] For example, if AI has learned that 85% of employees in a company are male, it will predict that the perfect new employee for the company should also be male.
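To illustrate the point about biased data in deliberately simple terms, the following short Python sketch (a hypothetical example, not a real AI system) shows how a predictor that only copies the most common pattern in its training data will reproduce that pattern, and its bias, in every prediction it makes:

```python
# Hypothetical illustration: a naive pattern-based predictor reproduces the
# skew in its training data rather than 'understanding' anything about it.
from collections import Counter

# Historical records in which 85% of employees are recorded as male
training_data = ["male"] * 85 + ["female"] * 15

def predict_new_hire(records):
    """Predict by copying the most common value seen in past data."""
    most_common_value, _ = Counter(records).most_common(1)[0]
    return most_common_value

print(predict_new_hire(training_data))  # prints 'male': the bias in the data becomes the prediction
```

Real generative AI models are vastly more sophisticated than this, but the underlying issue is the same: their outputs reflect patterns in the data they were trained on, including any biases in that data.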
Generative AI includes chatbots and virtual assistants. It has been developed to answer test questions, write essays, translate and summarise texts, provide formative feedback, and even generate lesson plans tailored to individual pupil needs. It can also provide individual teaching that adapts to pupils’ pace of learning and helps teachers understand how learning happens.[footnote 28]
Potential benefits and risks of AI in education
One of the most frequently cited benefits of AI in education is that it reduces teachers’ workload. By automating typical teacher tasks such as lesson planning, marking and resource creation, AI can give teachers more time to concentrate on aspects of teaching that have the most direct impact on pupil engagement and learning, such as producing high-quality resources and learning materials.[footnote 29] A recent trial studied the impact on teacher workload of using ChatGPT for lesson planning and resource creation. The study found that AI could reduce the time teachers spent on planning and administration by 31%, equivalent to 25 minutes per week.[footnote 30]
AI-driven platforms have also shown that they can support formative assessment and feedback.[footnote 31] These systems mark pupils’ and learners’ work and give detailed individual feedback that helps them to understand their progress and areas for improvement. Evidence also suggests that intelligent tutoring systems and chatbots can analyse real-time data on pupil performance and engagement to tailor resources and adjust teaching to individual needs.[footnote 32] AI chatbots respond to users’ questions by generating verbal instructions similar to those a teacher might give to help solve a problem. Pupils and learners can use the real-time feedback to improve their work.[footnote 33] When pupils and learners use generative AI for personalised learning, it can increase their motivation and improve their critical thinking skills.[footnote 34] For example, pupils using chatbots can prompt generative AI with questions and then evaluate the different viewpoints and insights provided. Instead of giving pupils the answer, the AI will use pupils’ prompts to clarify, expand, elaborate, verify and put their knowledge into context.[footnote 35]
However, there are concerns about how valid it is to use AI for setting, marking and assessing exams. Two of the main concerns often raised are that there is a narrow range of acceptable answers, and that AI markers are not able to give reasons for decisions.[footnote 36] Research also shows that AI tools can replicate human marking bias and may struggle to identify high- and low-scoring essays.[footnote 37] AI is also particularly bad at identifying exceptional and original work and may not be able to correctly assess nuance and creativity.
The literature suggests that AI could have a positive impact on teaching and learning. However, there is currently little independent academic evidence of its actual impact on outcomes.[footnote 38] This is particularly the case with generative AI tools as their newness means most of the available information comes from developers themselves.[footnote 39] Systematic reviews of academic literature and research with teachers indicate that AI can enhance personalised learning and provide immediate feedback.[footnote 40] However, these sources also suggest that relying too much on it may make it harder for pupils and learners to develop essential skills, such as essay writing, and higher-order processes such as problem-solving, creativity and critical thinking.[footnote 41] Overreliance on AI-generated responses may also weaken pupils’ capacity to engage deeply with learning material and hinder their ability to retain knowledge in the long term.[footnote 42] It may also create an illusion of learning where pupils and learners produce more but understand less.[footnote 43]
Teachers have also indicated that they are concerned about how AI use may affect them. Using AI for tasks such as marking, resource creation and lesson planning could deskill them.[footnote 44] Teachers may find that AI tools produce similar outputs regardless of their students’ specific needs or the local curriculum goals.[footnote 45] Academic research also suggests that AI may be biased towards particular pedagogical approaches.[footnote 46]
The impact of AI is also felt beyond education. For example, the literature does not give enough attention to the environmental and ecological impact of AI’s high energy use.[footnote 47] There are also concerns about the lack of transparency around the algorithms and datasets that commercial AI tools use.[footnote 48]
The overarching conclusion is that despite literature indicating there is potential for AI to have a positive impact on teaching and learning, there is currently little reliable evidence available of its actual impact on outcomes.[footnote 49]
Barriers to AI adoption
The literature indicates that adopting AI into educational systems involves navigating a range of barriers and significant challenges. For instance, the use of AI by staff and pupils needs robust safeguarding and governance frameworks. This is because generative AI systems collect and analyse large data sets and there is significant potential for breaches of data privacy. Guidelines from the UK’s Information Commissioner’s Office stress the importance of ensuring that AI tools fully comply with data protection regulations such as the General Data Protection Regulation (GDPR).[footnote 50]
A further concern is the risk of AI perpetuating or even amplifying existing biases. AI systems rely on algorithms trained on historical data, which may reflect stereotypical or outdated attitudes.[footnote 51] Education providers need strong governance mechanisms to reassure parents, staff and pupils that AI tools have been evaluated for bias, data security and suitability for different demographics, before they are adopted in practice.[footnote 52]
Schools and colleges also need to be transparent about their use of AI. This includes informing parents, particularly if pupils are using AI. The DfE’s current position is that schools and colleges should establish clear accountability and transparency around the AI systems they use.[footnote 53] Surveys indicate that parental views about AI play a crucial role in its adoption in education. For example, although parents recognise that AI can help teachers and are happy for them to use it, they are uneasy about pupils using AI, particularly outside school.[footnote 54] Engaging parents in discussions about AI use in schools and increasing their understanding of how it can be used would help overcome some of these concerns.[footnote 55]
AI’s ability to adapt and personalise resources and learning material to meet the needs of individual pupils and learners can provide significant opportunities for schools and colleges to address educational inequalities. However, the ‘digital divide’ between those who have ready access to digital technologies at home and/or at their school or college and those who do not remains a major barrier to realising this potential. Many schools, particularly those in deprived areas, do not yet have the high-speed broadband and/or adequate access to digital devices needed to support AI.[footnote 56]
Bridging this divide would require considerable investment in digital resources and training to ensure that all pupils and learners benefit equally from AI. This is particularly important for those who may be doubly disadvantaged in not having access to AI at home or school/college. There is also the possibility of a digital divide between pupils and learners in schools and FE colleges where AI is embedded in teaching and learning and those in providers yet to adopt AI. Pupils and learners in schools and FE colleges yet to adopt AI may miss out on the benefits described earlier in the section ‘Potential benefits and risks of AI in education’.
Importance of professional development
There are several factors that influence the adoption of AI in education. These include the technological infrastructure in schools and colleges, and the types of support leaders put in place to address teachers’ concerns and increase their trust in AI outputs. Successive surveys of teachers by the DfE show that, although more teachers are using AI, they are concerned about the risks.[footnote 57] There are key distinctions between the risks associated with generative AI used in education and those related to EdTech use more widely, as discussed in the ‘Potential benefits and risks of AI in education’ section. These surveys indicate that teachers need a stronger emphasis on ethical considerations alongside an understanding of how AI tools work and how they can be used safely and responsibly. Professional development needs to address both the technical and pedagogical aspects of AI, so that teachers know how to critically evaluate AI-generated outputs and manage the ethical challenges associated with AI use.[footnote 58]
Decisions to use digital technology are often influenced by what teachers believe about its benefits and how confident they are in integrating technology into their practice.[footnote 59] In the case of AI, these beliefs are more complex because of teachers’ specific fears and misconceptions about the risks of AI.[footnote 60] Along with factors directly related to AI tools, teachers have indicated that things for schools and FE colleges to consider when introducing AI include:
- not increasing teacher workload or anxiety
- having support mechanisms in place
- building teachers’ knowledge of how to use AI
- addressing misconceptions about AI and its risks[footnote 61]
Teachers’ acceptance of AI is fundamental to its successful use. Evidence suggests that teachers need to trust AI and perceive its value before they are willing to incorporate it into their practice.[footnote 62] Teachers’ willingness to use AI is also related to how easy they think it is to use. Seeing practical examples of high-quality AI in use and the impact on pupils and learners before using it themselves can make teachers more enthusiastic. This may cause them to challenge their own digital practice and what constitutes effective use of AI in classrooms.[footnote 63] As teachers become more familiar with AI and see its impact, they become significantly more willing to embed it into their practice.[footnote 64]
Teachers’ beliefs about EdTech and its impact on learning also affect whether or not they use AI in the classroom, and how they incorporate it into their pedagogy. Teachers with first-hand evidence of how it enhances teaching and learning are more likely to experiment with AI and use it in the classroom.[footnote 65] On the other hand, teachers who are sceptical about its capabilities and concerned about potential negative consequences (such as decreased teacher–pupil interaction or a loss of critical skills) can be less willing to embrace it and may delay their adoption of AI. Schools and colleges with a supportive infrastructure that gives staff the time and space as well as the agency to explore the benefits of AI in their particular context are more likely to embrace AI tools effectively.[footnote 66]
One way to think about how teachers use AI is to apply the SAMR (Substitution, Augmentation, Modification and Redefinition) model.[footnote 67] This model has 4 levels of technology use – substitution, augmentation, modification and redefinition – which range from using technology to enhance existing teaching methods to transforming the way pupils learn. When using educational technology early on, teachers often focus on the first 2 levels (substitution and augmentation), which involve replacing traditional materials with digital ones. Examples of this are lesson planning and adapting resources online, or recording lectures on video and making them available for asynchronous learning. At the last 2 levels, technology is used for tasks not previously possible, such as live interviews with AI-generated historical figures. Evidence from researchers and teachers suggests that AI is not yet being widely used to redefine teaching and learning. It tends to be used to automate existing practices rather than to develop new practices that only AI can support.
Two of the main reasons teachers give for not using AI are that they do not know enough about how to use generative AI tools, and that they are concerned about the risks.[footnote 68] Teachers must, therefore, receive comprehensive professional development so they have the guidance and support they need to use AI tools effectively and responsibly. This should address both the technical and pedagogical aspects of AI, so that teachers know how to critically evaluate AI-generated outputs and manage the ethical challenges associated with AI use.
International context for AI in education
International approaches to regulation and governance of AI in education vary considerably. Some countries have introduced new legislation to address the risks around AI use while others, such as the USA, rely on voluntary compliance with guidelines and self-regulation. Different approaches to regulation are driven by how governments believe AI development and innovation is best supported and encouraged.
In Europe, the AI Act is the first legal framework to establish rules for anyone using or developing AI tools for education. All AI in education is classed as high risk and regulated. Schools must show how they ensure that AI is used appropriately, monitor and record how they use it, and report serious incidents. EU legislation also emphasises transparency for staff, pupils and parents. If schools use AI to write reports or grade work, they must tell parents and pupils and explain the processes.
Singapore’s approach to AI regulation combines mandatory regulations with voluntary guidelines. There are no specific laws or regulations that directly regulate AI but regulators have set out compliance requirements for data, accountability, reporting incidents, security, transparency, research and testing that apply to education use. The focus of regulation is schools’ awareness of acceptable risk levels and how they address them. This approach to regulation offers both structure and flexibility.
The DfE has adopted a less prescriptive stance to AI regulation than the EU by building on existing frameworks and guidance for schools around data protection, safeguarding and intellectual property.[footnote 69] This aligns with the Department for Science, Innovation and Technology’s guidance for regulators and is designed to encourage innovation while making sure that schools and FE colleges still have robust governance structures.[footnote 70]
Estonia’s approach to regulating AI in education contrasts sharply with the EU. Use of AI in education relies on self-regulation. Ministry of Education guidance for schools on how to use AI is described as ‘suggestions’ rather than regulations. Safe, ethical and responsible use of AI is included in the digital competencies that are part of the Estonian national curriculum for all children from 4 to 18 years of age. The competencies cover all aspects of digital technology use including copyright, digital security, protection of personal data, privacy, and the environmental impact of digital technology. Schools are expected to teach and assess the competencies as part of their own curriculum.
Evidence from provider visits
The following section reports our findings from 21 online interviews with leaders who have embedded the use of AI across their MAT, school or FE college. Their views highlight the actions and decisions they took that have formed critical parts of their school or college AI journey.
Leadership of AI
Two aspects of leadership were evident in all the interviews. First, senior leaders were committed to enabling AI adoption at a strategic level. Second, they had an ‘AI champion’ who had a passion for AI as well as the expertise and seniority to communicate the benefits of AI to other staff and senior leaders. One digital lead from a MAT summed this up as follows:
What you really need is someone with leadership responsibility. Someone who really has knowledge about what’s going on in AI in education. And then someone who can speak “human” as well, rather than “techie”. And if you’ve got those 3 ingredients, which some schools have, they’re the schools that are driving forward with this. But if you miss out the knowledge of what’s going on with AI, you start to increase the risk, or you don’t know the benefits of it. If you haven’t got leadership responsibility, you can’t drive it.
These 2 aspects of leadership created the foundation for AI adoption and also a culture that balanced safe and ethical use of AI with innovation. In some providers, these 2 roles were performed by 1 person. In others, often larger colleges and MATs, this role fell to more than 1 person.
The influence of an AI champion
The introduction and uptake of AI in many settings was driven not by strategic leadership, but by a teacher or leader who championed the use of AI. We often found that these individuals played a vital role in influencing leadership and inspiring staff to embrace AI in their practice. Importantly, the AI champions we spoke to had the essential knowledge and understanding of generative AI needed to convince senior leaders, including governors and trust CEOs, and staff to adopt AI. One champion, who was the director of digital transformation in an FE college, described their impact on governors and trust CEOs as follows:
I think the biggest change was when [I was] invited to show SLT [senior leadership team] and the governors how to use it. That was the turning point, where we all could recognise that this was going to have a big impact on teacher workload, but also on how AI could impact on teaching and learning.
This description is typical of how others described the relationship between the AI champion and senior leaders. They worked in tandem to implement AI and develop an AI mindset among staff.
Case study 1 – The role of the AI champion
This FE college has always had a technology-enhanced learning environment. A few months after the launch of ChatGPT, the lead practitioner for e-learning established themselves as the AI champion and sat down with the college principal to show what AI could do.
The meeting was only booked for 30 minutes, and I was in there for an hour and a half, just demonstrating the functionality and key support that AI can provide in the learning environment. One of the first things the principal said was, ‘This is an employability skill I need students to have.’
While some nearby FE colleges had decided to ban AI, the principal felt differently and was cautiously optimistic about exploring how it aligned with what the college already had in place as a Microsoft college.[footnote 71]
The AI champion gave staff the confidence and skills to understand how to use AI effectively. This included understanding different curriculum areas and teaching staff ‘prompt engineering’.[footnote 72] As the champion moved around the different curriculum areas and teams, they tailored their training to individual staff, depending on the skills and assessment methods their learners needed. For instance, the champion would adapt their examples of what generative AI can do for learners going into healthcare, by teaching staff how it can support the learners with writing or adapting care plans for their patients. Teaching the staff the importance of knowing how to correctly prompt ChatGPT, and what knowledge staff needed to share with their learners, was a critical part of their role. As the AI champion told us: “If you put junk in, you’ll get junk out.”
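To make the ‘junk in, junk out’ point concrete, the short sketch below contrasts a vague prompt with a more structured one for the care-plan example. The wording is purely illustrative and is not taken from the college; it simply shows the kind of context, task and constraints that prompt engineering adds:

```python
# Hypothetical illustration of prompt structure; not an actual prompt used by the college.

# A vague prompt gives the model almost nothing to work with ("junk in, junk out")
vague_prompt = "Write a care plan."

# A structured prompt spells out the role, context, task and constraints
structured_prompt = (
    "You are supporting a health and social care learner.\n"
    "Context: the learner has drafted a care plan for an elderly patient recovering from a fall.\n"
    "Task: suggest improvements to the draft, keeping the learner's own wording where possible.\n"
    "Constraints: use person-centred language and list any assumptions separately "
    "so the learner can check them."
)

print(vague_prompt)
print(structured_prompt)
```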
Laying the groundwork for AI
Commonly, leaders made sure that they had secured staff buy-in for AI before introducing it across all the schools in a MAT or in individual schools and colleges. This meant having a strategy for addressing staff anxiety and fears about AI, and for dispelling some of the myths associated with AI, such as those around job losses. They also invested in raising staff awareness about the risks and challenges of AI and its potential benefits.
AI champions were frequently leading on this. As teachers, rather than IT specialists, they understood the structures and mechanisms that would support staff to use AI effectively, especially what its capabilities were in relation to teaching and learning. For instance, they were well positioned to match AI tools to what teachers actually wanted to use AI for. Often, the starting point was determining what teachers needed help with and showing them how AI could be used for this purpose, rather than using AI more generically.
Furthermore, these champions created a ‘buzz’ around AI through their own enthusiasm and passion. They played a vital role in helping to demystify AI and demonstrating what it could do. They were able to create an ‘AI mindset’ where it became the norm for teachers to use it, rather than to see AI as a shortcut. As one college head of teaching and learning standards described, they made the use of AI contagious:
What we wanted is staff in the staff room to be able to go, ‘You know I’ve just created that on [our AI tool] and it’s brilliant’… and people will start discussing it with others and… they’ll go, ‘Right. I need to have a good look at this because I’m not using this [tool].’ That is what is starting to happen now and that’s working… we’ve got to allay fears around this.
Taking risks to innovate
Nearly all the leaders we spoke to had begun to research and learn about the potential benefits and risks of AI not long after ChatGPT became available to the public in November 2022. This meant they had a particular view on AI and where it sat in the current educational context. One headteacher from an independent school told us:
We’re only just at the start of this AI era and the biggest risk is doing nothing and assuming that you can just continue as is.
The speed of AI development after the launch of ChatGPT, and inconclusive evidence about its impact, meant these leaders typically took their time to research and understand AI and the different tools available. As one MAT academies director described it, their role was to make sure they saw beyond the hype of AI to decide what was right for their needs:
When products first came out, it’s like they’re sprayed in glitter and they look shiny, sparkly and wonderful, don’t they? So, everybody’s drawn to them… So, I guess it’s our job to make sure that we don’t fly over to the flashy or shiny, sparkly products.
Another MAT chief transformation officer expressed concern over developing their AI too fast, as with everything changing so quickly, there was a danger of ‘falling into the trap of doing something too reactive’.
Typically, leaders were managing the risks by adopting a cautious approach and making sure they had the groundwork in place for AI. Most had taken small steps and built a safe foundation, so staff and pupils were ready for AI. They prioritised the safety of staff and pupils and made sure AI was used responsibly and ethically. They used internal trials and staff working groups to help them understand which tools might best meet their provider’s needs. One MAT digital lead described it as ‘getting the fundamentals and the foundation pieces right and then that makes the next level journey much easier’. Another school headteacher highlighted how a process of ‘pre-mortem’ helped with developing their initial strategy for AI:
We did this thing called a pre-mortem. So, we [sat] around a table and acted as if we had already rolled out the project and it’s failed, and then we work backwards and talk about all the reasons why it failed. There were people in SLT [senior leadership team] that said, oh, well, the children were using it to cheat, or the children put inappropriate content into it, or the children were just copying out what it gave them and put it into their homework or into their books. That suddenly allowed us to strategise [and say] OK, let’s go back to the beginning, get back in the time machine, and fix it all. And that really worked [for us].
Something these early adopters had in common was leaders who showed an appetite for risk and who were willing to innovate, once the fundamentals were in place. A school headteacher said:
We have to be able to take a calculated risk. If we want to be innovators in this space, and we want to give our children the best experience, someone’s got to do it first. And whoever does it first is going to make some mistakes. So, we’ve accepted that, but [we place] pupils’ safety right at the centre. Everything else is up for grabs around that. That’s the way we’ve approached it.
This created a culture of openness and trust that gave staff ownership of AI use and encouraged innovation. Allowing staff to ‘experiment’ with and ‘explore’ AI was a core principle for these leaders. Additionally, leaders said that they encouraged staff to share and talk about their successes and challenges in using AI with other members of staff, which often helped to support further buy-in – as one school headteacher explained:
The culture that we’ve developed and grown is about it being purposeful and responsible and professional. And therefore, actually, we felt if we lock this down and say we’re only allowing you to do things that we think are beneficial, we’re not going to get this full scope.
An assistant head from an independent school also told us:
The idea at the moment is that we’re giving staff a real free rein to kind of explore what works.
Leadership structures
Leaders recognised early on that AI encompasses several different areas of knowledge. The size of the organisation tended to determine what this looked like in practice for each provider. However, the leaders we spoke to agreed that AI was not simply an IT- or curriculum-based solution. Adopting AI would have implications across curriculum, IT, safeguarding, data management, and teaching and learning. This needed to be reflected in the leadership of AI and in who was involved in decision-making and development of a wider strategy.
The mechanisms and structures that supported the adoption of AI and encouraged staff buy-in varied by provider type. In large colleges and MATs, strategic leadership of AI drew on expertise from across the organisation. (See Appendix A for the varied roles of those leading AI in different providers.) MAT leaders described a clear structure, with leadership at different levels, that provided both day-to-day leadership of staff’s AI use and strategic thinking about AI. MATs typically had a digital leader in each academy to support staff, often the AI champion.
In larger providers and MATs, AI leadership brought together their data management teams, IT systems managers and curriculum leads. This kind of structure recognised that, unlike other forms of technology, AI required skills and knowledge across more than 1 department. This kind of structure was harder to develop in standalone primary and secondary schools. In these smaller schools, senior leaders often established staff working groups to pool knowledge.
A strategy for AI
The leaders we spoke to emphasised that senior leaders needed to have a clear vision for AI and know how they wanted staff to use it as part of their own practice. As one school headteacher suggested:
It requires all the leadership team to be on the same page with what you’re trying to achieve. Do you want this to reduce teacher workload primarily? Do you want this to enhance teaching and learning primarily, or do you want it to be both prongs of that kind of AI journey? The emphasis here is on intentionality – knowing the “why” behind the adoption is just as critical as the “how”.
However, while leaders tended to have a clear vision for AI in the short term, very few had a longer-term strategy beyond the initial testing and piloting stage. When we asked about their strategy for AI, most talked about their reasons for adopting AI and developing the guidance, policies and mechanisms that ensured safe, ethical and responsible use of AI by staff and pupils. Very few leaders had established what they hoped to achieve with AI longer term, or what success with AI looked like beyond the initial piloting stage. For example, they had rarely considered what they wanted the impact on pupils’ learning, on teaching or on staff workload to be.
This short-term way of thinking was often related to the newness of AI and the pace of change. Despite being ‘early adopters’, most of the leaders we spoke to were still learning about the technology itself and how it could align with existing practices, as this college deputy principal explained:
Because this is so new and we are learning day by day, it’s really difficult to see where that end is and what we want.
Others, such as this college principal, expressed the same sentiment more bluntly:
I think anybody who’s telling you they’ve got a strategy is lying to you because the truth of the matter is AI is moving so quickly that any plan wouldn’t survive first contact with the enemy. So, I think a strategy is overbaking it. Our approach is to be pragmatic: what works for the problems we’ve got and what might be interesting to play with for problems that might arise.
The following comment by the same college principal paints a vivid picture of how schools and colleges are focused on keeping up with the pace of change and learning about AI before deciding if and how to use it across their school, FE college or MAT:
It’s the Wild West and all we are at the minute is the sheriff. What comes in and what goes out of the town is what we’re managing to deal with at the minute. Who’s a useful citizen, who’s not a useful citizen, is what we’re making the determination of. Once that Wild West has become more of a frontier town, you can start to make informed choices.
In contrast to this short-term way of thinking, we also spoke to 5 providers who were developing pupil chatbots. These included 2 MATs with a strongly centralised curriculum and pedagogy. They saw chatbots as a way to maintain quality and consistency across the trust and to improve attainment. However, they were still in the early stages of deciding how and what to evaluate as evidence of success.
Case study 2 – AI leadership in a multi-academy trust
This MAT, which includes both primary and secondary schools, became a Google academy in 2015. The Covid-19 pandemic triggered trust leaders’ decision to become fully digital. They gave pupils and staff access to Google Classroom and provided all pupils with a Chromebook or an iPad. The trust appointed a member of the senior leadership team in each academy as a digital lead, along with a Google trainer in each academy. There was also at least 1 digital champion in each academy. Digital leads supported the implementation of the trust’s digital strategy, and the digital champions were the ‘voice pieces to champion the software and the technology’. Google trainers worked alongside the digital leads to support technology adoption and teacher training. This digital leadership meant there was a structure in place to support the adoption of AI.
After the introduction of ChatGPT, leaders realised they had to move quickly to have a position on AI and began to look at their existing systems and digital strategy. The academies’ director and chief information officer were the driving force behind AI adoption. They started talking about AI at termly meetings of academy principals and senior leaders. They aimed to dispel myths, explain what AI was, highlight its advantages and disadvantages, and demonstrate how it should and should not be used.
At this stage in their AI journey, trust leaders blocked staff access to AI tools to give themselves time to use and understand the technology before rolling it out across the MAT. They also created an AI strategy working group to test several AI products before deciding which to adopt across the MAT. The tool they chose ‘was nicely packaged for teachers and had in-built training and resources’. It was shared with principals and senior leaders before being launched across the trust for all staff to access. Staff were encouraged to explore and play around with the tool and learn how to use it by themselves. It was promoted by a ‘tip of the week’ for the tool and weekly staff bulletins. It was also referenced on any occasion where staff came together.
Trust leaders also created a digital toolkit with information about other tools staff could use that are ethical and compliant with GDPR and copyright law. The trust also tracked the use of individual AI tools and surveyed students and teachers about their use of AI twice a year. This helped with assessing where the trust was on its AI journey and the impact AI was having on education and workload.
Leaders were conscious that students were probably already using AI tools outside of school, and that their knowledge was potentially further ahead than that of the teachers. To help mitigate this, leaders decided that all staff in the academies should be taught how to use AI and also learn about compliance and good governance for the tools. Teachers were also trained on how to understand and detect students’ use of AI.
Governance of AI
The DfE has published guidance about safe and responsible use of AI.[footnote 73] However, schools and FE colleges in England can set their own rules for AI use, as long as they follow legal requirements around data protection, child safety, and intellectual property. All the leaders we interviewed had prioritised safe, ethical and responsible use of AI. Most had comprehensive policies and procedures in place to address the risks to staff, pupils and learners.
Leaders had researched the benefits and risks of AI before proceeding with any development of AI. They were aware of the risks to pupils and staff around bias, data protection, intellectual property and safeguarding. Some leaders told us they had set up their own AI strategy groups to test new AI tools before distributing them more widely to staff. They told us these groups helped to determine whether a specific AI tool was the best available product to use, the risks of the tool and how to overcome these risks. A few leaders also mentioned that they felt these groups drove product use – particularly in approving what platforms teachers and learners can use – as well as maximising the gains of any products they were using.
However, there was no clear consensus about what to include in a policy, or whether to have a separate AI policy. The policies these providers were using tended to perform several functions. They specified guidance and responsibilities for AI use by staff and students, described safe, ethical use of AI and provided information for parents. Some leaders had decided to incorporate AI use into existing policies. These tended to be the providers’ acceptable use policy, staff codes of conduct, teaching and learning strategies, and the safeguarding policy.
AI policies
Several providers were considering whether a separate AI policy might be needed in the future as their AI journey matured. One head of school curriculum said they would eventually need a separate policy to make sure that everything was covered:
So, AI is embedded, but I think that as we progress with our journey, I think we probably do need our own school AI policy. I think that’s probably what the majority of organisations moving forward will have to have because we need to be aware of, you know, hacking and in terms of GDPR. All of those things we need to encompass and consider.
However, most leaders said they found developing policies for AI quite a challenging area, largely because of the pace of change in the sector. This meant leaders were being constantly vigilant and forward-looking. Regularly reviewing their AI policies was, therefore, an important principle of governance. Several leaders said they were reviewing and updating AI-related policies at least termly, if not more often. For example, one primary school leader said they regularly reviewed their AI policy to check that it was still strong enough and that it addressed key issues adequately.
With policies in place, ultimately, it was the responsibility of leaders and staff to make sure they and their students used AI safely, ethically and responsibly and understood the risks. As one independent school deputy head explained:
Your responsibility is: know how it works, including biases and the ethical issues around how training works and how it produces responses. Be honest when you’re using it. And be responsible for whatever you create.
Keeping staff and pupils safe and using AI tools ethically and effectively required openness, robust procedures and policies and effective training. It also needed providers to create a culture that made it clear that, when it comes to safeguarding, AI was everyone’s business. As one leader said, ‘The technology itself is not inherently unsafe, just the way it is used.’
Transparency
Among the challenges leaders described was the pace of change. They had to balance the speed at which new AI tools were launched, and staff and students’ eagerness to use them, against safety and security.
Many new tools are emerging, and the technology is developing at great pace. We want to ensure we are moving at the right speed to benefit our staff and students while also ensuring their safety, the security of our systems and data, and being aware of the ethics of using these emerging technologies.
However, in a few cases, the governance around AI proved to be an inhibitor that affected the speed of uptake. For instance, one headteacher told us they were initially hesitant to make AI a formal part of their teaching and learning policy due to the fast-changing nature of the technology. Instead, their priority was to keep a close eye on safeguarding, security and online safety. Other leaders told us that they encouraged a culture of openness to mitigate the risks. If staff and pupils were talking openly about the tools they were using and how they were using them, leaders could make informed decisions about the risks and how to mitigate them.
Importantly, these leaders did not want to stop staff from innovating and experimenting, but they needed to be sure that the AI tools being used were safe and appropriate. Two providers had AI tool approval committees and a list of approved tools that staff could use. These committees included IT staff (network managers) and teaching and learning leaders, who considered both data compliance and pedagogical principles to confirm that the tools had educational value. In one college, the teaching and learning, IT and GDPR teams held a monthly ethics meeting to discuss requests to use new AI tools and decide whether to approve them for staff use. Others approved only tools they knew were safe.
Two of the leaders we spoke to suggested that the government needed to provide more guidance and support around safe AI tools and also shift the focus to learning. One independent school leader said:
What we want is safe, useful technology that doesn’t undermine the learning process. And I think there’s a huge amount of focus on the safe and useful bit and not enough focus on the not undermining learning.
Research has shown that dependence on AI tools might hinder the development of pupils’ critical thinking and problem-solving skills if they are not used effectively.[footnote 74]
Case study 2 (continued) – Updating policies for AI use
When staff were given access to AI tools within the trust, leaders began to work on their strategy and consider governance around how best to use them. In the same way they had created a digital strategy, they also developed an AI strategy. This enabled the trust to be clearer in its communication with its academies. Following this, academies were also told to update their honesty policies on producing exam work, so that each academy knew what it should and should not do around students’ use of AI. Guidance was also created on how to use AI within the MAT. This was produced in 3 versions, each covering a different type of tool and spelling out what to do and what not to do.
Using AI inside and outside the classroom
Most of the leaders we interviewed were not prescriptive about the AI tools teachers could use and how they should use them. Some had piloted different tools with staff before deciding which to buy licences for. Others had a system to approve the AI tools staff wanted to use.
Leaders were split equally between those who gave reducing teacher workload as the main reason for adopting AI and those who prioritised pupils. We have used the idea of teacher-facing and learner-facing AI tools to describe how providers were using AI and the types of tools they used.[footnote 75] Teacher-facing AI tools support teaching and are used by staff for lesson planning, creating resources and suggesting activities. They can also support administration and give personalised feedback. Learner-facing tools are used by pupils themselves and include intelligent tutoring systems and AI chatbots.
Teacher-facing and pupil-facing AI tools
We found that leaders were mainly using teacher-facing tools to reduce staff workload. This was most likely to be the case where schools had more recently decided to adopt AI.
Although leaders talked about reducing teacher workload, they often qualified this by adding that it was not about reducing teacher workload overall but increasing the time teachers could spend on the things that had a more direct impact on learning. For instance, AI allowed staff to focus on the ‘human bits’ of education that technology cannot easily replicate. A secondary school principal explained:
We just want to redistribute where that time is. So those admin staff at the front, I’d rather them not spend 2 hours spell-checking a policy or changing dates. I’d rather them be proactively chasing poor attenders, making phone calls, [doing] home visits, doing the human bits.
All the leaders we interviewed said that they used teacher-facing tools to reduce workload for both teaching and administrative staff. Commonly, teaching staff used these tools for planning lessons, adapting resources, creating quizzes and supporting revision. For example, the assistant headteacher of one school said the top categories their teachers used in their AI tool were ‘help me write’, ‘slideshow’, ‘model a text’, ‘adapt a text’, ‘lesson plan’ and ‘resource generation’.
Many described AI being used by administrative staff to reduce time spent on tasks such as writing letters to parents, summarising long documents or updating policies. One leader described using AI as ‘another person to bounce off’. They used AI to review or proofread letters, reports and other documents rather than asking another member of staff to do this.
Leaders were also clear that, although AI can help reduce workload and save time, it still needs human oversight to quality assure its outputs. Some also told us that they were mindful that AI was not always the best option and that it ‘doesn’t quite replace the expert’. Teachers needed to use their professional judgement and pedagogical knowledge to decide where AI could enhance teaching and learning and where it might not.
Providers with longer experience of using AI were more likely to be using it with pupils. Most often, we found that teachers modelled its use rather than allowing pupils to use it themselves. Leaders told us that, where teachers were using AI in the classroom to generate outputs, they could use this as an opportunity to develop pupils’ digital literacy. It was also a way for pupils to see first-hand some of the risks of using AI when users do not understand how it works, and to learn to critically evaluate its outputs. In one example, a primary school headteacher described pupils’ response to AI-generated images of doctors.
It produced 5 images of a doctor that were all white and all male. We just asked the children, ‘Tell us what you see.’ And, actually, some of the children didn’t have a clue. They didn’t clock on to it because of their own bias in their head – a doctor is a white male – but a couple of children said, ‘Well, there’s no women in here and there’s no one that looks like me.’
This demonstration of AI use in the classroom provided the catalyst for critical discussion about bias and misinformation.
FE colleges were more likely to permit learners to use AI themselves because their learners were older. Primary schools had not yet reached the stage where they allowed pupils to use AI independently. The only exceptions were schools that had developed their own chatbot and determined it was safe for pupils under 13 to use.
Personalising and adapting teaching
Most of the leaders mentioned AI’s ability to adapt and personalise resources as one of its strongest benefits. Several leaders talked about using AI tools to adapt lesson resources to make them more accessible for different groups of students, particularly those who have a special educational need and/or speak English as an additional language. As this school headteacher described:
It’s how are we enabling them to all access the curriculum and get that real quality teaching experience. As well as keeping the teachers still smiling and able to have a bit of a weekend.
During the interviews, we heard several clear examples of how learning was being personalised for specific groups of students. For instance, a lead practitioner for eLearning at one college described how AI was allowing Syrian students who spoke English as a second language to access the curriculum. Teachers used AI to translate and adapt lesson resources such as PowerPoint slides and assessments. They also generated a glossary of terms in Arabic to give students a ‘leg up’. The lead practitioner highlighted how:
It just levels the playing field and allows them to progress through the college rather than because that lecturer doesn’t have that skill, which maybe the ESOL [English for speakers of other languages] lecturers do, and ChatGPT gives them that.
Leaders from another college mentioned that staff were using AI to help young carers catch up on lessons they had missed because of their caring responsibilities. In this example, generative AI had created a 10-minute podcast from teaching slides and materials used in a full lesson. This included AI-generated voices talking about the lesson content. The idea was that learners could fit the podcast around other responsibilities or listen while travelling, such as on the bus to college.
In a further example, leaders from a secondary school told us they were training teaching assistants to use AI tools to help the pupils they supported. For this purpose, every teaching assistant had a laptop they could use to adapt resources and learning to an individual pupil’s level of understanding and/or need.
When the teaching assistants put in the pupils’ learning needs and level of understanding of a topic, the AI tool they were using was able to adapt worksheets, learning objectives and success criteria to make it easier for pupils to access learning at their level.
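The workflow described here could, in principle, be sketched as follows. This is a minimal, hypothetical illustration only: the report does not name the tool, so the function names, prompt wording and stubbed model call below are assumptions rather than the school’s actual system.

```python
# Hypothetical sketch: turning a teaching assistant's inputs (need, reading age,
# topic) into a single adaptation request for a generative AI tool. The model
# call is a stub standing in for whichever approved tool a school might use.
from dataclasses import dataclass


@dataclass
class AdaptationRequest:
    topic: str                 # e.g. "fractions of amounts"
    reading_age: int           # the pupil's current reading age in years
    need: str                  # e.g. "English as an additional language"
    original_worksheet: str    # the unadapted resource text


def build_prompt(req: AdaptationRequest) -> str:
    # Combine the pupil-level information into one instruction, asking the
    # model to adapt the worksheet, learning objective and success criteria.
    return (
        f"Adapt this worksheet on {req.topic} for a pupil with {req.need}, "
        f"reading age {req.reading_age}. Rewrite the learning objective and "
        f"success criteria at the same level, keeping the concepts intact.\n\n"
        f"{req.original_worksheet}"
    )


def call_ai_tool(prompt: str) -> str:
    """Placeholder for whichever approved AI tool the school uses."""
    return "[adapted worksheet would be returned here]"


if __name__ == "__main__":
    request = AdaptationRequest(
        topic="fractions of amounts",
        reading_age=7,
        need="English as an additional language",
        original_worksheet="Find 3/4 of 24, 2/5 of 30 and 7/8 of 64.",
    )
    print(call_ai_tool(build_prompt(request)))
```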
However, as the literature review indicates, there is a lack of research that identifies the most effective ways AI can be used to adapt and personalise learning.[footnote 76] We have found from our curriculum research reviews[footnote 77] that adaptations can be ultimately unhelpful where they provide ‘workarounds’ to immediate barriers, but fail to address these barriers so pupils can access the curriculum in full and in the long term. For instance, if all resources for a pupil are adapted to their current reading age, this could widen gaps between them and their peers. Likewise, evaluations need to determine if bitesize lessons in alternative formats ensure that intended concepts are still learned in full and avoid producing misconceptions. If we do not scrutinise adaptations in this way, then AI use for personalised learning could simply worsen issues around lowering expectations for some students. This raises the need for providers to evaluate the impact of AI on pupil outcomes and monitor how it is being used to support learning.
Pupil learning about AI
All the leaders we spoke to said that teaching students how to use AI safely was one of their top priorities. This was often because they were concerned that they could not control pupils’ and learners’ use of AI at home. One leader told us that pupils were shocked to learn about ‘deepfakes’ in particular.[footnote 78] Pupils had not realised that what they saw on social media could involve elements that were AI generated, even though they may look real.
Therefore, curriculum development was an important part of the AI package offered by these providers. Teaching pupils and learners how AI works and how it uses their data, and raising awareness of bias and misinformation, were seen as important parts of pupils’ and learners’ digital literacy and safeguarding. Some providers addressed safe, ethical use of AI through their interactions with pupils. Others had developed specific teaching units as part of their computing or personal, social and health education (PSHE) curriculum. These explained how different types of AI work and how they use data to generate the different types of outputs pupils had seen and used. Teaching about AI often covered topics such as deepfakes, safe use, hallucinations and the need to critically evaluate what AI generates.
Developing chatbots
A few leaders described how they were developing and using their own bespoke chatbots for individual pupils to use to support learning. These generated AI responses by drawing on the curricula, pedagogical approaches and intellectual property of each school, college or MAT. The chatbots were also being tailored to particular attainment challenges.
One of these leaders described AI chatbots as ‘second teachers’ in the classroom. The chatbot was providing real-time assistance and feedback on assignments as well as responding to queries. Pupils were told that the chatbot was not replacing teachers, but ‘replacing where the teacher isn’t’. Pupils could use the chatbot if they were stuck or wanted to try out different ideas and get feedback. The chatbot was designed so that it would not give pupils a direct answer, but it could help them understand why something was wrong, or what question they had to ask to get to the right answer.
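One way to read this design is as a guardrail written into the chatbot’s instructions: the model is told to guide rather than answer. The sketch below is a hypothetical illustration of that idea, not the provider’s actual chatbot; the prompt wording and the stubbed model call are assumptions.

```python
# Hypothetical sketch of a 'second teacher' chatbot constrained to guide pupils
# rather than give direct answers. The generative model call is stubbed out; in
# practice it would go to whichever service the provider built its chatbot on.
GUIDANCE_INSTRUCTIONS = (
    "You are a classroom study assistant. Never give the final answer. "
    "If the pupil is stuck, ask one short question that points to the next "
    "step. If their attempt is wrong, explain why in simple language and "
    "ask what they could try instead."
)


def call_model(system_instructions: str, pupil_message: str) -> str:
    """Placeholder for the underlying generative AI service."""
    return f"[guided response to: {pupil_message!r}]"


def tutor_reply(pupil_message: str) -> str:
    # Every pupil query is wrapped in the same guidance instructions, so the
    # chatbot 'replaces where the teacher isn't' without handing out answers.
    return call_model(GUIDANCE_INSTRUCTIONS, pupil_message)


if __name__ == "__main__":
    print(tutor_reply("Why is my answer to 3/4 + 1/8 wrong?"))
```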
In another MAT, leaders were beginning to think about how technology and AI could support their adopted pedagogical principles. They were starting to match the technology to those strategies, as this MAT chief transformation officer explained:
So, if you’re going to do modelling, this is how you could do modelling in a technology-enabled way. If you’re going to do a “think, pair, share”, this is what you could do. We’re now starting to map AI-type tools on to that so that it’s deepening the opportunities.
Case study 3 – AI chatbots in a primary school
This 2-form-entry primary school with a higher-than-average proportion of disadvantaged children started its generative AI journey in the summer term of 2024. The headteacher had a prior interest in technology and a few professional connections. Through one of these connections, the school was invited to trial and test an AI platform and chatbot that was designed and built in collaboration with teachers, and considered safe to use with children.
Before participating in the trial, and with safe use and safeguarding at the forefront of any decisions made around AI, the headteacher ensured all staff received training on AI through an inset day focused on the topic. Similarly, before the chatbot was introduced to children, the leadership team established a code of conduct and code of ethics around AI use. Pupils were also taught, as part of their curriculum, what AI was, how it worked, and the risks and benefits of using it. This was done by teachers demonstrating on screen in front of the whole class how ChatGPT could help with writing or maths, and children taking turns to ask the AI questions.
Once leaders were confident pupils understood how AI worked, Year 6 pupils were introduced to the chatbot and given the opportunity to ‘talk’ to it and find ways of using it that made sense to them. They did this through structured time on their laptops, where the teacher would first model use before pupils would try to apply this independently. Pupils were encouraged to first ask the chatbot questions they would normally want to ask a teacher, to see what answers it gave. If the response did not feel right, they were to let the teacher know. Staff noticed that, for the chatbot to be effective, pupils needed to use specific and well-thought-out questions and prompts. Limitations in pupils’ spoken and written language made a big difference to what they got out of using it. As the headteacher explained:
[It] was a learning process for us as teachers to understand that actually this technology is a little bit different to what we’re used to… In Google you can just type out a word and it will check out a lot of stuff and you can pick [what you want] from that. But with an AI bot, you’ve got to be really specific and purposeful about what you’re asking it to do.
This meant that teachers could link AI to the development of pupils’ oracy and literacy skills. A few months after the pupils had begun using the chatbot, leaders believed that they had started to see a positive impact on their metacognitive skills.[footnote 79]
The school told us the next step is to trial the chatbot in other year groups, once pupils are taught about the risks and benefits and feel ready to do so.
Assessment and feedback
Several providers told us they used AI for marking, assessment and feedback. In one school, teachers used AI to give feedback on essays, using specific criteria decided by the teacher or set by exam boards. This school also used an AI tool that produces PowerPoint outputs to create ‘low-stakes’ tests and quizzes from slides, websites, YouTube videos or PDFs. The AI marked pupils’ answers as they worked through the questions.
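As a rough sketch of the kind of criteria-based feedback workflow described above (not the school’s actual tool: the marking criteria, function names and stubbed model call are illustrative assumptions):

```python
# Hypothetical sketch of asking a generative AI tool for essay feedback against
# criteria decided by the teacher or taken from an exam board specification.
# The model call is a stub; a teacher would still review whatever comes back.
from typing import List


def build_feedback_prompt(criteria: List[str], essay: str) -> str:
    criteria_text = "\n".join(f"- {c}" for c in criteria)
    return (
        "Give formative feedback on the essay below against these criteria. "
        "Quote the relevant sentence for each point and suggest one "
        "improvement per criterion. Do not award a final mark.\n\n"
        f"Criteria:\n{criteria_text}\n\nEssay:\n{essay}"
    )


def call_ai_tool(prompt: str) -> str:
    """Placeholder for the school's approved AI tool."""
    return "[criterion-by-criterion feedback would be returned here]"


if __name__ == "__main__":
    feedback = call_ai_tool(
        build_feedback_prompt(
            criteria=["Uses evidence from the text", "Analyses language choices"],
            essay="In 'Macbeth', Shakespeare presents ambition as...",
        )
    )
    print(feedback)  # The teacher remains responsible for checking the output.
```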
However, leaders tended to be more cautious about using AI for assessment than about other ways of using it. First, they had concerns about accuracy. Second, some felt pupils wanted to know their work had been marked by a human. And third, there was a worry that using AI for this purpose might result in teachers becoming less aware of their students and their work – as this MAT academies director described:
We’ve played around with putting English assessment objectives in [to the AI tool] and then an essay in, and [asking] ‘Can you mark it?’, and ‘Tell me what you think’. And it’s not bad, but I think that’s a bit where teachers are most reticent because [they] want to know that they’ve looked at the work.
Generally, leaders were clear that using AI should never result in taking the assessment role away from teachers. Most emphasised that AI should only be used as a supportive tool, with the teacher still the expert at either end of the process. Several also mentioned that it was important that teachers were transparent about when they had used AI for assessment purposes. This was so that parents and pupils were fully informed and could voice any opinions on its use.
Case study 4 – Using AI for feedback
Learners at this FE college have been told they can use AI to mark their assignments after submitting them, to see what feedback it gives. However, leaders have told learners they need to be mindful that AI may not always produce an accurate response. The college is also aware that learners may sometimes use AI to help write their assignments. Leaders felt greater transparency was needed around how learners had used AI. They suggested some areas might need to change their feedback sheets so that the focus is on getting learners to explain how they had used AI rather than confirming whether or not they had used it.
There is also a tutorial programme that runs across the college. It trains learners on appropriate use of AI, including different ways they can use word prompts. This is considered an acceptable use of AI, as it helps to support the learning process. Leaders also described how learners can use AI to proofread a piece of writing or give feedback on how it flows.
What next for these providers
Several leaders we spoke to identified 2 aspects of their AI use that they had either not yet thought about or were just beginning to discuss. These were the pedagogical uses of AI, and how they evaluated the impact of AI.
As our study has already indicated, the first stage of AI adoption for these leaders focused on exploring the technology, before they decided how it might support specific learning outcomes and goals. They made sure teachers understood what AI was capable of and where it could potentially enhance learning and address specific challenges. They also prioritised safe, ethical and responsible use at this early exploratory stage.
Systematic thinking about integrating AI into curriculum and pedagogy was still at an early stage and was often a second stage of their journey. We found most leaders had not, as one MAT leader described, ‘thought systematically enough about how to support pedagogy through technology in this new way yet’. Leaders in 2 MATs described how their trusts were beginning to think about where AI could integrate with their own pedagogy. However, one barrier they highlighted was that available AI tools did not have a contextual understanding of their school or college and the needs of students. As this MAT chief transformation officer told us:
The OpenAI-type chatbots – ChatGPT, Gemini – do not know your curriculum. They do not know where your pupils are. They do not know what the misconceptions are typically for those topics.
This leader also explained that, for MATs with a centralised curriculum and pedagogical principles, the benefit of openly available AI tools was still limited because:
It doesn’t have the background context, and particularly the sort of curriculum, content and pedagogical approaches of our trust, which is very well defined.
Evaluating the impact of AI
We also asked leaders how they evaluated the impact of AI and understood what successful use of AI looked like. Most relied on feedback from staff and students or tracked and monitored staff usage of AI tools, rather than collecting data that could be used to measure the impact of AI specifically on pupils. Leaders told us they used low-stakes quizzes and tests to assess the impact of AI on pupils’ ability to retain and recall knowledge. However, it is not always possible to evaluate the extent to which any impact was due to AI, rather than to any other factors such as pupils’ prior knowledge, or the teaching approach.
Leaders also used direct feedback from pupils to assess the impact of using AI. One school leader had begun to include questions about AI use in their termly pupil survey. This asked pupils whether they see the purpose of AI, how subjects are using AI, and whether they see any impact from using it. Leaders also used staff surveys to understand the impact of AI on workload.
There is a lack of evidence about the impact of AI on educational outcomes or a clear understanding of what type of outcome to consider as evidence of successful AI adoption. Not knowing what to measure and/or what evidence to collect makes it hard to identify any direct impact of AI on outcomes. One school leader said that they had steered away from hard measurements of AI and its impact. They felt that this kind of accountability measure could potentially restrict staff from using and experimenting with AI. For the leaders we spoke to, success was linked to a coherent approach to introducing, using and embedding AI across the school, college or MAT. The impact of AI at the early stages of their AI journey was seen through the eyes of those using it. A positive impact was when they felt it was a useful tool and did what they wanted it to.
A few leaders who were using pupil-facing AI tools had conducted more formal evaluations of their impact on pupils. However, these collected qualitative data to understand the impact of AI on pupils’ metacognition, critical thinking and independent learning rather than measuring what pupils know, understand and can apply. As this primary school headteacher explained:
You can’t really measure how much it’s moved learning forward, but what you can do is measure some qualitative aspects of what it’s doing for the children in terms of their teaching and learning.
The lack of evaluation focusing on pupil outcomes could also be because many of these leaders were piloting and experimenting with AI as a tool to reduce teacher workload. For these early adopters, there is still some way to go if AI is to achieve its full potential and go beyond the hype and hyperbole. As one MAT chief executive officer noted:
I don’t want people to go away with the idea that we’ve got AI nailed and everybody’s using it as a tool to change, because in fact what I’ve learned over 30 years of using tech in education is there’s bandwagons… I don’t think AI is like that, but I think there’s a lot of hype around it and a lot of misunderstanding or myth and rumour about it.
Conclusion
Educators and policymakers have talked about the potential of all forms of digital EdTech to revolutionise education for at least 25 years, and UK governments and schools have invested heavily in software and internet connectivity.[footnote 80] This study has highlighted the journeys that 21 providers have been on as early adopters of the most recent form of EdTech innovation, namely AI. It provides information on the systems they have established and the barriers they have overcome to use AI in what they believe are safe and secure ways, while also being innovative and flexible in meeting the needs of staff and pupils alike.
Our study also indicates that these journeys are far from complete. The leaders we spoke to are aware that developing an overarching strategy for AI and providing effective means for evaluating its impact are still works in progress. The findings show how leaders have built and developed their use of AI. However, they also highlight gaps in knowledge that may act as barriers to effective, safe or responsible use of AI.
More research and evaluation of AI in education is required, specifically on what works effectively to achieve gains in knowledge and influence pupil outcomes. Many of the concerns around AI, particularly views about its impact on education and its potential threat to teachers’ professionalism and pupils’ knowledge, are not new. They have been raised in relation to EdTech more widely. Even 15 years ago, some believed EdTech had the potential to transform learning, while others felt there was a need for greater scrutiny of its ability to improve pupil outcomes.[footnote 81] However, some of the specific aspects of AI, such as its ability to predict and hallucinate, and the safeguarding issues it raises, create an urgent need to assess whether the intended benefits outweigh any potential risks.
The findings from this research have also helped to inform Ofsted’s own position on AI during inspection. The use of AI is not a stand-alone part of our inspection and regulation practice, and inspectors do not directly evaluate the use of AI, nor any specific AI tools. However, inspectors can consider how AI is used across the provider and its impact on the outcomes and experiences of pupils and learners. They should expect that pupils or staff members may be using AI in connection with the education or care they receive or provide (for example, to help pupils complete homework). There is no specific expectation that schools and FE colleges will use AI. However, the government is keen that they adopt and embrace AI, as set out in its AI Opportunities Action Plan.
The findings from this research will also inform the inspector training we aim to develop this summer. It will help us make sure that inspectors can record the impact of AI, how this is monitored, and the checks and balances leaders have put in place to ensure AI is used ethically and safely.
We are aware that the experiences shared by these early adopter schools and colleges do not reflect how the wider sector is using AI. More research is needed to better understand how schools and FE colleges that are earlier in their journey of AI adoption are using it, and the implications for our inspection and regulation practice. We also want to reassure ourselves that, when schools and FE colleges do use AI, it is in the best interests of children and learners. We want to enable AI innovation in the sector, and this small-scale study has been an important first step in that direction.
Appendix A: overview of participants
Provider name | Provider type | Age range | Start of AI journey | Participants (job title) |
---|---|---|---|---|
College 1 | FE college | 16 to 19+ | Nov 2023 | Deputy Principal, Head of Digital Learning |
College 2 | FE college | 16 to 19 | Feb 2023 | Head of Teaching, Learning and Digital |
College 3 | FE college | 16 to 19 | Jan 2023 | Vice Principal, Deputy Chief Executive, Lead Practitioner: eLearning |
College 4 | FE college | 14 to 19 | Nov 2022 | Principal, Director of Digital Transformation |
Independent school 1 | Independent school | 13 to 18 | Sep 2023 | Head of Digital Teaching and Learning |
Independent school 2 | Independent school | 4 to 18 | Nov 2022 | Assistant Head – Staff Development, Head of IT |
Independent school 3 | Independent prep school | 4 to 11 | Nov 2022 | Deputy Head – Innovation and Partnerships |
MAT 1 | Multi-academy trust | N/A | Feb 2023 | Digital Lead, CEO |
MAT 2 | Multi-academy trust | N/A | Mar 2024 | Chief Transformation Officer |
MAT 3 | Multi-academy trust | N/A | 2015 | Chief Information Officer, Academies Director |
MAT 4 | Multi-academy trust | N/A | Jun 2023 | Project Manager, EdTech and AI Lead, Director: Curriculum and Assessment, Digital Lead, Director of Teaching and Learning, ICT Infrastructure Architect |
Pilot 1 | Academy converter | 11 to 18 | Sep 2023 | Headteacher, Assistant Headteacher |
Pilot 2 | FE and HE college group | 16+ | Sep 2023 | Deputy Head of Digital Innovation, Change and Transformation Manager |
School 1 | University technical college | 14 to 19 | Academic year 2022/23 | Head of Social Sciences, Principal, Assistant Principal/Designated Safeguarding Lead |
School 2 | Pupil referral unit | 11 to 18 | Summer 2023 | Head of school: remote and outreach, Head of School Curriculum |
School 3 | Academy sponsor-led – multi-academy trust | 11 to 18 | Summer 2023 | Principal |
School 4 | Voluntary-aided school | 4 to 11 | Summer 2023 | Digital Leader, Headteacher, Upper Key Stage 2 Phase Leader |
School 5 | Community school | 3 to 11 | Nov 2023 | Assistant headteacher, Year 5 teacher, Headteacher |
School 6 | Academy sponsor-led – multi-academy trust | 2 to 11 | Summer 2023 | Headteacher and Designated Safeguarding Lead, Year 3 teacher |
School 7 | Community school | 3 to 11 | Summer 2024 | Headteacher |
School 8 | Academy converter – multi-academy trust | 2 to 11 | 2023 | MAT CEO and Executive Headteacher, English Hub and Professional Development Lead |
Appendix B: detailed research methods
This was a small-scale, in-depth qualitative research project commissioned by the DfE. The aim was to understand how and why early adopter schools and FE colleges had embedded AI. There was no intention to assess the impact of AI or evaluate the quality of the AI tools being used. Rather, the aim was to understand the journeys these schools and FE colleges have been on and the practice they have developed, and to share this with other schools and FE colleges that may have an interest in adopting AI.
Our high-level research question was:
How are schools and FE colleges that are early adopters of AI using it to support teaching and learning as well as to manage administrative systems and processes?
Data collection
We collected evidence from:
- a rapid evidence review of peer-reviewed academic literature – we prioritised recent systematic reviews and meta-analyses that included research on generative AI
- a review of DfE publications relating to AI, including policy statements, surveys, research and guidelines
- discussions with international inspectorates about their approaches to inspecting AI; where possible, we also reviewed their publications relating to AI
- 21 online interviews, including 2 pilot interviews, with leaders or those with specific responsibility for developing and implementing AI from a purposive sample of schools, FE colleges and trusts who have already adopted AI
The data we collected from the review of publications and discussions with international inspectorates informed the questions we asked during the interviews.
The interviews were carried out by 2 of His Majesty’s Inspectors and 1 Ofsted inspector between January and February 2025. They were semi-structured and conducted online. We asked providers for a maximum of 3 people to join the interview. We carried out 2 pilot interviews to check the validity of our interview questions. The data collected from the pilots is included in our final analysis, as there was little change between the questions asked on the pilots and in the remaining interviews. Each interview was 90 minutes long and broadly covered 3 main topics:
- strategic leadership and oversight
- governance and safe and ethical use of AI
- how AI is used by teachers and pupils/learners
We carried out the research in line with Ofsted’s research ethics policy and it was approved by our research ethics committee.[footnote 82] All participants gave us their full consent to be involved in the research. We used AI to produce a structure and early first draft of the literature review based on the project research questions and the literature identified by researchers working on the project. The AI draft was rewritten and edited by humans. External experts reviewed the final report and provided feedback on the validity of our findings and the literature review.
Sample selection
The aim of the online interviews was to illustrate what some leaders had put in place to support AI adoption and use. This was so we could identify common strategies that appeared to be effective, and which could be used in similar contexts. We therefore selected a purposive sample of settings to invite to interview.
We have defined early adopters as MATs, schools or FE colleges where leaders have been supporting and embedding the use of generative AI by staff for at least 12 months. In these settings, AI is used by staff to enhance teaching and learning and streamline administrative processes and procedures. For most of these schools and colleges, their AI journey began a few months after the launch of ChatGPT when leaders saw early on the potential for generative AI to reduce teacher workload and/or support pupil learning.
We carried out a range of validation checks to make sure these settings were early adopters of AI. We sourced participants from DfE recommendations, our own regional intelligence, research team contacts, recommendations from leaders we interviewed, and the AI in education website. We further validated participants by looking at what they had published about AI on their website and asking contextual questions in the email we sent inviting leaders to take part in the project. The final selection was also determined by whether leaders were available and/or willing to be part of the research.
As our focus was on schools and FE colleges that had already adopted AI, we were not concerned with identifying a nationally representative sample. However, we did want a varied mix of provider types within the purposive sample selected. This was to help identify any similarities or differences among provider types in the way they were adopting generative AI and to highlight any innovative practice to share with the sector. We included independent schools in the sample because we felt that their access to more resources might mean they are further ahead in their AI journey than government-funded schools. The final sample included 5 primary schools, 3 secondary schools, 1 pupil referral unit, 5 MATs/college groups, 4 FE colleges and 3 independent schools.
Data analysis
We received participants’ consent to use Microsoft Teams to record the video and audio from the interview. Microsoft Teams was also used to transcribe the audio. We analysed the data using a thematic approach and coded data using MaxQDA. The project lead developed an initial coding framework using the themes identified in the literature as well as additional themes developed during phase 1 of the research. New codes were then added based on the interview data. The data was coded by 2 researchers, and a sample of the coded entries was checked by the project lead using the coding framework. Any proposed changes to the framework were discussed and agreed by researchers before being added to it.
Limitations
Our study has limitations in that we spoke with a purposive sample of mostly senior leaders about AI use. This gives a top-down view of the intended implementation of AI and how well they think their school or college has managed this. The decision to interview leaders and not include other staff members and pupils was determined by the nature of the research as well as time and available resource. This was a small-scale, fast-paced exploratory research project. Gathering a range of views from pupils, learners, teachers and other staff members would provide additional perspectives on what is and is not working. It would also help us understand how to implement AI more effectively and assess its impact.
As enthusiastic early adopters of AI, the leaders we spoke to are not representative of those in most schools, FE colleges or MATs. The majority of education providers have yet to adopt AI. Further research with those yet to adopt AI, and/or who are not considering using it, would help us understand better the barriers to more widespread use of AI across different types and phases of education.
-
‘Milestones for mission-led government: break down barriers to opportunity’, Prime Minister’s Office, December 2024. ↩
-
‘Generative artificial intelligence (AI) in education’, Department for Education, June 2025. ↩
-
‘AI opportunities action plan’, Department for Science, Innovation and Technology, January 2025. ↩
-
‘Generative artificial intelligence in education: call for evidence: summary of responses’, Department for Education, November 2023; J Felix and L Webb, ‘Use of artificial intelligence in education delivery and assessment’, UK Parliament, January 2024; MY Mustafa, A Tlili, G Lampropoulos, R Huang, P Jandrić, and others, ‘A systematic review of literature reviews on artificial intelligence in education (AIED): a roadmap to a future research agenda’, in ‘Smart Learning Environments’, Volume 11, Article 59, 2024. ↩
-
‘School and college voice: omnibus surveys for 2024 to 2025’, Department for Education, May 2025. ↩
-
‘School and college voice: omnibus surveys for 2024 to 2025’, Department for Education, May 2025.
J Felix and L Webb, ‘Use of artificial intelligence in education delivery and assessment’, UK Parliament, 2024; AD Samala, X Zhai, K Aoki, L Bojic and S Zikic, ‘An in-depth review of ChatGPT’s pros and cons for learning and teaching in education’, in ‘International Journal of Interactive Mobile Technologies (IJIM)’, Volume 18, Issue 02, 2024, pages 96 to 117; MY Mustafa, A Tlili, G Lampropoulos, R Huang, P Jandrić, and others, ‘A systematic review of literature reviews on artificial intelligence in education (AIED): a roadmap to a future research agenda’, in ‘Smart Learning Environments’, Volume 11, Article 59, 2024. ↩ -
J Felix and L Webb, ‘Use of artificial intelligence in education delivery and assessment’, UK Parliament, 2024. ↩
-
M Cukurova, ‘The interplay of learning, analytics and artificial intelligence in education: a vision for hybrid intelligence’, in ‘British Journal of Educational Technology’, Volume 56, Issue 2, 2025, pages 469 to 488.
M Montenegro-Rueda, J Fernández-Cerero, JM Fernández-Batanero and E López-Meneses, ‘Impact of the implementation of ChatGPT in education: a systematic review’, in ‘Computers’, Volume 12, Issue 8, 2023, pages 153; GS Semwaiko, W-H Chao and C-Y Yang, ‘Transforming k-12 education: a systematic review of AI integration’, in ‘International Journal of Educational Technology and Learning’, Volume 17, Issue 2, 2024, pages 43 to 63. ↩ -
MY Mustafa, A Tlili, G Lampropoulos, R Huang, P Jandrić, and others, ‘A systematic review of literature reviews on artificial intelligence in education (AIED): a roadmap to a future research agenda’, in ‘Smart Learning Environments’, Volume 11, Article 59, 2024.
W Holmes and I Tuomi, ‘State of the art and practice in AI in education’, in ‘European Journal of Education’, Volume 57, Issue 4, 2022, pages 531 to 691. ↩ -
MY Mustafa, A Tlili, G Lampropoulos, R Huang, P Jandrić, and others, ‘A systematic review of literature reviews on artificial intelligence in education (AIED): a roadmap to a future research agenda’, in ‘Smart Learning Environments’, Volume 11, Article 59, 2024. ↩
-
J Felix and L Webb, ‘Use of artificial intelligence in education delivery and assessment’, UK Parliament, 2024. ↩
-
‘Early adopter’, Cambridge English Dictionary, accessed 4 June 2025. ↩
-
See Appendix A for further details about the leaders we interviewed. ↩
-
Education technology (EdTech) refers to technology used to support teaching and the day-to-day management of education providers; see: ‘Generative artificial intelligence (AI) in education’. ↩
-
Multi-academy trusts are groups of 2 or more schools in England that work together as a single organisation. They are funded directly by the DfE and are independent of local authority control. ↩
-
A chatbot is a computer program that simulates human conversation with an end user. They frequently use conversational AI techniques such as natural language processing (NLP) to understand user questions and automate responses to them. ↩
-
J Felix and L Webb, ‘Use of artificial intelligence in education delivery and assessment’, UK Parliament, 2024; ‘Generative artificial intelligence (AI) in education’, Department for Education, June 2025. ↩
-
VR Lee, D Pope, S Miles and RC Zárate, ‘Cheating in the age of generative ai: a high school survey study of cheating behaviours before and after the release of ChatGPT’, in ‘Computers and Education: Artificial Intelligence’, Volume 7, 2024, pages 100253; ‘Cyber security risks to artificial intelligence’, Department for Science, Innovation and Technology, May 2024. ↩
-
‘Generative artificial intelligence (AI) in education’, Department for Education, June 2025. ↩
-
‘Teachers to get more trustworthy AI tech, helping them mark homework and save time’, Department for Science, Innovation and Technology and Department for Education, August 2024. ↩
-
‘The rise of AI in education 2024’, Bett, accessed June 2025.
‘School and college voice: omnibus surveys for 2024 to 2025’, Department for Education, May 2025 ↩ -
‘Children, young people and teachers’ use of generative AI to support literacy in 2024’, National Literacy Trust, June 2024. ↩
-
For example, in 1952 Arthur Samuel developed the first computer draughts-playing program and the first computer program to learn on its own; see: ‘A Very Short History Of Artificial Intelligence (AI)’, Forbes, April 2022. ↩
-
O Zawacki-Richter, V Marin, M Bond and F Gouverneur, ‘Systematic review of research on artificial intelligence applications in higher education – where are the educators?’, in ‘International Journal of Educational Technology in Higher Education’, Volume 16, Issue 39, 2019, pages 1 to 27. ↩
-
D Gajjar, ‘Artificial intelligence (AI) glossary’, UK Parliament, January 2024. ↩
-
‘Data science and AI glossary’, The Alan Turing Institute, accessed June 2025. ↩
-
‘What are AI hallucinations?’, International Business Machines (IBM) Corporation, September 2023. ↩
-
M Montenegro-Rueda, J Fernández-Cerero, JM Fernández-Batanero and E López-Meneses, ‘Impact of the implementation of ChatGPT in education: a systematic review’, in ‘Computers’, Volume 12, Issue 8, 2023.
K Vanlehn, C Lynch, K Schulze, J Shapiro, R Shelby, and others, ‘The Andes physics tutoring system: five years of evaluations’, in ‘International Journal of Artificial Intelligence in Education’, Volume 15, Issue 3, 2005, pages 147 to 204. ↩ -
‘Generative AI in education call for evidence: summary of responses’, Department for Education, November 2023. ↩
-
‘ChatGPT in lesson preparation - teacher choices trial’, Education Endowment Foundation, June 2025. ↩
-
R Luckin, W Holmes, M Griffiths and LB Forcier, ‘Intelligence unleashed: an argument for AI in education’, Pearson, 2016. ↩
-
GS Semwaiko, W-H Chao and C-Y Yang, ‘Transforming k-12 education: a systematic review of AI integration’, in ‘International Journal of Educational Technology and Learning’, Volume 17, Issue 2, 2024, pages 43 to 63; C Dimla, M Sumaway, JM Torres and CA Dela Cruz, ‘The role of artificial intelligence in personalized learning: enhancing student engagement and academic performance’, in ‘International Journal of Research Publication and Reviews’, Volume 5, Issue 5, 2024, pages 8,495 to 8,505. ↩
-
W Ma, OO Adesope, JC Nesbit and Q Liu, ‘Intelligent tutoring systems and learning outcomes: a meta-analysis’, in ‘Journal of Educational Psychology’, Volume 106, Issue 4, 2014, pages 901 to 918. ↩
-
GS Semwaiko, W-H Chao and C-Y Yang, ‘Transforming k-12 education: a systematic review of AI integration’, in ‘International Journal of Educational Technology and Learning’, Volume 17, Issue 2, 2024, pages 43 to 63; M Montenegro-Rueda, J Fernández-Cerero, JM Fernández-Batanero and E López-Meneses, ‘Impact of the implementation of ChatGPT in education: a systematic review’, in ‘Computers’, Volume 12, Issue 8, 2023; K-S Tang, G Cooper, N Rappa, M Cooper, C Sims, and others, ‘A dialogic approach to transform teaching, learning & assessment with generative AI in secondary education: a proof of concept’, in ‘Pedagogies: An International Journal’, Volume 19, Issue 3, 2024, pages 493 to 503. ↩
-
K-S Tang, G Cooper, N Rappa, M Cooper, C Sims, and others, ‘A dialogic approach to transform teaching, learning & assessment with generative AI in secondary education: a proof of concept’, in ‘Pedagogies: An International Journal’, Volume 19, Issue 3, 2024, pages 493 to 503. ↩
-
MY Mustafa, A Tlili, G Lampropoulos, R Huang, P Jandrić, and others, ‘A systematic review of literature reviews on artificial intelligence in education (AIED): a roadmap to a future research agenda’, in ‘Smart Learning Environments’, Volume 11, Article 59. ↩
-
‘Guidance for generative AI in education and research’, UNESCO Digital Library, 2023. ↩
-
MY Mustafa, A Tlili, G Lampropoulos, R Huang, P Jandrić, and others, ‘A systematic review of literature reviews on artificial intelligence in education (AIED): a roadmap to a future research agenda’, in ‘Smart Learning Environments’, Volume 11, Article 59, 2024. ↩
-
‘Guidance for generative AI in education and research’, UNESCO Digital Library, 2023. ↩
-
M Bond, Khosravi, De Laat, Bergdahl, Negrea, and others, ‘A meta systematic review of artificial intelligence in higher education: a call for increased ethics, collaboration, and rigour’, in ‘International Journal of Educational Technology in Higher Education’, Volume 21, Issue 4, 2024, pages 1 to 41; M Montenegro-Rueda, J Fernández-Cerero, JM Fernández-Batanero and E López-Meneses, ‘Impact of the implementation of ChatGPT in education: a systematic review’, in ‘Computers’, Volume 12, Issue 8, 2023; ‘Generative AI in education call for evidence: summary of responses’, Department for Education, November 2023; ‘School and college voice: omnibus surveys for 2024 to 2025’, Department for Education, May 2025. ↩
-
MY Mustafa, A Tlili, G Lampropoulos, R Huang, P Jandrić, and others, ‘A systematic review of literature reviews on artificial intelligence in education (AIED): a roadmap to a future research agenda’, in ‘Smart Learning Environments’, Volume 11, Article 59.
U León-Domínguez, ‘Potential cognitive risks of generative transformer-based AI chatbots on higher order executive functions’, in ‘Neuropsychology’, Volume 38, Issue 4, 2024, pages 293 to 308.
Chunpeng Zhai, Santoso Wibowo and Lily D Li, ‘The effects of over-reliance on AI dialogue systems on students’ cognitive abilities: a systematic review’, in ‘Smart Learning Environments’, Volume 11, Article 28 ↩ -
B Dong, J Bai, T Xu and Y Zhou, ‘Large language models in education: a systematic review’, in ‘2024 6th International Conference on Computer Science and Technologies in Education (CSTE)’, April 2024. ↩
-
L Messeri and MJ Crockett, ‘Artificial intelligence and illusions of understanding in scientific research’, in ‘Nature’, Volume 627, Issue 8002, 2024, pages 49 to 58. ↩
-
‘Generative AI in education call for evidence: summary of responses’, Department for Education, November 2023. ↩
-
‘EDUCAUSE AI landscape study’, EDUCAUSE, February 2024. ↩
-
B Chen, J Cheng, C Wang and V Leung, ‘Pedagogical biases in AI-powered educational tools: the case of lesson plan generators’, OSF, April 2025. ↩
-
N Selwyn, ‘On the limits of artificial intelligence (AI) in education’, in ‘Nordisk Tidsskrift for Pedagogikk Og Kritikk’, Volume 10, Issue 1, 2024. ↩
-
S Patel and D Thurston, ‘Ethics and AI’, Leadership Magazine, December 2024. ↩
-
MY Mustafa, A Tlili, G Lampropoulos, R Huang, P Jandrić, and others, ‘A systematic review of literature reviews on artificial intelligence in education (AIED): a roadmap to a future research agenda’, in ‘Smart Learning Environments’, Volume 11, Article 59, 2024. ↩
-
‘Guidance on AI and data protection’, ICO, January 2025. ↩
-
AD Samala, X Zhai, K Aoki, L Bojic and S Zikic, ‘An in-depth review of ChatGPT’s pros and cons for learning and teaching in education’, in ‘International Journal of Interactive Mobile Technologies (IJIM)’, Volume 18, Issue 02, 2024, pages 96 to 117; J Felix and L Webb, ‘Use of artificial intelligence in education delivery and assessment’, UK Parliament, 2024. ↩
-
M Bond, Khosravi, De Laat, Bergdahl, Negrea, and others, ‘A meta systematic review of artificial intelligence in higher education: a call for increased ethics, collaboration, and rigour’, in ‘International Journal of Educational Technology in Higher Education’, Volume 21, Issue 4, 2024, pages 1 to 41; F Martin, M Zhuang and D Schaefer, ‘Systematic review of research on artificial intelligence in k-12 education (2017–2022)’, in ‘Computers and Education: Artificial Intelligence’, Volume 6, 2024. ↩
-
‘Generative artificial intelligence (AI) in education’, Department for Education, June 2025. ↩
-
‘Research on parent and pupil attitudes towards the use of AI in education’, Department for Education and Responsible Technology Unit, August 2024. ↩
-
‘Generative AI in education call for evidence: summary of responses’, Department for Education, November 2023; AD Samala, X Zhai, K Aoki, L Bojic and S Zikic, ‘An in-depth review of ChatGPT’s pros and cons for learning and teaching in education’, in ‘International Journal of Interactive Mobile Technologies (IJIM)’, Volume 18, Issue 02, 2024, pages 96 to 117. ↩
-
‘Technology in schools survey report: 2022 to 2023’, Department for Education, November 2023. ↩
-
‘School and college voice: omnibus surveys for 2024 to 2025’, Department for Education, May 2025. ↩
-
R Luckin, M Cukurova, C Kent and B Du Boulay, ‘Empowering educators to be AI-ready’, in ‘Computers and Education: Artificial Intelligence’, Volume 3, 2022. ↩
-
T Nazaretsky, P Mejia-Domenzain, V Swamy, J Frej and T Käser, ‘The critical role of trust in adopting AI-powered educational technology for learning: an instrument for measuring student perceptions’, in ‘Computers and Education: Artificial Intelligence’, Volume 8, 2025. ↩
-
T Nazaretsky, M Ariely, M Cukurova and G Alexandron, ‘Teachers’ trust in AI‐powered educational technology and a professional development program to improve it’, in ‘British Journal of Educational Technology’, Volume 53, Issue 4, 2022, pages 914 to 931. ↩
-
M Cukurova, R Luckin and C Kent, ‘Impact of an artificial intelligence research frame on the perceived credibility of educational research evidence’, in ‘International Journal of Artificial Intelligence in Education’, Volume 30, Issue 2, 2020, pages 205 to 235. ↩
-
C Vidal-Hall, R Flewitt and D Wyse, ‘Early childhood practitioner beliefs about digital media: integrating technology into a child-centred classroom environment’, in ‘European Early Childhood Education Research Journal’, Volume 28, Issue 2, 2020, pages 167 to 181. ↩
-
‘Knowledge for 2030’, Organisation for Economic Co-operation and Development; F Martin, M Zhuang and D Schaefer, ‘Systematic review of research on artificial intelligence in k-12 education (2017–2022)’, in ‘Computers and Education: Artificial Intelligence’, Volume 6, 2024. ↩
-
S Kelly, S-A Kaye and O Oviedo-Trespalacios, ‘What factors contribute to the acceptance of artificial intelligence? a systematic review’, in ‘Telematics and Informatics’, Volume 77, 2023; L Yan, L Sha, L Zhao, Y Li, R Martinez-Maldonado, and others, ‘Practical and ethical challenges of large language models in education: a systematic scoping review’, in ‘British Journal of Educational Technology’, Volume 55, Issue 1, 2024, pages 90 to 112. ↩
-
T Nazaretsky, M Ariely, M Cukurova and G Alexandron, ‘Teachers’ trust in AI‐powered educational technology and a professional development program to improve it’, in ‘British Journal of Educational Technology’, Volume 53, Issue 4, 2022, pages 914 to 931. ↩
-
N Selwyn, ‘Should robots replace teachers? AI and the future of education’, Polity Press, November 2019. ↩
-
RR Puentedura, ‘Transformation, technology, and education’, Hippasus, August 2006. ↩
-
‘School and college voice: omnibus surveys for 2024 to 2025’, Department for Education, May 2025. ↩
-
‘Generative artificial intelligence (AI) in education’, Department for Education, June 2025. ↩
-
‘Implementing the UK’s AI regulatory principles: initial guidance for regulators’, Department for Science, Innovation and Technology, February 2024. ↩
-
A Microsoft College, specifically a ‘Showcase College’, is an educational institution that has been recognised by Microsoft for its innovative and effective use of technology to enhance teaching, learning and collaboration. ↩
-
Prompt engineering is the process of carefully crafting instructions (called prompts) for generative AI models to guide them towards producing desired outputs. It’s about providing the right context, instructions and examples to help the AI understand the user’s intent and generate meaningful responses. ↩
-
‘Generative artificial intelligence (AI) in education’, Department for Education, June 2025. ↩
-
B Karan and GR Angadi, ‘Potential risks of artificial intelligence integration into school education: a systematic review’, in ‘Bulletin of Science, Technology & Society’, Volume 43, Issue 3/4, 2023, pages 67 to 85. ↩
-
R Luckin, M Cukurova, C Kent and B Du Boulay, ‘Empowering educators to be AI-ready’, in ‘Computers and Education: Artificial Intelligence’, Volume 3, 2022. ↩
-
MY Mustafa, A Tlili, G Lampropoulos, R Huang, P Jandrić, and others, ‘A systematic review of literature reviews on artificial intelligence in education (AIED): a roadmap to a future research agenda’, in ‘Smart Learning Environments’, Volume 11, Article 59. ↩
-
‘Research review series: history’, Ofsted, July 2021. ↩
-
Deepfakes are manipulated digital media, often in the form of images or videos. They are created using AI to convincingly resemble someone else or to create fake content. ↩
-
Metacognition involves understanding how you learn, what strategies you use, and how well you understand what you are learning. ↩
-
CK Blackwell, AR Lauricella and E Wartella, ‘Factors influencing digital technology use in early childhood education’, in ‘Computers & Education’, Volume 77, 2014, pages 82 to 90. ↩
-
R Hermans, J Tondeur, J van Braak and M Valcke, ‘The impact of primary school teachers’ educational beliefs on the classroom use of computers’, in ‘Computers & Education’, Volume 51, Issue 4, 2008, pages 1,499 to 1,509.
N Selwyn, ‘Preface’, in ‘Education and technology: key issues and debates’, 2nd edition, Continuum Books, 2011; N Selwyn, ‘Minding our language: why education and technology is full of bullshit… and what might be done about it’, in ‘Learning, Media and Technology’, Volume 41, Issue 3, 2016, pages 437 to 443. ↩ -
‘How we carry out ethical research’, Ofsted, February 2025. ↩