Policy paper

Ofqual’s approach to regulating the use of artificial intelligence in the qualifications sector

Published 24 April 2024

Applies to England

Background

The Office of Qualifications and Examinations Regulation (Ofqual) is the independent regulator of qualifications and assessments for England. Ofqual regulates on behalf of students of all ages and apprentices to make sure that qualifications, apprenticeship end-point assessments and National Assessments are good quality. As set out in our corporate plan, Ofqual’s priorities are to:

  • secure quality and fairness for students and apprentices
  • ensure clarity, effectiveness and efficiency in the qualifications market
  • shape the future of assessment and qualifications
  • develop the organisation as an effective, expert regulator and inclusive employer

The emergence of widely accessible generative artificial intelligence (AI) in 2023 raised several new considerations for the regulation of qualifications and assessments. These include new opportunities for AI to support the design, development and delivery of high-quality assessment, alongside new risks around AI’s use in non-exam assessments. Ofqual promptly established the following 5 key objectives that have shaped our AI regulatory work. The annex includes details of how these connect with the overarching principles set out in the government’s A pro-innovation approach to AI regulation White Paper:

  • Ensuring fairness for students
  • Maintaining validity of qualifications
  • Protecting security
  • Maintaining public confidence
  • Enabling innovation

In this fast-changing environment, Ofqual has taken significant steps to support the safe delivery of current assessments and has worked alongside regulated awarding organisations to understand potential innovative uses. This has included making clear when the use of AI does not comply with Ofqual’s General Conditions of Recognition (GCRs) – the rules regulated awarding organisations must follow – and when AI is inappropriate to use in a sector where students’ results have a strong bearing on individual opportunities.

The design, development and delivery of qualifications requires deep specialist expertise, and regulated awarding organisations make use of sophisticated statistical modelling, data and analytics to support human decision-making. In some instances, such approaches can incorrectly be conflated with AI, which the AI White Paper defines as having the features of being ‘adaptive’ and ‘autonomous’. These features distinguish AI tools – which can extend beyond the purpose for which they were initially designed (adaptive) and can make decisions without a human in the process (autonomous) – from non-AI technical systems. An example of a non-AI technical system is computer-based marking where the marking matrices are clearly set by human experts and there is no potential for system adaptivity to move beyond them.
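
To make this distinction concrete, the short Python sketch below illustrates the kind of rule-based marking the paragraph above classifies as a non-AI technical system. It is a minimal, hypothetical illustration – the questions, answers and function names are invented, not drawn from any awarding organisation’s system: the marking matrix is authored entirely by human experts, applied deterministically, and cannot adapt at runtime.

    # Hypothetical illustration of a non-AI technical system: a marking
    # matrix fixed by human experts and applied with no adaptivity.
    MARKING_MATRIX = {
        "q1": {"accepted_answer": "mitochondria", "marks": 1},
        "q2": {"accepted_answer": "1815", "marks": 2},
    }

    def mark_script(responses):
        """Apply the human-set matrix verbatim: the same responses always
        receive the same marks, and the rules never change at runtime."""
        total = 0
        for question, rule in MARKING_MATRIX.items():
            answer = responses.get(question, "").strip().lower()
            if answer == rule["accepted_answer"]:
                total += rule["marks"]
        return total

    print(mark_script({"q1": "Mitochondria", "q2": "1914"}))  # prints 1

Because every decision is traceable to rules fixed in advance by human experts, such a system is neither adaptive nor autonomous in the White Paper’s sense.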

Ofqual’s approach to regulating AI

Ofqual’s priority is to ensure that where AI is used by awarding organisations, it is applied in a safe and appropriate way that does not threaten the fairness and standards of, or public confidence in, qualifications. To achieve this, Ofqual has applied a precautionary principle to the use of AI, meaningfully guarding against inappropriate use of the technology in the most critical processes – whether managing malpractice threats or in qualification design, development and delivery – while remaining open to new, compliant innovations.

The outcome of a qualification for students and apprentices can have a significant bearing on the opportunities they go on to pursue, whether they are students in schools and colleges, adults looking to change career, or individuals wishing to access work in specific industries. These outcomes are also relied on by employers, higher education institutions, and other qualifications users. A trustworthy qualifications system is an essential component of the wider economy. Given this, and the scale of the qualifications system in England, with approximately 11 million certifications annually, from some 240 awarding organisations, we have considered it important to prioritise stability and avoid the inappropriate use of untested technologies in the highest stakes assessment processes.

Ofqual has already issued clarifications around inappropriate uses of AI and convened awarding organisations to develop collective understanding. Ofqual has requested further information to be assured of the mitigations awarding organisations have in place, and we expect to add to the growing research in this area. These are among the tools Ofqual can use to ensure the appropriate application of AI. We plan to provide additional sector guidance and advice to further secure safe and well-considered innovative use, though, if we deem it appropriate, we may add, remove or amend our rules.

Managing malpractice risks

The widespread accessibility of generative AI brings new threats to some established forms of assessment. Supervised examinations are less susceptible to student misuse of the technology because existing rules prevent access to AI systems during the assessment. Similarly, practical assessments are less susceptible where the outputs are not easily replicable by AI. However, non-exam assessment (commonly referred to as ‘coursework’) – in particular where students produce a portfolio of evidence or generate content outside of examination conditions – is potentially placed under more pressure. Initial reporting about the scale of malpractice in non-exam assessment suggests that only modest numbers of cases have been identified by awarding organisations, requiring further investigation and in some cases leading to sanctions against students.

Despite current low volumes of reported malpractice, both changes in the technology and growing familiarity among users introduce uncertainty about its future impact. For this reason, Ofqual has taken steps to secure safe delivery over the short term while considering appropriate longer-term interventions.

Ahead of the 2023 exam season, Ofqual supported the production of guidance by the Joint Council for Qualifications (JCQ) to give clarity to schools and colleges about the role they play in securing the authenticity of students’ work. This guidance – since updated for 2024 – provides schools and colleges with recommendations both for the secure delivery of assessments and for detecting where AI may have been used inappropriately.

Alongside this, Ofqual has made clear to awarding organisations their obligations to assess and address the risks that AI may present to their assessments. Building on this, Ofqual has requested detailed information from all regulated awarding organisations about how they are managing AI-related malpractice risks as part of our annual Statement of Compliance activity. This has a focus on risk identification, systems and processes to identify and manage generative AI malpractice, and support issued to schools and colleges.

Awarding organisations have a key role to play in supporting centres as the potential uses of AI in assessment – and therefore what is defined as malpractice – evolve. In qualifications where nationally set content is defined by other bodies, including the Department for Education (DfE), the Institute for Apprenticeships and Technical Education (IfATE) and other authorities that designate qualifications as a ‘licence to practise’, it is for those organisations to determine whether and how the use of AI forms part of what is to be assessed. This may result in differences in whether and how AI can be used during assessment, and such decisions would need to be clearly communicated to schools, colleges and training providers by awarding organisations. The same applies to qualifications that do not have nationally set content, where awarding organisations themselves amend or introduce qualifications designed to make use of AI in the assessment process.

Using AI to mark students’ work

Ofqual wrote to all awarding organisations in September 2023 to confirm that the use of AI as the sole marker of students’ work does not comply with our regulations. Ofqual reached this view partly because such use does not meet the requirement for human judgement to be applied in marking decisions. It is also our view – in keeping with the precautionary principle – that the potential for bias, inaccuracy and a lack of transparency in how marks are awarded could introduce unfairness into the system, which would be unacceptable in the marking process. There are opportunities for AI to complement and quality assure human marking (one such arrangement is sketched below), though further information and evidence would be needed before Ofqual could be assured that the use of AI as a sole marker is appropriate in such a high stakes process.
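
As an illustration only, the sketch below shows one arrangement consistent with this position: an AI estimate is used solely to flag scripts for a second human look and never awards a mark itself. The estimator, function names and tolerance are hypothetical stand-ins, not drawn from any Ofqual requirement.

    # Hypothetical sketch of AI complementing human marking: scripts where
    # an AI estimate diverges from the human mark are flagged for human
    # re-review; the human mark stands unless another examiner changes it.
    def flag_for_review(scripts, human_marks, ai_estimate, tolerance=3):
        flagged = []
        for script_id, text in scripts.items():
            if abs(ai_estimate(text) - human_marks[script_id]) > tolerance:
                flagged.append(script_id)
        return flagged

    # Deliberately trivial stand-in estimator (word count, capped at 10),
    # used only so the example runs end to end.
    demo_estimator = lambda text: min(len(text.split()), 10)
    scripts = {
        "s1": "a short answer",
        "s2": "a much longer and fuller answer to the question",
    }
    human_marks = {"s1": 2, "s2": 4}
    print(flag_for_review(scripts, human_marks, demo_estimator))  # -> ['s2']

The design choice here is that disagreement between the AI estimate and the human mark triggers further human scrutiny, so the final judgement always remains a human one.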

Using AI in remote invigilation

For similar reasons, Ofqual clarified with regulated awarding organisations in December 2023 that the use of AI as the sole form of remote invigilation of students’ work is unlikely to be compliant with our regulations. Effective invigilation – ensuring the authenticity of students’ assessment evidence, and preventing and detecting malpractice or maladministration – is currently best secured by human involvement. Ofqual will keep this position under review in light of further research and evidence.

Co-regulation and engagement

Ofqual has adopted a co-regulatory approach to our work on AI. This recognises the collective benefit of the regulator and awarding organisations collaboratively understanding and controlling potential harms. As well as briefing sessions as part of routine regulatory forums, Ofqual’s direct engagement on AI with awarding organisations has included an in-person event with representatives from more than 20 of the largest awarding organisations with the greatest volumes of high stakes qualifications, and a webinar to which all awarding organisations were invited, with more than 190 attendees. Through these we have collectively identified some of the most viable AI use cases, establishing the risks and opportunities they present.

Ofqual is committed to supporting well-evidenced innovation, suitably balanced against the fundamental requirement that examinations are both accessible and fair to all students taking them. This is especially true where changes are introduced at scale in such a diverse and high stakes system. Ofqual launched an innovation service in late 2023 to support awarding organisations in navigating how a well-developed idea that promotes valid and efficient assessment could interact with our regulatory requirements, and to identify the regulatory risks that may emerge. The service is open to novel concepts, including uses of AI, where awarding organisations may already have trialled new applications and seek a view ahead of wider deployment.

In parallel with our awarding organisation engagement, Ofqual has liaised extensively with other regulators and across government. Our priority has been close collaboration with qualification regulation counterparts in Wales (Qualifications Wales) and Northern Ireland (CCEA Regulation). This is particularly important given the cross-jurisdictional operations of many awarding organisations, some of which also extend into overseas markets.

With differing regulatory positions potentially taking effect in different countries, awarding organisations may experience varying requirements around where and how they can use AI. To support visibility of such dynamics, Ofqual participates in the Alan Turing Institute’s “AI Standards Forum for UK Regulators” group which has an increasing international focus. This forms part of our wider collaboration with the Department for Education, other regulatory bodies within the education sector, the Department for Science, Innovation and Technology, the Institute of Regulation, and other UK regulators.

What comes next

Building on our work to date, Ofqual will retain a strong focus on robust analysis of new and changing use cases and threats posed by AI. This will reflect changes to the technology, appetite among awarding organisations to apply AI in their qualification design, development and delivery, new research, and developing mitigations. To achieve this, Ofqual plans to continue:

  • Evaluating evidence about how awarding organisations handle AI-related malpractice, alongside following up operational developments and incidents where necessary.

  • Using evidence to introduce further guidance, advice or clarifications, based on continuing use case assessments that identify where interventions are required to mitigate the harms that could come from inappropriate use of AI.

  • Researching AI marking to assess the abilities and performance of such platforms, furthering our understanding of the opportunities they present and the limitations that may need to be managed. This research could support appropriate use of AI by awarding organisations, for instance as part of the quality assurance of marking.

  • Researching perceptions of the use of AI in the design, delivery and marking of qualifications, and the implications of malpractice, to understand the views of students, teachers, parents and the wider public on the role of AI in assessment. This will add to our stakeholder engagement to understand perspectives from representative organisations, schools, colleges and training providers, and other associations.

  • Maintaining dialogue with regulated awarding organisations, building on events run during late 2023. We expect to continue close liaison to build collective knowledge about the risks and opportunities from AI, and to support the introduction of guidance and advice for awarding organisation activities.

  • Implementing AI-specific categories to be used by awarding organisations when reporting malpractice to Ofqual.

Annex - Key regulatory objectives

Ofqual’s priorities shaping its regulation of AI are set out below, including how they connect with the overarching White Paper principles:

  • Fairness for students
    • ensuring that the use of AI does not lead to unfair outcomes for students, a loss of currency in their achievements, or a lack of clarity over what constitutes malpractice
    • White Paper principles – transparency and explainability; fairness; accountability and governance; contestability and redress
  • Maintaining validity
    • recognising and managing potential threats to validity through varying applications of AI, and identifying and acting on activities more susceptible to being adversely affected by use of AI
    • White Paper principles – transparency and explainability; fairness; accountability and governance; contestability and redress
  • Protecting security
    • alertness to malpractice including assessing and addressing vulnerabilities to assessment security from AI
    • protection of student data and question paper security
    • White Paper principles – safety, security, robustness; accountability and governance
  • Maintaining public confidence
    • understanding public confidence around the use and effects of AI, and ensuring that steps are taken to maintain that confidence
    • White Paper principles – transparency and explainability; fairness; accountability and governance

In tandem with these, it is a priority for Ofqual to enable new opportunities that may emerge from the use of AI. These could be within the design, development and delivery of qualifications, or ways to generate efficiencies in awarding organisation operations. Ofqual will not prevent such innovation where it is in the best interests of students.