
Artificial intelligence malpractice and assessment - advice note (accessible)

Published 27 April 2026

Applies to England

About this advice note

This advice note supports awarding organisations (AOs) in understanding how the existing Conditions of Recognition[footnote 1] and related Guidance apply to the risks of malpractice arising from Learners’ use of artificial intelligence (AI) tools. This advice note is relevant to all qualifications regulated by Ofqual.

It aims to help AOs:

  • consider where AI-related malpractice may pose a risk to their assessments
  • understand what taking “all reasonable steps” (under General Condition A8) might look like in practice in this context
  • review whether their current arrangements remain effective as AI tools continue to develop

Advice notes sit outside Ofqual’s statutory framework of Conditions and statutory Guidance and do not create new regulatory requirements. The advice in this note reflects existing expectations and is intended to help AOs consider how those requirements apply in the context of AI‑related malpractice. AOs may therefore find it helpful in recognising and responding to risks that could affect their ability to comply with their Conditions of Recognition.

Conditions in scope

This advice note relates to awarding organisations’ obligations under the following General Conditions of Recognition:

  • General Condition A6 – Identifying and monitoring risks
  • General Condition A8 – Preventing, investigating and managing malpractice and maladministration, including statutory Guidance on malpractice and maladministration

The full requirements are set out in the published General Conditions of Recognition on Ofqual’s website. This advice note does not restate those requirements but is intended to support AOs in considering how they apply in the context of AI-related malpractice.

References to Conditions in this advice note are for ease of understanding and do not replace the published General Conditions of Recognition.

Why AI creates risks to assessment validity

AI tools are increasingly accessible and capable of generating material, such as text, images, audio, video and code. In some circumstances, Learners may be able to use these tools to produce material that does not reflect their own knowledge, skills or understanding and submit it for assessment as their own work.

This creates potential risks to the validity of assessments and the reliability of results, particularly where acceptable use is unclear or difficult to monitor.

The extent to which AI-related malpractice poses a risk will vary between qualifications and assessments. Some assessments may be more vulnerable than others, depending on how they are designed, delivered and used.

AOs should consider the risk of AI‑related malpractice in the context of their own qualifications, taking account of the factors set out below. When doing so, AOs should also take account of whether changes made to reduce vulnerability to AI‑related malpractice could have unintended consequences for the construct being assessed or for assessment validity more broadly.

Features of the assessment

Some features of an assessment may make it more, or less, vulnerable to AI‑related malpractice. AOs should consider how the design of their assessments interacts with the capabilities of AI tools when evaluating risks to assessment validity.

Relevant factors may include, for example:

  • Who sets the task
    Where an assessment task is set by the AO, the AO may have greater control over its design, which in some cases may reduce vulnerability to AI‑generated responses.

  • How specific the task is
    Tasks that are highly specific to a particular context, data set or problem may be less susceptible to AI‑related malpractice, as AI tools may be less able to generate outputs that are relevant or appropriate. However, in some cases more specific tasks may increase the risk of other forms of malpractice, and this balance should therefore be carefully considered.

  • What output is assessed
    Some forms of assessment output are currently easier for AI tools to generate than others. Assessments that rely on outputs such as extended written responses, digital images, audio or code may therefore present different risks compared to assessments based on live performances, demonstrations or the production of physical artefacts.

  • Length and timing of the assessment
    Assessments completed over extended periods, or outside supervised conditions, may provide greater opportunity for inappropriate use of AI tools than assessments completed within a controlled environment. Shorter assessments undertaken under supervision may, in some cases, reduce exposure to AI‑related malpractice.

Delivery methods

The way an assessment is delivered may also affect its vulnerability to AI‑related malpractice. AOs should consider how delivery arrangements interact with Learners’ access to AI tools, and the extent to which those arrangements support valid assessment outcomes.

Relevant factors may include, for example:

  • Level of supervision
    Assessments completed under direct supervision may, in some cases, be less vulnerable to AI‑related malpractice where supervision is effective in preventing or identifying inappropriate use of AI tools.

  • Access to digital devices and connectivity
    Where Learners complete assessments using digital devices, particularly with internet access, there may be increased opportunity for use of AI tools. More controlled assessment environments, such as traditional paper‑based examinations, may reduce exposure to AI‑related malpractice.

  • Arrangements for delivery across centres
    AOs should consider the extent to which any controls they put in place to manage risks associated with AI use can be implemented consistently and effectively across centres. Differences in centre practices, resources or understanding of requirements may affect how those controls operate in practice, and how far delivery arrangements support valid assessment outcomes.

Context of the assessment

The wider context in which an assessment is taken may also influence the likelihood of AI‑related malpractice. AOs should consider how Learner incentives, and norms around the use of technology, may affect the likelihood that Learners will use AI tools inappropriately.

Relevant factors may include, for example:

  • The stakes attached to the assessment
    Assessments that contribute to high‑stakes qualifications, or that have a significant impact on progression or overall outcomes, may create stronger incentives for Learners to use AI inappropriately to improve their results.

    Conversely, where an assessment is perceived as very low stakes, Learners may be more likely to view the use of AI as acceptable, even where it is not permitted, which may also increase the risk of AI‑related malpractice.

  • The weighting of the assessment within the qualification
    Assessments with a relatively high weighting within a qualification, or which act as a mandatory pass requirement, may be more vulnerable to AI‑related malpractice because of the incentives they create for Learners. Where malpractice occurs in such assessments, it is also likely to represent a greater overall threat to the validity of the qualification.

  • Norms of technology use within the subject area
    Expectations about the acceptable use of technology, including AI, may vary between subjects and sectors. Where assessment rules differ significantly from normal ways of working, Learners may be more likely to misunderstand or challenge restrictions on AI use.

Taking reasonable steps to prevent and manage AI‑related malpractice

Where AOs identify that assessments are vulnerable to AI‑related malpractice, they must consider what reasonable steps may be needed to prevent malpractice from occurring and, where relevant, to manage its effects. What constitutes reasonable steps will vary depending on the qualification and assessment and should be proportionate to the risks identified.

In considering possible steps, AOs should take account of how far different approaches address the risks identified, and whether they could have unintended consequences for the construct being assessed or for assessment validity more broadly.

Changes to assessment design

In some cases, risks of AI‑related malpractice may be reduced through changes to the design of assessments, for example by adapting assessment tasks or formats to reduce their susceptibility to AI‑generated outputs.

Where AOs consider making changes to assessment design, they should ensure that:

  • the assessment continues to assess the intended construct effectively
  • changes do not introduce new risks to validity or fairness
  • the assessment continues to operate within the relevant Qualification Level and Subject Level Conditions, where applicable, as well as the General Conditions of Recognition

Not all risks can be addressed through design changes, and in some cases such changes may not be appropriate.

Preventative arrangements

AOs should consider preventative arrangements aimed at reducing the likelihood of AI‑related malpractice occurring. These may relate to delivery arrangements and to the information and guidance provided to centres and Learners.

Preventative arrangements may include, for example:

  • clear communication of expectations about acceptable and unacceptable uses of AI
  • guidance or requirements for centres and Learners about AI use, where relevant
  • appropriate supervision or controls on access to digital devices or connectivity
  • arrangements to support Learner understanding of authenticity requirements, such as declarations or statements of authenticity completed before assessments take place

AOs should take account of how far these arrangements can be implemented consistently and effectively across centres, and how well they support valid assessment outcomes.

Detecting and investigating malpractice

AOs should consider arrangements for detecting and investigating suspected or alleged malpractice. This may include training for teachers, examiners and moderators, and the use of statistical or technological tools to help identify potential cases for investigation.

Such tools should be used as sources of evidence rather than as sole determinants. AOs should consider their effectiveness and limitations, including the risk of false positives or false negatives, and how outputs are interpreted. Where malpractice is suspected or established, AOs must also consider what steps are needed to manage any adverse effects, including preventing recurrence.
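Even an apparently accurate detection tool can produce misleading results where the underlying rate of malpractice is low. As a purely illustrative example, using assumed figures rather than the measured performance of any particular tool: if 20 out of 1,000 submissions involved AI‑related malpractice, a tool that correctly flagged 90% of genuine cases while incorrectly flagging 5% of legitimate work would flag 18 genuine cases alongside 49 legitimate ones. Around three in every four flagged submissions would therefore not involve malpractice, which illustrates why the outputs of such tools are better treated as a prompt for further enquiry than as conclusive evidence.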

Where no effective mitigation is available

In some cases, AOs may conclude that there are no effective or proportionate measures available to adequately address the risk of AI‑related malpractice for a particular assessment. Where this is the case, AOs may need to consider more fundamental changes, including removing or replacing an assessment, where this is possible within regulatory requirements.

Wider review of risks

Changes made to address AI‑related malpractice may give rise to other risks, with implications that will vary depending on the assessment and qualification. AOs should consider whether mitigations introduced to reduce vulnerability to AI‑related malpractice could affect the construct being assessed, the effectiveness of assessment delivery, or other aspects of assessment validity.

For example, removing internet access may mean that Learners are no longer able to demonstrate the full range of research skills being assessed, because an important research tool is no longer available to them.

Similarly, introducing more extensive checks to detect malpractice may create other risks. If moderators are overly focused on identifying AI‑related malpractice, this could reduce the time and attention available for their other moderation activities, potentially reducing their overall effectiveness.

Assessing the use of AI

In some qualifications, the use of AI tools may form part of the construct being assessed, and access to AI may therefore be appropriate or necessary. In such cases, AOs should consider how AI use is managed so that other aspects of the assessment, where AI use would be inappropriate, remain valid and secure.

This may include, for example:

  • setting clear parameters for how AI may be used within an assessment
  • requirements or guidance for centres and Learners on legitimate use of AI
  • arrangements for how Learners demonstrate or reference their use of AI, where relevant

Reviewing whether mitigations remain effective

AI tools continue to develop rapidly, as do patterns of Learner use. AOs should therefore keep their arrangements under review to ensure that measures put in place to address AI‑related malpractice remain effective and proportionate over time.

In reviewing mitigations, AOs may consider:

  • how far existing measures are reducing the risks of AI‑related malpractice identified
  • whether mitigations have had unintended consequences for the construct being assessed or for assessment validity more broadly
  • whether changes in the capabilities or availability of AI tools require adjustments to their approach

Where AI‑related malpractice or maladministration has occurred, AOs should use this information to identify any weaknesses in existing arrangements and consider whether further steps are needed to prevent recurrence.

  1. Throughout this document, words and capitalised terms have the same meaning as defined in the General Conditions of Recognition, including any relevant Qualification Level Conditions and Subject Level Conditions.