Guidance

Introduction to AI assurance

An introductory guide for practitioners interested in how assurance techniques can support the development of responsible AI.

Documents

Introduction to AI assurance


Details

This guide aims to help organisations better understand how AI assurance techniques can be used to ensure the safe and responsible development and deployment of AI systems. It introduces key AI assurance concepts and terms and situates them within the wider AI governance landscape.

This introduction supports the UK’s March 2023 white paper, A pro-innovation approach to AI regulation, which outlines five cross-cutting regulatory principles underpinning AI regulation, and the subsequent consultation response setting out how these principles will be put into practice. As AI becomes increasingly prevalent across all sectors of the economy, it is essential that it is well governed. AI governance refers to the range of mechanisms, including laws, regulations, policies, institutions and norms, that can be used to outline processes for making decisions about AI.

This guidance aims to provide an accessible introduction to both assurance mechanisms and global technical standards, to help industry and regulators better understand how to build and deploy responsible AI systems. The guidance will be regularly updated to reflect feedback from stakeholders, the changing regulatory environment and emerging global best practices. 

Next steps 

For more information on AI assurance and how it can be applied within your organisation, you can contact the AI assurance team: ai-assurance@dsit.gov.uk.

Updates to this page

Published 12 February 2024
