Guidance for using the AI Management Essentials tool: government response
Updated 6 February 2026
Executive summary
As UK businesses increasingly adopt AI technologies, ensuring these systems are trustworthy and work as intended has become more critical than ever. In recent years, new standards and frameworks have been developed to help organisations effectively govern AI systems. While these offer important guidance, the Department for Science, Innovation and Technology’s (DSIT) engagement with industry suggests that many organisations - particularly Small and Medium Enterprises (SMEs) - find it challenging to navigate this landscape and engage with the complexity of these resources.
To address this, DSIT developed the AI Management Essentials (AIME) tool. AIME sought to distil key tenets from existing AI governance frameworks into an accessible self-assessment tool, helping businesses to assess and improve their AI governance and management practices. The proposed tool was published for consultation in November 2024. Feedback received through this process provided valuable insights to help refine the government’s approach. In particular, the feedback highlighted that approaches to AI management could vary significantly depending on an organisation’s size and role. By taking an approach agnostic to both size and role, AIME struggled to meet the diverse needs of different users, and many respondents suggested that tailoring the content to specific pathways would make the guidance as accessible and relevant to businesses as possible.
Based on this feedback, and recognising the value of existing adoption support (e.g. the Bridge AI Adoption Framework[^1]), we plan to refine our approach and develop future guidance that focuses solely on the foundational governance measures needed to support responsible AI deployment and specifically targets SMEs.
This document provides an overview of the responses to the consultation and key themes that emerged, as well as setting out the government’s response to the feedback. The consultation questions have been grouped into 4 sections, as follows:
- General impressions: There was overall support for the aims and design of the tool. Respondents generally viewed AIME as a valuable and timely initiative, recognising its potential to support organisations in navigating the complexities of AI governance.
- Structure and format: The clear and structured approach was praised for making the tool accessible to those at the start of their journey. Respondents highlighted the potential to further enhance the structure by developing multiple pathways to complete AIME depending on the size and role of an organisation.
- Assessment questions: Respondents appreciated the strong focus on foundational AI governance, recognising its value in helping organisations improve internal processes. They suggested simplifying technical language, providing additional guidance and expanding the scope to make AIME even more accessible and impactful.
- Proportionality and procurement: There was general agreement that embedding AIME in government procurement frameworks could create a strong incentive to adopt good AI practices; however, this would require careful implementation and support to avoid a potential burden on SMEs and an adverse effect on competition.
Background
AI Management Essentials (AIME) was a self-assessment tool developed by government to help businesses establish robust governance and management practices for the development and use of AI systems. AIME distilled key principles from existing AI governance frameworks to provide an accessible resource for organisations to assess and improve their AI governance and management practices. The tool was not designed to evaluate AI products or services themselves, but rather to evaluate the organisational processes that are in place to enable their responsible development and use.
AIME was designed to be used by any organisation that develops, provides or uses services that utilise AI systems as part of its standard business operations. It was sector agnostic and intended to be used by organisations of different sizes, including small to medium-sized enterprises (SMEs) and start-ups who often face particular challenges when trying to navigate the evolving AI governance landscape.
The tool was developed to provide clarity on what is needed to ensure good AI governance and management. It aimed to help organisations identify the strengths and weaknesses of their existing processes and understand how to implement baseline good practices.
In November 2024, we ran a consultation inviting feedback on AIME. This consultation aimed to help DSIT ensure that the tool was fit for purpose and would support businesses of different sizes and sectors to implement robust AI governance practices. We invited feedback from any interested party, particularly from representatives of start-ups and SMEs who develop or use AI systems.
Methodology
The consultation was open from 6 November 2024 to 29 January 2025. The survey was open to the public, and responses were received from individuals and organisations. Respondents were invited to participate via an online survey or to submit responses by email.
The consultation asked respondents 16 questions on the draft AI Management Essentials tool. Respondents did not have to answer every question.
In total, 65 responses were included in the analysis: 61 online responses and 4 email responses. For open response questions, every response was reviewed, and while not every point made by each respondent can be reflected, responses were coded to identify common themes.
A Privacy Notice was provided containing information for participants on their rights and how their responses would be used. All personally identifiable information has been removed from the analysis.
Key themes and government response
Summary of responses
General impressions
6. What are your general impressions of the AIME tool?
The vast majority of respondents regarded the AIME tool as a valuable initiative. They appreciated its clear and comprehensive approach, which they felt provided a strong foundation for organisations beginning their AI governance journey. The structured, user-friendly design was highlighted as particularly beneficial, especially for SMEs navigating the complex landscape of AI governance.
While many felt the tool was appropriately pitched, some raised concerns about the technical language and the suitability of certain questions for non-expert users, noting that smaller organisations may lack extensive AI expertise. Respondents also emphasised the importance of the tool not only prompting users to think through questions, but also offering concrete, actionable recommendations for improvement. They suggested that accompanying resources, such as exemplar templates and clear guidance, could support organisations in implementing good governance practices more effectively.
Additionally, some respondents recommended expanding the scope of the tool to cover compliance with existing regulatory frameworks and UK-specific legal requirements, suggesting this would enhance clarity in a complex space and provide an accessible resource for organisations to ensure broader compliance.
Government response
Feedback from the consultation highlights the value organisations see in AIME as a starting point for AI governance, with the majority of respondents expressing positive general impressions of the tool.
DSIT recognises the concerns raised about the tool’s potential complexity, particularly for smaller organisations, and will look to take steps to ensure our support is as straightforward and accessible as possible. We will aim to focus future guidance on core foundational AI governance practices and clearly tailor this to different audiences within an organisation. We will also look to ensure the terminology used throughout better reflects the language used by those without specialised expertise.
DSIT acknowledges that the scope of AIME was too broad and will work on ensuring there are other resources available to support organisations at different stages of AI adoption or development, for example by exploring the development of AI Adoption Hubs and signposting the existing Bridge AI funding and support opportunities. Directing organisations to the appropriate tailored resources, rather than taking an overarching one-size-fits-all approach, should enable government to better address the diverse needs and varying levels of AI expertise across UK organisations.
DSIT agrees with the feedback highlighting the value of additional guidance and actionable recommendations to support organisations in implementing effective AI governance practices.
As set out in the guidance consulted upon, a set of action points and improvement recommendations was always intended to accompany AIME upon publication. These were not included in the consultation, which focused solely on the self-assessment questionnaire. To address this need, DSIT will explore developing supplementary resources, including templates and online tools, to support future guidance.
Finally, some responses asked us to explicitly link the tool to legal standards and existing frameworks. Given the breadth of legislation and standards applicable to AI, and the rate at which these are updated and evolve, DSIT does not think it is appropriate to link to specific legislation or standards. However, we acknowledge there may be value in exploring opportunities to develop sector-specific guidance to accompany any future AI governance guidance.
Structure and format
7. Does the overall structure of the tool make sense? Why/why not?
The vast majority of respondents found the overall structure of the AIME tool sensible. Several respondents praised the logical flow of the tool, noting that it guides users through a sensible sequence of steps and addresses essential compliance areas systematically, ensuring a comprehensive assessment.
A number of responses also suggested that AIME would benefit from multiple pathways for organisations to complete, depending on the size of the organisation and how they interact with AI tools. They felt this would enable organisations to focus on their most critical areas and allocate resources effectively.
8. Would you change the order of any of the sections/questions? If yes, which questions and why?
While respondents felt the existing order of questions was generally sound, changes were suggested to improve the flow and logic of the tool, primarily to better align with practical implementation. Suggestions focused on the flow of the “Data, Risk and Impact” sections. Some respondents also recommended grouping the “Fairness” and “Bias Mitigation” sections together to create a more cohesive flow.
9. We are planning to format the final version of the tool as an interactive decision tree (loosely based on the Cyber Essentials readiness tool). Do you agree that this format is intuitive/easy to use? Why/why not?
An overwhelming majority of respondents agreed that formatting the AIME tool as an interactive decision tree would be intuitive and user-friendly. The decision tree format was seen as simplifying navigation and providing a logical progression through the assessment, supporting users to focus on one step at a time without feeling overwhelmed. Respondents felt this would be particularly helpful for SMEs with limited AI expertise.
Respondents also noted an interactive decision tree could effectively support a multiple pathways approach and enable the tool to provide tailored questions and guidance, which could better serve both SMEs and larger organisations.
Government response
The consultation responses indicate broad support for the structure and format of AIME, including the proposal for an interactive decision tree. DSIT recognises the need for an intuitive and user-friendly experience.
The suggestion to provide tailored pathways based on an organisation’s size and interaction with AI systems was a recurring theme in the consultation responses. DSIT will consider this when designing the scope and structure of future guidance to enable the responsible individuals within organisations to focus on the most relevant areas of the guidance and allocate resources effectively.
Assessment questions
10. Are there any questions that you think are difficult to answer? If yes, what are they? Why are they difficult to answer?
Feedback from respondents highlighted areas where AIME could be made more accessible and effective, particularly for non-specialist users and SMEs. The primary concerns were the use of technical terms and subjective questions, which could make it challenging for organisations without prior expertise to provide accurate or meaningful responses. Several respondents also felt certain questions implied disproportionate levels of governance for SMEs, noting that SMEs may struggle not only to answer such questions but also to implement the required technical measures.
Respondents recommended using simplified language, clear definitions, targeted case studies, and practical templates to make the tool more actionable and bridge the gap between aspiration and capability.
Finally, respondents noted the challenge of assessing third-party AI systems. Organisations using AI-as-a-Service or pre-trained models often lack visibility into vendors’ internal processes and training data, which can make it difficult to answer questions related to data provenance, bias and due diligence. Respondents suggested that the tool should offer alternative pathways for organisations in these circumstances.
11. Are there any questions that you think are superfluous/unnecessary? If yes, what are they? Why are they superfluous/not needed?
Most respondents indicated that, overall, they did not find any questions to be entirely unnecessary. Some respondents believed that certain data protection questions were too generic, as these practices should already be covered under UK GDPR. However, others felt that more GDPR requirements should be included. A few respondents also questioned the value of asking about the frequency of AI policy and system record reviews. Their reasoning was that the quality of the review is more important than the specific timetable for review and that this will vary depending on the size of the organisation.
Finally, some respondents again noted the potential limitations of a one-size-fits-all approach, as some questions are only applicable to organisations if they developed, or had a third-party develop, the systems. They suggested clarifying which questions were applicable in different scenarios or building multiple pathways dependent on the role of the organisation answering.
12. Are there any questions that you particularly liked or would find helpful for improving your internal processes? If yes, what are they? Why are they helpful/appealing?
Respondents generally found several questions helpful for improving internal processes, highlighting the tool’s value in encouraging reflection and helping organisations identify gaps and areas for improvement. Specific questions highlighted typically focused on governance, risk management, and communication. Several respondents liked that the questions focused on foundational aspects of AI governance that are often overlooked, as they felt this would support organisations to establish robust processes from the ground up and lead to tangible outcomes.
13. Are there any necessary conditions/statements/processes that are missing that organisations should be implementing? What are they?
Respondents identified some areas where the AIME tool could be strengthened by incorporating additional topics. This included adding a more structured approach to the ongoing monitoring and evaluation of AI systems after deployment. Respondents also recommended that the tool include questions addressing the development of relevant AI skills within organisations, focusing on identifying knowledge gaps and establishing continuous training programmes for employees.
In addition, multiple respondents felt that the tool should have a stronger focus on ethical considerations, especially in relation to fairness and transparency. Finally, they reiterated the suggestion that AIME should align more closely with broader legal standards, international frameworks, and sector-specific regulations.
Government response
The consultation responses affirmed the value and relevance of the assessment questions in the AIME tool. The government agrees that simplifying technical terminology, providing clear definitions, and incorporating targeted case studies could further enhance accessibility of the questions, particularly for non-expert users and SMEs.
Furthermore, DSIT acknowledges the feedback that additional clarification is needed on how to complete AIME depending on how an organisation interacts with AI tools. DSIT will consider what future guidance may be helpful to ensure organisations are clear on the purpose and target audience of the guidance and can look at alternative sources of support if they are at different stages of their journey.
DSIT also acknowledges the insights from the consultation on the need for more extensive guidance on monitoring, evaluation, and transparency. We will consider including additional technical guidance on how to effectively identify, mitigate, and monitor risks within AI systems in future guidance products.
Proportionality and procurement
14. Is the tool overly burdensome or unrealistic for the target audience (i.e. organisations with limited resources to engage with AI governance frameworks, for example start-ups and SMEs)?
There were mixed views on whether the AIME tool is overly burdensome for SMEs and start-ups. While some respondents felt the tool was accessible and straightforward, others raised concerns about the resource constraints smaller organisations may face in using the tool and implementing the governance practices required to score highly. It was noted that SMEs and start-ups often lack the time, funding, and personnel to dedicate to AI governance, and that some of the tool’s technical questions may require specialised knowledge not typically available within smaller teams.
Despite these challenges, a number of respondents suggested these risks could be mitigated through the provision of supplementary resources, such as explanatory notes, examples, and training materials to help users navigate the tool more effectively. In addition, respondents once again emphasised the potential benefits of developing multiple pathways within the tool, enabling SMEs to begin with basic practices and gradually adopt more advanced measures as their capabilities grow.
15. We are exploring the possibility of embedding AIME in government procurement frameworks. In this model, organisations supplying government with AI products and services would be required to complete the tool and demonstrate baseline responsible AI management system processes. Do you agree that this would incentivise organisations to implement responsible AI management systems?
The overwhelming majority of respondents agreed that embedding AIME in government procurement frameworks would create a strong incentive to adopt good AI practices, foster accountability and raise standards across the industry.
However, there were some caveats and concerns raised, mainly regarding the potential burden on SMEs and the need for effective implementation. Respondents also noted that there would need to be some sort of audit mechanism to assure the validity of responses, given the proposed self-assessment model of AIME.
16. Do you believe that embedding AIME in government procurement processes would have an adverse effect on competition (e.g. adding a disproportionate burden on SMEs, who are likely to have fewer resources and less capacity to fill out a tool like this, compared to larger organisations)?
Most respondents acknowledged that embedding AIME in government procurement processes could have an adverse effect on competition, primarily by adding a disproportionate burden on SMEs. Respondents noted the technical aspects of AI management can be complex, and SMEs may lack the resources and in-house expertise needed to navigate them effectively. They felt this could reduce competition by deterring SMEs from participating in government procurement or by enabling larger companies to dominate opportunities due to their ability to meet compliance requirements more efficiently.
However, many believed that this could be mitigated through careful implementation and support measures to help SMEs understand and complete AIME. Suggestions included introducing a tiered approach where AIME requirements are adapted based on appropriate thresholds, for example organisation size and/or overall value of procurement contract, as well as phased implementation to give SMEs more time to adapt and comply.
Government response
The government acknowledges the range of views on the proportionality of AIME, and the concerns raised regarding the potential burden on SMEs in adopting comprehensive AI governance practices. Any future guidance will be developed with careful consideration of the challenges many smaller firms face and with a refined focus on supporting SMEs to integrate AI management into their existing governance capabilities. We will also consider including examples and case studies to make it as accessible as possible.
DSIT will not be publishing AIME and therefore will not be making it a requirement of the government procurement process. However, the government appreciates the valuable feedback received on this proposal. The insights regarding the potential impact on competition and the need for a balanced approach will inform future policy considerations as the AI governance and public procurement landscapes continue to evolve.
Next steps
DSIT has considered the feedback on the AIME tool, recognising the value respondents see in a product that supports them to navigate the complex landscape of AI governance. We have heard the key themes that emerged from this feedback, particularly around the need to pitch guidance at the right level of expertise and the different approaches organisations will need to take to AI governance depending on their size or role. This will inform any work we undertake to develop refined guidance focused primarily on supporting SMEs deploying AI solutions to build strong foundational governance practices. We aim to make any future guidance more accessible for businesses beginning their AI adoption journeys and to support the development of a broader culture of responsible AI governance in the UK.