AI skills tools package
Published 29 October 2025
Applies to England
This section introduces the AI skills tools package. The tools were developed by Dr Nisreen Ameen and draw on evidence collected from 6 national workshops, an expert roundtable at The British Academy, and supplementary desk research.
The AI skills tools package contains 3 resources:
- AI Skills Framework, which maps AI skills into: technical; responsible and ethical; and non-technical domains, aligned with 3 job levels (entry, mid, and managerial).
- AI Skills Adoption Pathway Model, which sets out 9 stages of AI adoption in organisations and links each stage to changing skill needs.
- Employer AI Adoption Checklist, which is a self-assessment tool for organisations to evaluate readiness to adopt AI, identify skills gaps and plan inclusive adoption strategies.
These tools can help employers, educators, and policymakers to assess AI skills, plan training, and support responsible adoption of AI across all sectors.
The AI skills framework: mapping AI skills by job level
Across the workshops, experts repeatedly emphasised the need for a more coherent and role-sensitive definition of AI skills. In response, this fellowship project has developed an AI skills framework.
AI skills refer to the competencies and abilities required to develop, implement, manage, and interact with AI systems effectively. The framework categorises AI skills into: technical; responsible and ethical; and non-technical domains, aligned with 3 job levels. The use cases annex shows how organisations can draw on the framework to plan, assess, or support workforce development.
This structure reflects Skills England’s emphasis on employer-informed, job-relevant, and inclusive skills development. The framework has been aligned with other national and international skills frameworks.
The language used has been deliberately aligned with the UK Standard Skills Classification (SSC), which was in prototype at the time of writing. It also aligns with other UK skills classifications and frameworks.
It complements the ‘SFIA AI Skills Framework’ (v9) by translating high-level competencies into accessible, role-based guidance for non-specialist users across sectors. It also aligns with SFIA’s emphasis on functional capability, responsibility, and ethical AI use, while addressing the needs of individuals and organisations that are early in their AI adoption journey.
It complements the ‘AI Skills for Business Competency Framework’ developed by the Alan Turing Institute. The Turing framework focuses on role-based competencies across the AI lifecycle and supports long-term professional development, setting out categories such as the AI worker (using AI in daily tasks), the AI professional (developing and deploying AI systems), and the AI leader (providing strategic oversight and governance). The AI Skills Framework in this report complements this by translating these higher-level categories into practical, job-level (entry, mid, and managerial) guidance, and grouping skills into technical, responsible and ethical, and non-technical domains. In doing so, it bridges strategic, lifecycle-focused competencies with accessible, employer-facing tools that can be applied by non-specialists, SMEs, and training providers.
The framework is designed to be adaptable across sectors, roles, and organisational contexts, allowing users to modify or apply it based on specific roles and workforce needs. It can be used by non-specialist AI users across sectors, including public services, small and medium-sized enterprises (SMEs), and community organisations to support AI upskilling at foundational, operational, and strategic levels.
The framework primarily centres on skills needed to use, apply, and oversee generative AI (GenAI) in real-world settings, such as:
- text generation
- image generation
- summarisation
- AI-generated recommendations
While the skills at each level should be tailored to the tasks and responsibilities associated with that role, it is important to note that the framework is cumulative. Individuals at higher levels (for example managers or executives) are expected to possess not only strategic competencies, but also core skills found at lower levels, such as:
- AI literacy
- prompt writing
- output evaluation
This reflects the reality that leaders, who increasingly engage directly with AI tools themselves, should model effective and ethical use for others and help set organisational strategy and policies. Likewise, individuals in mid-level roles often act as bridges between operational and strategic contexts, requiring a blend of AI skills.
This layered approach to skills across organisational levels reinforces the framework’s participatory and user-centred methodology. AI upskilling is not confined to technical specialists or junior staff, but is instead distributed across all organisational roles. Hence, the framework embodies a practice-based and inclusive logic. This ensures that individuals in strategic roles remain grounded in core competencies, enabling them to model ethical AI use, support cross-functional collaboration, and respond to real-world challenges with agility and accountability.
Technical AI skills
Technical AI skills are defined as the practical, applied competencies required to operate, monitor, and guide AI systems effectively in real-world settings. These skills do not necessarily involve building algorithms from scratch, but rather applying, monitoring, and guiding existing AI systems in day-to-day work. During workshop discussions, employers, SME leaders, and training providers cited these technical AI skills.
Stakeholders stressed that technical AI skills should be taught in context. Learners and businesses need to understand how these tools apply to their specific job roles, tasks, and responsibilities, not just through abstract or vendor-led courses.
Examples of technical AI skills
| Skill area | Brief explanation | Why it matters |
|---|---|---|
| Write structured prompts | Create prompts that guide generative AI tools to produce relevant, accurate, and appropriate outputs | Ensures the quality of AI-generated content, especially in communication-heavy or customer-facing roles |
| Use low-code automation tools | Automate routine workflows using low-code AI platforms (for example tools that allow users to build conditional workflows or automate data tasks through configurable, script-light interfaces) | Enables efficiency without requiring coding knowledge, supporting productivity in SME, admin, and public sector roles |
| Analyse data using AI tools | Analyse data trends and generate insights using AI dashboards or analysis tools | Supports evidence-based decision making without needing sophisticated data science expertise |
| Apply AI tools in workflows | Use AI tools within existing job-specific systems or service provision processes | Increases relevance and usability, encouraging uptake across diverse workplace contexts |
| Adjust AI tool settings | Modify AI tool settings to suit specific tasks or improve accuracy | Enhances tool effectiveness and personalisation, particularly useful in dynamic or user-facing environments |
| Set up AI tools for task automation | Set up AI tools to perform repetitive digital tasks such as scheduling, reporting, or email responses | Reduces repetitive workload and enables staff to prioritise higher-value responsibilities |
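To make the first skill area in the table above more concrete, the sketch below shows one way a member of staff might assemble a structured prompt before pasting it into a generative AI tool. It is purely illustrative: the `build_summary_prompt` function, its parameters, and the example text are hypothetical, and the role, task, constraints, and output format structure is one common convention rather than a required standard.

```python
# Purely illustrative sketch: assembling a structured prompt (role, task,
# constraints, output format) for a summarisation task. The function name,
# parameters, and example text are hypothetical.

def build_summary_prompt(source_text: str, audience: str, max_bullets: int = 3) -> str:
    """Build a prompt that states who the AI tool is acting as, what to do,
    what limits apply, and what format the output should take."""
    return "\n".join([
        f"Role: you are an assistant preparing a briefing for {audience}.",
        f"Task: summarise the text below in no more than {max_bullets} bullet points.",
        "Constraints: use plain English, keep to facts stated in the text, "
        "and write 'not stated' rather than guessing missing details.",
        "Output format: a bulleted list, one sentence per bullet.",
        "Text:",
        source_text,
    ])

if __name__ == "__main__":
    example_text = (
        "The AI skills tools package contains 3 resources: a skills framework, "
        "an adoption pathway model, and an employer checklist."
    )
    # The printed prompt can be pasted into whichever generative AI tool is in use.
    print(build_summary_prompt(example_text, audience="a team meeting"))
```

Writing the prompt out in this structured way makes the constraints explicit, which also supports the output evaluation and bias identification skills described in the next section.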
Responsible and ethical AI skills
As AI becomes integral to workplace systems, particularly in critical sectors, ethical AI skills are essential. Responsible AI skills are defined as the ability to uphold ethical principles, ensure transparency and accountability, assess bias, and apply legal and regulatory standards when using AI tools. They enable individuals to use tools effectively, challenge AI outputs, protect people’s rights, and maintain professional standards. This was considered non-negotiable in high-stakes environments where AI affects individuals.
Responsible AI, that’s the thing that’s on the rise. Because at the moment, boards know that their organisations are deploying AI, but who at the executive level is responsible? Maybe the CTO, probably not the right person. There’s definitely a growing area here, but there may not be many people who have a background in AI and ethics.
— AI governance and policy advisor
These skills are particularly important in relation to agentic AI systems. Agentic AI refers to systems capable of autonomously executing tasks and making decisions based on user-defined goals, with minimal human supervision. Experts noted that in many workplaces, individuals are not equipped to critically assess or challenge AI-generated outputs, often assuming that tools are neutral or infallible. Embedding ethical and governance competencies into AI training ensures that technology supports, rather than overrides, human judgement, especially in roles where trust, fairness, and care are central.
Examples of responsible and ethical AI skills
| Skill area | Brief explanation | Why it matters |
|---|---|---|
| Identify bias in AI outputs | Detect when AI-generated results may be unfair, discriminatory, or misleading | Prevents harm and supports fairness and inclusion in decisions affecting individuals and communities |
| Apply data protection principles | Follow GDPR, consent, and governance requirements when using AI systems that involve personal data | Ensures legal compliance and safeguards individuals’ rights and organisational reputation |
| Guide responses to inappropriate AI decisions | Intervene or escalate when AI outputs are ambiguous, incorrect, or risk causing harm | Protects against over-reliance on AI in sensitive contexts such as healthcare, education, or recruitment |
Participants identified 2 risks from insufficient ethical AI skills:
- hesitation in adopting AI due to concerns over managing ethical risks
- uncritical reliance on AI where it is used, assuming vendors handle ethics
Both scenarios hinder safe adoption and increase harm risks, especially in high-stakes roles.
Non-technical AI skills
Participants highlighted that non-technical AI skills are the most urgently needed.
Non-technical AI skills are defined as the foundational, transferable competencies needed to understand, engage with, critically evaluate, and work efficiently with AI tools, even without technical expertise. While grouped as non-technical, some of these skills, such as AI strategy planning, involve advanced leadership, governance, and organisational decision-making. Developing these skills is particularly important for older workers, low-income adults, jobseekers re-entering the workforce, and employees in junior positions, who may lack confidence or exposure but not potential.
That critical thinking, that subjective reasoning, not just how to actually develop or leverage AI. That’s a notable issue, probably not addressed as much as the technical AI aspect.
— AI course instructor working with corporate clients
Examples of non-technical AI skills
| Skill area | Brief explanation | Why it matters |
|---|---|---|
| Use AI in your role and describe its relevance | Communicate what AI is and how it is relevant to your day-to-day responsibilities | Helps build confidence in AI use and reduces resistance or misconceptions in the workplace |
| Apply new AI tools to assess their use in your tasks | Apply AI tools to assess how they might improve or support your work | Encourages innovation, ownership of learning, and responsiveness to emerging AI systems |
| Adapt to new AI tools and contribute to peer learning | Adjust to using new tools and help others learn through informal knowledge sharing | Builds team competencies and promotes inclusive upskilling, particularly where formal training is limited |
| Plan AI strategy and responsible use expectations across departments with staff and partners | Plan and share policies, goals, and best practice guidelines for using AI in your team or organisation | Promotes clarity, trust, transparency and accountability around AI use, especially in leadership and coordination roles |
These AI skills form the foundation for all other upskilling efforts, especially for those furthest from the digital economy. Experts described these skills as enablers rather than end goals: individuals with high adaptability and AI literacy are better positioned to take up training and apply it meaningfully in work and life. Experts also stressed that these skills should be nurtured deliberately, not assumed, especially in adult learning, return-to-work programmes, and community-based training initiatives.
How to interpret the job levels
This framework uses 3 contextual levels to reflect how individuals typically engage with AI in their work. These are:
- entry-level
- mid-level
- managerial level
These levels are not tied to specific sectors or job titles. Instead, individuals and organisations are encouraged to select what best reflects their role, responsibilities, and learning needs. Individuals may span more than one level depending on their role. These categories are intended as flexible guides to help identify relevant skills, not as rigid classifications.
Prompts to guide level selection
These prompts can be used to identify which skill level best reflects your role:
- entry-level or individual use: you mainly use AI tools for your own tasks (such as support staff, students, early career professionals)
- operational or mid-level: you guide others in using AI and help embed AI tools within team or departmental workflows (such as team leaders, supervisors, coordinators, or educators)
- strategic or managerial level: you make decisions that shape how AI is adopted across the organisation, including policy, investment, and governance (such as directors, senior managers, business owners, or public officers)
AI skills framework
The AI skills framework shows the different AI skills associated with the 3 job levels. It is set out below by job level and skill domain.
Entry-level
Technical AI skills at entry-level are:
- write prompts for AI tools
- operate embedded AI features (for example, autocomplete, transcription tools)
- perform routine digital tasks (such as drafting emails) using AI tools
Responsible and ethical AI skills at entry-level are:
- assess accuracy and appropriateness of AI for tasks
- identify bias in AI outputs
- assess risks in AI-generated decisions
- apply data privacy practices
- apply data protection guidance
Non-technical AI skills at entry-level are:
- use basic AI tools to complete routine tasks
- test AI tools for application in your tasks
- apply new AI tools to support daily tasks
- assess when to seek support in using AI tools
- provide basic observations about AI results to colleagues
- prepare for training on AI tool use
Mid-level
Technical AI skills at mid-level are:
- use AI tools in job-specific workflows
- apply AI tools to role-specific tasks
- use low-code AI platforms for automation
- create basic dashboards or scheduling tools using AI features
Responsible and ethical AI skills at mid-level are:
- evaluate AI-generated content for accuracy and relevance
- identify bias in AI outputs and apply corrective measures
- assess AI outputs and apply professional judgement
- guide ethical decision-making in team settings
- apply relevant policies and frameworks to ensure responsible AI use in your role context
Non-technical AI skills at mid-level are:
- coordinate AI use with colleagues in shared processes
- use AI insights to improve service provision or decision-making
- provide feedback to improve AI use within teams
- apply new AI tools and support peer learning
Managerial level
Technical AI skills at managerial level are:
- manage AI integration into core service provision processes
- manage AI-supported automation systems across functions
- monitor AI use across service areas
Responsible and ethical AI skills at managerial level are:
- guide ethical use of AI systems using policies and standards
- manage GDPR and data ethics compliance in AI-supported processes
- define accountability for AI use within teams or departments
- apply equity, inclusion, and transparency principles to team AI use
- assess long-term risks and trust issues related to AI use
Non-technical AI skills at managerial level are:
- use AI tools aligned with team or service objectives
- plan AI strategy and responsible use expectations across departments for staff and partners
- train staff in AI tool use as part of professional development
- evaluate scalability and long-term business value of AI solutions
- plan investment decisions around AI tools and skills
- guide staff in responsible, effective AI use
- coordinate partnerships to extend AI capacity
Interpreting the AI skills framework with care
This framework is intended as a foundational tool to support inclusive, role-sensitive AI upskilling. It reflects cross-cutting AI-related competencies at different job levels, based on insights gathered from 6 national stakeholder workshops and a roundtable. The framework does not currently focus on specific occupations or sectors. Technical and ethical demands vary significantly between contexts such as health and social care, finance, digital and technology, and the creative industries, and these details are not captured at this stage.
The framework should be seen as a starting point, designed to inform initial planning and curriculum development, and should be adapted to the needs of industries and regions as part of broader Local Skills Improvement Plan (LSIP) and employer-led processes. Given the rapid pace of AI development, skill needs will evolve quickly. Future versions of this framework may incorporate additional skill domains, such as regulatory competencies or AI procurement, as workforce demands shift.
This version is best used as a flexible guide that can be tailored and expanded, rather than a fixed framework. As AI tools and workplace applications continue to evolve, it should be treated as an iterative resource and revisited regularly as part of wider strategic planning. Establishing a mechanism for periodic review would support its long-term relevance and ensure alignment with emerging technologies, job roles, and workforce needs.
AI skills adoption pathway model
The AI skills adoption pathway model can be presented as a flowchart; its 9 stages are described below.
The limited employer understanding of workforce AI upskilling needs was identified as one of the major barriers to AI skills development. AI skills adoption is not a single-step decision but a staged process that evolves over time.
The AI Skills Adoption Pathway Model outlines 9 progressive stages of AI adoption, from exploration to full integration. It shows how skill needs evolve at each stage and links directly to the AI Skills Framework by identifying when different types of AI skills (technical, responsible and ethical, non-technical) become more critical as organisations mature in their use of AI. While not all organisations will follow this path linearly, recognising where they currently sit can help inform practical planning and decision-making.
The AI Skills Adoption Pathway Model and the AI Skills Framework work together to support inclusive and practical upskilling across the workforce. The pathway outlines how organisations typically progress through stages of AI adoption, from initial awareness to strategic scaling. The AI Skills Framework complements this by providing a role-sensitive structure that identifies relevant technical, responsible and ethical, and non-technical skills for individuals at different levels. Hence, by linking organisational readiness with individual competencies, these tools help employers and training providers plan workforce development in a way that is aligned, responsive, and inclusive.
This model outlines 9 interlinked stages which reflect how organisations typically encounter, experiment with, and embed AI. These stages are not strictly sequential; some may occur in parallel or repeat, but taken together they help organisations and policymakers benchmark progress and identify priority actions.
1. Awareness
Organisations begin to encounter AI through:
- news
- peer discussions
- business events
- early market exposure
This stage is about understanding what AI is, including common tools such as generative AI, and considering its potential relevance. There is typically limited technical knowledge, but growing curiosity and strategic interest. In addition, basic non-technical AI skills are essential to initiate organisational interest.
2. Exploration
Employers begin identifying areas in their operations where AI could offer value, such as:
- reducing administrative burden
- enhancing data use
- improving communication
This stage often involves:
- informal discussions
- internal scoping
- exploratory research
Staff or managers may start identifying practical problems AI could help address. Staff begin drawing on non-technical and responsible and ethical AI skills to identify opportunities and ethical risks associated with early AI use cases.
3. Assessment
Before trialling AI tools, organisations assess their current in-house AI skills (or lack of them). This stage involves reviewing whether teams have the knowledge, digital literacy, or support needed to identify appropriate AI use cases. Organisations need to assess existing technical, non-technical, and responsible AI skills across job levels to determine readiness and skills gaps. By pausing to assess readiness, employers are better positioned to experiment effectively and responsibly.
4. Experimentation
Organisations test AI tools in low-risk, practical settings, for example, using ChatGPT to summarise emails or generate draft documents. Experimentation is typically limited to individual teams or use cases. These trials help employers assess usability, relevance, and immediate barriers to adoption. Entry- and mid-level staff begin applying basic technical AI skills while building confidence and judgement.
5. Reflection
Early experiences are reviewed and shared. Employers begin identifying:
- lessons learned
- staff reactions
- AI skills needs
- ethical concerns
- technical limitations
This reflective process often surfaces key enablers (such as staff champions) as well as structural or cultural barriers that need to be addressed. Responsible and ethical AI skills become critical as teams evaluate the effects of AI use, assess bias, and identify ethical concerns or unintended consequences.
6. Upskilling
Once early opportunities and gaps are identified, structured training is introduced to support staff competency. This may include:
- basic digital and AI skills
- prompt design
- data awareness
- principles of responsible AI use
Building AI literacy and shared language around AI is central to this stage. Targeted development of technical, non-technical, and responsible and ethical AI skills supports wider staff engagement, especially for digitally excluded or hesitant groups.
7. Integration
AI tools that have proven useful are embedded into day-to-day systems, services, or processes. Formal oversight and support mechanisms (such as guidance, documentation, or IT integration) begin to emerge. AI use shifts from ad hoc to supported and planned. As AI becomes embedded in processes, mid-level roles need to use workflow-specific technical AI skills and support others in AI adoption.
8. Strategy
Organisations begin aligning AI use with wider priorities, such as service provision improvement or workforce development. Governance structures and accountability become clearer, and AI benefits are included in investment planning, risk assessment, and performance evaluation. Senior and strategic-level skills are required to align AI use with organisational goals, including governance, risk assessment, and investment planning.
9. Scaling
AI-enabled practices are extended across teams or service areas, supported by leadership buy-in, continued learning, and system-level coordination. Organisations invest in sustainability, inclusive access, and cross-functional collaboration. Scaling marks the transition from isolated pilots to embedded, strategic use of AI. Leadership must draw on organisational and sector-wide AI capabilities to scale adoption responsibly, ensuring inclusivity, oversight, and sustainability.
Organisations can use this model as a practical reflection tool to assess their current position, identify skills needs, and guide planning using the checklist prompts.
This model is intended to be used alongside the Employer AI Adoption Checklist:
- for those at earlier stages (for example Awareness, Exploration, or Experimentation), the checklist can help surface initial use cases, reflect on risks and opportunities, and identify where upskilling and capacity-building are most needed
- for those further along (for example Upskilling, Integration, Strategy, or Scaling), the checklist supports structured workforce planning, coordinated use, and the embedding of responsible AI practices
Employer AI adoption checklist
The Employer AI Adoption Checklist acts as a practical diagnostic tool. It provides structured prompts that help employers assess their current AI skills readiness, identify workforce gaps, and plan upskilling activities.
The checklist is aligned with the AI Skills Framework and encourages organisations to evaluate their capacity across skill domains, ethical considerations, support structures, and strategic alignment.
It is designed for organisations at any stage of exploring or adopting AI, especially generative AI (GenAI) tools such as ChatGPT, image generation systems, and document summarisation platforms. It supports reflection on both skills and broader AI adoption practices.
| Prompt area | Reflection question | Why it matters | Our status |
|---|---|---|---|
| Strategic alignment | How might AI support your current work or organisational goals? Do you have an AI plan to accompany your general business plan? | Links AI adoption to your existing mission and prioritises high-value use cases | Not yet started, in progress, confident or embedded |
| Experimentation and awareness | Have any teams or staff already started using AI tools (formally or informally)? | Surfaces bottom-up adoption and identifies internal champions or risks | Not yet started, in progress, confident or embedded |
| Skills and capacity | Have you identified which AI skills your staff need, and where there are gaps in their current competencies? | Builds the case for inclusive training and identifies literacy gaps | Not yet started, in progress, confident or embedded |
| Risk and responsibility | Have you considered the risks of using AI (for example data privacy, bias, misinformation)? | Ensures AI use is safe, fair, and aligned with public trust | Not yet started, in progress, confident or embedded |
| Equity and inclusion | Are all staff able to access and benefit from AI training and tools? | Addresses potential digital exclusion and supports fair access to innovation | Not yet started, in progress, confident or embedded |
| Support and guidance | Do your staff know who to go to for AI-related support or questions? | Builds internal capacity and prevents misuse through informal peer learning | Not yet started, in progress, confident or embedded |
| Evaluation and learning | Do you have a way to learn from early experiments or pilot uses of AI? | Supports iterative improvement and helps track what works | Not yet started, in progress, confident or embedded |
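For organisations that want to keep a record of their checklist responses between reviews, the sketch below shows one possible way to capture them as simple structured data. It is a minimal illustration only: the `ChecklistEntry` structure, its field names, and the example entries are hypothetical, while the prompt areas and status wording are taken from the checklist above.

```python
# Minimal, hypothetical sketch of recording Employer AI Adoption Checklist
# responses so they can be revisited at later reviews. Prompt areas and the
# status wording mirror the checklist above; everything else is illustrative.

from dataclasses import dataclass

PROMPT_AREAS = [
    "Strategic alignment",
    "Experimentation and awareness",
    "Skills and capacity",
    "Risk and responsibility",
    "Equity and inclusion",
    "Support and guidance",
    "Evaluation and learning",
]


@dataclass
class ChecklistEntry:
    prompt_area: str
    status: str       # for example "not yet started", "in progress", "confident or embedded"
    notes: str = ""   # evidence, owners, or planned actions


def areas_not_yet_started(entries: list[ChecklistEntry]) -> list[str]:
    """Return the prompt areas still recorded as 'not yet started', to prioritise in planning."""
    return [entry.prompt_area for entry in entries if entry.status == "not yet started"]


if __name__ == "__main__":
    responses = [
        ChecklistEntry("Strategic alignment", "in progress",
                       "AI aims added to the draft business plan"),
        ChecklistEntry("Skills and capacity", "not yet started",
                       "skills audit planned using the AI Skills Framework"),
    ]
    print("Areas to prioritise:", areas_not_yet_started(responses))
```

A spreadsheet would serve the same purpose; the point is simply that recording a status and brief notes against each prompt area makes the checklist easier to revisit at later review points.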
Who it is for:
- organisations of any size and sector exploring or scaling AI adoption, for example, SMEs, large enterprises, local authorities, charities, and public services
- professionals at all levels, including team leaders, HR professionals, operations managers, educators, innovation leads, and policy influencers
- individuals responsible for workforce development or ethical tech use in their organisation
How to use it:
- individually or in a team discussion
- in workshops, strategy meetings, or AI skills and adoption reviews
What it can lead to:
- identifying training needs
- informing AI adoption strategies
- supporting inclusive and responsible AI adoption
Examples of potential actions after using the checklist:
- a college adds AI ethics and skills awareness to staff CPD days
- an SME creates a short AI ‘starter pack’ and hosts drop-in sessions for employees
- a local council reviews its AI adoption strategy to include responsible AI principles
- a charity uses the checklist to prepare a funding bid for community AI training
The AI skills tools package offers a coherent and adaptable approach for employers seeking to assess, plan, and use AI skills development across their teams.
Example use case: coordinated use of the 3 AI skills tools in practice
A regional logistics company applied the AI Skills Framework when piloting generative AI for client reporting. Entry-level staff developed prompt-writing skills (technical), learned to flag inaccuracies (responsible and ethical), and provided basic observations about AI results to colleagues (non-technical). Mid-level and managerial staff were responsible for coordination, compliance, and strategic planning.
Following the stages of the AI Skills Adoption Pathway Model:
- awareness - the company began its journey by learning about the potential of generative AI for enhancing service quality
- exploration - leadership identified client reporting as a feasible use case
- assessment - the AI Skills Framework was used to audit existing workforce capabilities, revealing technical and ethical gaps
- experimentation - a small-scale pilot was conducted with select teams
- reflection - a structured reflection process followed, incorporating staff feedback and identifying further training needs
- upskilling - the organisation initiated a targeted upskilling plan, aligned with the framework’s job-level competencies
- integration - the company embedded AI tools within client reporting workflows and aligned their use with operational goals
- strategy - managers revised internal governance and investment decisions to support responsible AI adoption
- scaling - the company set objectives for expanding AI use across multiple business units
Using the Employer AI Adoption Checklist, the business evaluated AI’s alignment with service improvement (Strategic alignment) and identified internal champions (Experimentation and awareness). Prompts helped structure a skills audit, address privacy risks, and plan support roles (Skills and capacity, Risk and responsibility, Support and guidance), enabling responsible, inclusive scaling.
This example illustrates how the AI Skills Framework (what), the AI Skills Adoption Pathway Model (when), and the Employer AI Adoption Checklist (how) can be used together to inform real-world AI adoption and workforce development in a structured, inclusive, and context-sensitive way.