Research and analysis

Barriers to AI skills development

Published 29 October 2025

Applies to England

The UK government’s ‘Digital Inclusion Action Plan: First Steps’ policy paper and the Lloyds Bank 2024 Consumer Digital Index estimate that 1.6 million people in the UK are offline, and that around a quarter of the UK population have the lowest level of digital capability, as categorised by Lloyds’ digital capability segmentation.

Experts in this research identified systemic barriers to AI training in the UK. These are rooted in practical, structural and cultural issues such as:

  • digital infrastructure
  • adult education
  • funding inequalities
  • institutional responsiveness

The barriers outlined below cut across sectors and regions and were reported by experts working directly with learners, employers and training providers.

Inconsistent use of the term ‘AI skills’

Experts raised significant concerns about the lack of a shared or standardised understanding of what constitutes AI skills.

In some workplace settings, AI skills referred to operational fluency: the ability to use tools such as ChatGPT or Copilot to automate common tasks. In others, the term implied more abstract knowledge, such as:

  • awareness of bias
  • data protection laws
  • ethical decision-making frameworks

Experts from higher education and policy noted that AI skills often served as shorthand for digital upskilling, regardless of whether the content was genuinely AI-related or even skills-related.

This ambiguity created challenges across the learning and employment lifecycle. Curriculum developers found it hard to define learning outcomes or assessments for AI skills without established frameworks, leaving tutors to rely on tool demonstrations or generic digital tasks.

In the labour market, learners who completed AI training often lacked the language or evidence to describe what skills or knowledge they had acquired. Experts cited examples of jobseekers who were told they had been trained in AI but could not explain the difference between a generative AI tool and a rules-based automation system. As a result, these individuals were unable to articulate their skills in applications, interviews, or workplace settings.

Employers reported they hesitated to commission training due to confusion over AI terminology, with some associating it with coding and machine learning, while others expected data dashboards or compliance training. This led to mismatches between employer needs and course provision, particularly in small businesses and public sector settings where in-house expertise was limited.

This challenge was intensified by the lack of clear distinctions between foundational, operational, and strategic AI skills in regional settings. Without trusted credentials, shared skills frameworks, or benchmarked training outcomes, some employers dismissed even well-designed AI courses as vague or irrelevant.

Experts identified the lack of consistent language and signalling around AI competencies as a notable obstacle to learner literacy and employer investment. This restricted coordination across training providers, qualifications bodies, and industry.

Lack of foundational digital literacy

Experts described foundational digital literacy as the most persistent and under-addressed barrier to inclusive AI upskilling. In nearly every workshop, organisations offering AI or digital skills programmes reported that a significant proportion of learners struggled with basic tasks such as:

  • typing
  • creating or saving files
  • managing passwords
  • understanding browser functionality

Digital skills gaps were common among:

  • older adults
  • individuals with learning differences
  • low-income adults with limited access to devices

Experts explained that even in areas with strong interest in AI training, learners dropped out when platforms, course materials, or introductory modules assumed baseline competence with digital systems.

If you have not got those basic level digital skills, AI is just causing a fear of overwhelm. Really any AI skill development needs to come as a progression from having secured the basic digital skills.

— Head of Digital Inclusion

The digital divide was also worsened by hardware differences. Learners attending library or community-based AI sessions often had only limited experience using a desktop or laptop, having previously accessed the internet primarily through smartphones. This created confusion when course interfaces did not adapt to mobile, or when learners were asked to complete tasks that assumed keyboard proficiency or file system navigation.

The issue was particularly visible in training models that used pre-designed platforms or digital learning environments. Experts reported that in many cases, learners disengaged within the first hour of an AI skills course because they were unable to complete registration, navigate the dashboard, or open the training interface. In some cases, learners hesitated to ask for help due to embarrassment or previous negative experiences.

Workshops revealed a gap between learner readiness and training design, with foundational digital literacy often missing. This limits learner progression and hinders inclusion efforts.

Fragmentation across the training ecosystem

Experts identified the AI training landscape as extensive but fragmented, offered by a mix of:

  • libraries
  • FE colleges
  • universities
  • community learning hubs
  • online platforms
  • employer-led schemes
  • jobcentres
  • community organisations

This diversity lacked coordination, creating a disjointed system with limited accessibility and alignment.

In some areas, learners could access multiple training options, but these were often duplicative, hard to compare, or disconnected from local labour market needs. In other areas, particularly rural communities, AI training was described as absent or generic, leaving individuals to rely on word-of-mouth or trial-and-error to find relevant training.

Promising initiatives often operated in isolation and were unrecognised within national or regional frameworks. Several participants described delivering successful pilots that had generated strong learner outcomes but were never continued or scaled due to a lack of policy visibility, funding continuity, or cross-provider referral systems.

The absence of shared standards, cross-provider tracking, or referral protocols meant that learners could move between different providers without a clear progression pathway. In some cases, learners completed self-guided AI training online, only to find that it was not recognised by local employers or connected to further qualifications.

Experts also highlighted that the fragmentation was not only horizontal (between providers in the same region), but vertical, with national initiatives, regional skills partnerships, and local delivery organisations working on separate timelines, goals, and assumptions. Training linked to innovation hubs or national AI accelerators often failed to meet the practical AI needs of entry-level learners.

This disjointedness had direct effects on learners. Experts described motivated adults giving up after encountering confusing signposting, inaccessible eligibility criteria, or unclear learning progression. In some communities, the cumulative effect of these frustrations led to a perception that AI training was ‘not for people like us’.

Curriculum responsiveness to emerging AI needs

Experts in education and training highlighted slow curriculum updates throughout education pathways, rigid continuing professional development (CPD) processes, and qualification alignment issues as notable obstacles to incorporating AI content promptly.

We have a huge gap not just now in terms of AI skills but also this gap is expected to continue in the future because at the moment, our curriculum, in schools and universities doesn’t have anything on [AI].

— FE digital learning lead

Tutors and programme leads noted that the lengthy process of incorporating tools into course content often led to learners encountering outdated platforms or interfaces.

Draft modules or micro-credentials designed to cover topics such as generative AI or responsible automation were delayed or deferred due to internal quality assurance schedules. One provider noted that curriculum changes were only reviewed annually, making it difficult to respond to sectoral needs in a timely manner.

In employer-led training, particularly in regulated sectors, experts noted that CPD accreditation systems did not always recognise experimentation or continuous updating of digital content. Even experienced educators faced barriers in adapting their teaching to reflect new developments, due to limited CPD time, absence of incentives, or lack of protected hours for updating content. Some experts also observed that innovation in AI education was often shaped by external compliance or funding directives, rather than internal drive.

Balancing quality assurance with agility is crucial as AI’s rapid evolution outpaces traditional curriculum cycles. Learners often encounter tools in practice before they are formally addressed in training, highlighting a gap between real-world experience and structured learning.

Despite widespread enthusiasm among educators and trainers, many felt that the surrounding systems lacked the flexibility to support dynamic content integration. These systems included:

  • governance
  • validation
  • institutional structures

This was not a reflection on individual competency, but rather a signal that broader curriculum systems should evolve to meet the demands of a fast-changing AI landscape.

Training costs and funding fragility

Experts identified affordability and financial continuity as fundamental barriers to sustained engagement in AI training, with many programmes limited by temporary funding cycles.

Community organisations, charities, and local learning providers consistently reported:

  • reliance on short-term grant funding
  • time-limited pilot schemes
  • ad hoc regional innovation pots

This funding instability created challenges in:

  • planning provision
  • hiring and retaining trained tutors
  • building trust with learners
  • delivering learning with continuity

Funding is a big barrier. If the funding is not there for education to upskill the community, often our communities get left out.

— Community organisation representative

In some regions, training was launched with enthusiasm, only to be discontinued or redirected when funding expired. Experts working with low-income adults noted that learners frequently disengaged when a programme ended without follow-on support, especially where participation had required a high level of literacy-building or outreach to begin with. This pattern was described as a ‘start-stop-start-stop’ cycle that undermined both learner motivation and provider effectiveness.

In addition to tuition costs, indirect costs were regularly cited as barriers. These included:

  • travel
  • digital devices
  • broadband access
  • childcare
  • missed income

Even where learners had access to free training places, these indirect costs meant many could not afford to attend consistently or complete the full programme.

Limited employer understanding of workforce AI skills needs

Experts involved in training provision and employer engagement consistently described a lack of clarity from businesses regarding what AI skills they needed. Even employers using AI tools in recruitment, marketing, or customer service were often unaware of what type of staff training would support safe and effective adoption. This lack of strategic foresight led to underinvestment in skills development, especially in SMEs and early-stage start-ups. AI training has been found to deliver measurable positive outcomes in SMEs, including increased digital competencies and a reduced digital divide.

Employers don’t know what AI skills are needed for their business, so the training providers act as consultants, tell me your business plan, your people, and we’ll design an AI plan based on that. But many don’t have one.

— Workshop participant working with SMEs on AI planning

Where AI training was provided to employees, it was often vendor-led, emphasising either theoretical learning or tool usage without broader context. One expert working with SMEs noted that their organisation had begun requiring an AI plan from employers before designing training, highlighting how rarely employers had defined their goals or staff needs in this area.

Several workshop participants emphasised the absence of consistent tools or frameworks to help employers diagnose their own AI skills needs. One community-based provider noted that many businesses ‘just don’t know where to start’ when it comes to understanding the practical workforce implications of adopting generative AI tools.

Others described the gap between employer enthusiasm and operational planning, saying that ‘there’s appetite, but no translation into what it means for roles, teams, or training budgets’. Experts also highlighted a missed opportunity to link AI adoption with broader workforce development strategies, particularly in sectors such as health and social care, where automation pressures were increasing but staff training lagged behind. This pointed to a clear demand for practical skills diagnostics and workforce planning tools that can guide employers from intention to action.

In larger organisations, experts noted that AI experimentation was sometimes led by innovation teams or department heads, with limited communication to HR or learning and development staff. This disconnection led to uneven internal awareness and poorly aligned skills strategies.

SMEs were more likely to explore AI out of necessity to reduce costs or automate processes, but lacked formal plans to assess staff readiness or plan ongoing training. In addition, the absence of internal AI champions, HR-led guidance, or sector-specific training frameworks was seen as contributing to confusion and underinvestment across multiple sectors.

These challenges echo national-level findings in the ‘Pissarides Review into the Future of Work and Wellbeing’. This highlights that many UK firms, especially SMEs, are not ready for responsible AI adoption due to limited strategic planning and weak internal governance. The review stresses that firms with high-involvement HR and skills strategies achieve better outcomes for both productivity and worker wellbeing, yet such practices remain the exception rather than the norm.

Without structured workforce planning around AI use, the benefits of innovation remain uneven, leading to:

  • underperformance
  • skills mismatch
  • missed opportunities for economic and social value