Sectoral overview
Published 29 October 2025
Applies to England
This section provides insights from employers, educators, and community organisations on AI adoption across the UK, drawing on their expertise across the 10 priority sectors identified by the ‘UK’s Modern Industrial Strategy’ and Skills England.
Skills England has chosen to focus on 10 priority sectors, which are:
- the 8 growth-driving sectors identified in the government’s Industrial Strategy Green Paper
- the construction sector
- the health and adult social care sector
Construction was included due to its importance in meeting the government’s housebuilding targets and the substantial skills shortages in the sector. The health and adult social care sector was selected as it faces persistent skills shortages and growing demand driven by an ageing population.
Stakeholders reported varying levels of AI adoption across sectors, along with different challenges, occupational outcomes, priority skills, and barriers to training.
The findings highlight sector-specific use cases for AI, training gaps, and workforce needs, providing valuable input for place-based skills planning and sector-specific AI upskilling efforts. The depth of evidence varies across sectors. In sectors such as Health and Social Care, and Digital and Technology, rich insights were gained, while in Defence and Life Sciences the available evidence is more limited.
Sector-specific AI skills needs and barriers
This table provides a summary of the sector-specific AI skills needs and barriers.
| Sector | AI adoption patterns | AI skills gaps areas | Main barriers |
|---|---|---|---|
| Digital and Technology | Automation, coding help, predictive analytics, content checks, personalised user experience (UX) | Using low-code tools, explaining AI outputs, designing for inclusion, using AI responsibly in products | Training too technical, poor support for women and non-technical staff, limited options for older workers, career returners, and people outside main hubs |
| Health and Social Care | Triage, diagnostics, admin tasks, early warning systems; NHS aims to be the most AI-enabled health system in the world. | Ethics, interpreting AI outputs, teamwork across clinical, admin, and care roles | Poor digital infrastructure, system problems, lack of training, digital exclusion |
| Financial Services | Fraud checks, monitoring, trading, credit scoring, compliance | Governance, ethics, interpreting AI outputs, especially in compliance and legal teams | Time pressure, limited tailored continuing professional development (CPD), ignoring non-technical risks, siloed teams |
| Advanced Manufacturing | Predictive maintenance, process control, robotics, real-time analytics | Model training, predictive maintenance, interpreting AI outputs, ethical use and implications of automation, inclusive design | Entry-level shortages, ageing workforce, small and medium-sized enterprises (SMEs) lacking funds, digital tools, and training |
| Construction | Drone surveys, planning, retrofit, virtual reality (VR) and augmented reality (AR) safety tools, Building Information Modelling (BIM) for green design | Drones, BIM, using AI on site, ethical use of surveillance, inclusive design | Low digital skills, limited CPD, digital exclusion, limited SME capacity |
| Professional and Business Services | Human resources (HR) recruitment, workforce management, legal reviews, contract checks | Auditing bias, compliance, communicating AI outputs in legal work | Limited CPD, limited SME support, fewer training options for smaller local firms than large city firms |
| Creative Industries | Generative AI for content, campaigns and storytelling | Prompt writing, copyright, originality, ethical storytelling | Limited formal CPD, copyright uncertainty, poor training access for freelancers and small firms |
| Clean Energy Industries | Predictive maintenance, energy efficiency, grid forecasting, storage, trading, carbon capture | Optimisation, fault detection, dashboard interpretation, identifying bias in algorithms | High training costs, lack of role-specific training, regional gaps; large utilities move faster than SMEs and local groups |
| Defence | Logistics, intelligence, threat detection, simulation, battlefield support, predictive maintenance, cyber defence | Interpreting AI outputs, risk-based skills, ethics, transparency, accountability | Few staff trained in AI, gaps between civilian and defence training, hard to bring in outside experts |
| Life Sciences | Drug discovery, genomics, diagnostics, pharma production | Bioinformatics, diagnostics, reading outputs, teamwork, data transparency, fairness, compliance | Training too focused on long degrees, poor SME access, few AI trainers, unclear standards |
Digital and Technology
As stated in the ‘Digital and Technologies Sector Plan’, the UK’s digital and technology sector plays an important role in supporting economic growth. However, workshop experts highlighted that challenges such as uneven skills provision, lower diversity, and limited support for non-technical roles risk holding the sector back.
In the Digital and Technology sector, AI is currently being used in areas such as:
- intelligent automation
- personalised UX
- coding assistance
- predictive analytics
- content moderation
This is reshaping tasks at all job levels, from entry-level Quality Assurance (QA) analysts using AI test tools, to mid-level project managers aligning timelines with AI forecasts, to senior leaders shaping responsible AI strategy. The AI skills needed span technical (such as low-code development), non-technical (such as AI output interpretation), and responsible domains (such as inclusive design).
The 2025 ‘Nash Squared Digital Leadership Report’ found that 51% of the surveyed tech leaders reported an AI skills shortage, making AI skills the most sought-after tech skills.
Workshop participants reported that demand for technical roles such as machine learning engineers and data scientists is growing, but that critical gaps also exist in applied AI skills for adjacent roles (roles which are not explicitly AI-focussed but increasingly require some AI literacy or practical use of AI tools, such as project management and quality assurance).
Essential AI skills missing in this sector include:
- the ability to apply tools which require little to no knowledge of coding
- explaining AI outputs for operational decision-making
- ensuring inclusive design practices
- embedding responsible AI principles across product development cycles
Stakeholders also reported that many professionals in adjacent roles are not getting the AI skills training they need, limiting their ability to effectively work with AI development teams or integrate AI tools into their workflows. This is especially true for workers in early and mid-career stages.
An AI tool is a software application or platform that uses artificial intelligence techniques such as machine learning, natural language processing, or computer vision to perform tasks that typically require human intelligence.
Workshop participants reported that AI training programmes in this sector are highly technical and assume prior coding or engineering experience, creating entry barriers for individuals from non-technical backgrounds.
The 2024 ‘Diversity in Tech’ report found that, among the organisations surveyed, only 29% of the UK tech workforce are women and a small proportion are non-binary, suggesting untapped potential for talent. Experts also noted insufficient support for older workers, career returners, and those from low-income backgrounds who could transition into digital roles but are not served by current training pathways.
Flexible and inclusive AI skills pathways are essential for enabling progression into both technical and hybrid roles (roles combining technical expertise with domain-specific, managerial or creative responsibilities). This could help to overcome current barriers and support a wider range of talent within the sector. These should include regional provision models, accessible learning formats, and targeted support for underrepresented groups. This would ensure that AI competencies in the Digital and Technology sector are developed equitably across the workforce.
Health and Social Care
Healthcare providers are exploring the use of AI in:
- triage systems
- diagnostic support
- voice recognition
- administrative automation
AI is seen as a way to help ease pressure on frontline services, particularly in mental health and primary care.
In 2023, the then UK government allocated £21 million to introduce AI tools in the NHS for faster diagnosis of conditions such as cancers, strokes, and heart disease. More recently, the ‘10 Year Health Plan’ set the ambition to make the NHS the most AI-enabled health system in the world, signalling strong national commitment to embedding AI into future healthcare delivery. Examples of innovation include the Health AI Global Regulatory Network and AI-enabled early warning systems, which are being piloted to improve patient safety and accelerate regulatory learning.
These developments raise important ethical and operational considerations, such as ensuring fairness and accessibility for patients with non-standard speech patterns or atypical health profiles. This means that skills related to AI ethics are essential.
Research participants described how healthcare professionals increasingly need AI skills beyond direct clinical application, including:
- safe integration of AI systems into triage and diagnostics
- evaluating AI recommendations
- maintaining trust during AI-assisted care
There is also a growing need for collaborative skills across clinical, administrative, and managerial roles for overseeing AI decision-making.
Experts emphasised the importance of cross-role AI training for clinicians, managers, and administrators to ensure consistent standards of safety, trust, and ethical decision-making. Workforce development plans should reflect role-specific skill needs while also supporting collaborative AI engagement across multidisciplinary teams.
Training should prioritise:
- inclusive adoption
- ethical oversight
- evaluation across different roles and settings
In line with the ‘National AI Strategy’, social care organisations are beginning to explore AI in areas such as:
- digital alert systems
- safeguarding
- workload planning
- administrative support
While uptake is growing, workshop experts highlighted that many barriers remain logistical rather than cultural, such as:
- system interoperability
- limited training provision
- uneven digital infrastructure
Maintaining human oversight is particularly important for decisions involving vulnerable individuals. Improved training in ethical risk management and reliability testing will help ensure AI supports, rather than replaces, professional judgment. These insights highlight the need for tailored training for social care workers across both public and voluntary providers.
It is important that training pathways are developed to enhance digital literacy and support contextual decision-making using AI, with a focus on ethical use and the safety of end users. This responsibility is shared across industry, education providers, and employers, as well as government-led initiatives. Proposals from experts included embedding ethical frameworks into care sector qualifications and developing flexible CPD modules on the use of AI in safeguarding and care planning. These interventions could support safe integration while reinforcing trust in digital innovation. A good example is the new care leader qualification introduced in April 2025, which concentrates on AI-enabled technologies such as fall-alert systems, video telecare, and automated note-taking.
Financial Services
AI is changing the way work is done in Financial Services. Firms are applying AI in:
- fraud detection
- transaction monitoring
- algorithmic trading
- customer segmentation
- regulatory compliance
- credit risk scoring and customer service automation, where interest is expanding
A survey of 118 financial firms in the UK, conducted by the Bank of England and Financial Conduct Authority in 2024, found that 75% of firms surveyed were already using AI, with a further 10% planning to use AI over the next three years. Adoption is highest in international banks (94%) and insurance firms (95%), while financial market infrastructure shows lower uptake (57%).
Early-career staff, such as those in customer service and support teams, now use AI tools in their daily tasks, for example by:
- reviewing system alerts
- using AI-generated insights to support client interactions
Mid-level staff, particularly in areas such as operations, compliance, and risk, are increasingly responsible for:
- checking how AI systems are performing
- identifying potential limitations
- communicating results clearly to colleagues and regulators
Senior managers are expected to oversee the responsible use of AI, ensuring it:
- is transparent
- meets regulatory standards
- aligns with the organisation’s long-term goals
AI is no longer a purely technical innovation: it is influencing how organisations operate, make decisions, and respond to regulatory and public expectations.
Workshop experts highlighted a persistent gap in AI competency across the workforce, pointing to a widespread skills deficit, particularly in:
- governance
- ethics
- critical interpretation and communication
These gaps are greatest in compliance and legal teams, many of whom are tasked with overseeing AI integration but lack the tools or training to do so effectively. The same holds for senior leadership, where there is often an overreliance on technical specialists and a limited understanding of the wider organisational and societal implications of AI use. While data scientists and technical staff receive some training, staff in other roles have few opportunities to develop non-technical and responsible AI skills.
Barriers to upskilling in the Financial Services sector include:
- limited time and capacity for training
- lack of tailored CPD programmes aligned with regulatory and ethical requirements
- a general underestimation of the non-technical and ethical risks associated with AI usage
In fast-paced environments where regulatory compliance is paramount, many firms prioritise getting AI systems to work over deeper understanding or critical oversight of those systems. In addition, isolated team structures can prevent knowledge-sharing between technical and governance staff, reducing opportunities for collaborative learning and organisational learning loops.
To address these gaps, workshop participants and industry stakeholders emphasised the need for more structured and inclusive workforce development plans. These could include:
- the introduction of AI awareness training targeted at board-level and senior executives
- integration of AI ethics and compliance modules into mandatory CPD schemes
- the development of cross-functional training pathways for mid-career professionals working at the intersection of technology, risk, and regulation
Advanced Manufacturing
AI is applied in many areas in the UK Advanced Manufacturing sector, including:
- automation
- predictive maintenance
- real-time analytics
- robotics
Yet, the sector faces a growing AI skills gap. The shortage of skills is worsened by an ageing workforce, with The Engineering Trust reporting that nearly 20% of the UK’s engineering professionals are expected to retire by 2026. The ‘2024 Workforce Census’ report confirms that a large portion of the workforce is set to retire in the coming years, representing a significant loss of knowledge and experience.
This is a particular problem in sectors such as:
- automotive
- aerospace
- precision engineering
Workshop experts explained that while large manufacturers may have the resources to trial and embed AI training, smaller firms face significant logistical and financial challenges.
Research shows that SMEs based outside large urban centres struggle with:
- limited broadband
- constrained access to financial services
- weaker peer-support networks for AI adoption
This affects their ability to train staff in role-specific applied AI skills. Existing employer training, where available, is often too abstract or too technical, failing to reflect the operational realities of fast-paced, high-volume factory settings.
The AI skills challenge goes beyond technical knowledge. Workshop participants emphasised the need for Advanced Manufacturing workers to engage with broader competencies required for responsible and inclusive AI use. These include:
- evaluating the employment and ethical implications of automation
- understanding AI biases in quality control systems
- managing change in multi-generational teams
The most pressing AI skills gaps in Advanced Manufacturing fall into three broad categories.
First, technical application skills, including:
- training AI models for predictive maintenance
- using AI systems for process control
- integrating real-time analytics into production
Second, non-technical AI skills, such as:
- interpreting AI outputs for operational decision-making
- adapting workflows in response to AI insights
- communicating changes to frontline teams
Third, responsible and ethical AI skills, such as:
- understanding the ethical implications of AI
- ensuring transparent workforce communications
- embedding inclusive design thinking into Advanced Manufacturing systems
Addressing these challenges requires a rethinking of AI upskilling strategy at the organisational level. Effective pathways should emphasise:
- practical, role-specific learning, such as simulation-based training
- embedded AI tools that support on-the-job learning
These should be complemented by foundational training in:
- digital literacy
- data interpretation
- critical thinking
This will ensure that workers at all levels, not just engineers or data scientists, are equipped to navigate the AI-enabled manufacturing environment.
Construction
AI applications are slowly emerging across several areas in the Construction sector, including:
- drone-assisted surveying for land assessments
- AI-supported project planning and logistics
- retrofit design using AI-aided modelling
- the growing use of VR and AR for on-site safety simulations
The effect of AI on occupations in Construction is still emerging.
For entry-level and frontline roles, AI is beginning to influence task design through:
- digital planning
- real-time feedback tools
- automated hazard detection
Workshop experts explained that many workers lack the foundational digital skills to interpret or act on AI insights.
At the mid-career level, particularly among site supervisors and planners, AI is reshaping workflows through increased use of data modelling and automated scheduling, but the ability to use these tools is uneven.
Among managerial and sustainability-oriented roles, AI requires the ability to:
- assess retrofitting options
- interpret complex environmental data
- guide teams through AI-supported project transitions
There is limited support for professionals in these roles to gain relevant training.
Despite AI’s potential to enhance safety, sustainability, and productivity across infrastructure and housing projects, the sector remains one of the least digitally mature. AI uptake remains constrained by workforce digital literacy and limited integration into vocational pathways.
The ‘2024 Lloyds Bank UK Consumer Digital Index’ found that 65% of UK Construction workers were unable to complete all 20 essential digital work tasks outlined in the ‘Essential Digital Skills Framework’. This is the lowest level across all sectors, indicating a significant digital skills gap within the industry. Applied training that supports on-site roles and addresses local infrastructure projects will be important for inclusive AI adoption in this sector.
Workshop insights also revealed an interest in using AI to support environmental performance and sustainability, particularly through AI-enhanced BIM tools. However, these technologies are primarily adopted by the main contractors on large-scale construction or infrastructure projects, while SMEs often lack the necessary expertise, training, and resources to embed AI-informed BIM into standard site practices.
The most pressing AI skill needs in Construction can be grouped into three domains.
First, technical AI skills, including:
- using drones for site mapping
- interacting with AI-supported project planning tools
- understanding data from BIM systems
Second, non-technical AI skills, such as:
- interpreting AI outputs and applying them to on-site decisions
- communicating change across teams
Third, responsible and ethical AI competencies, such as:
- understanding environmental effects of automation
- ensuring ethical use of surveillance tools such as site monitoring drones
- embedding inclusive design thinking into sustainability-oriented projects
At present, training across all three AI skills domains is patchy and poorly integrated into national Construction qualifications or apprenticeships.
Several sector-specific workforce challenges were identified.
First is digital exclusion, particularly among:
- older workers
- those in trades-based roles
- firms in rural or economically disadvantaged regions
Second is a lack of applied CPD. Most existing AI training is not tailored to Construction or provided in accessible, on-site formats. Smaller firms often lack capacity or awareness of emerging training schemes.
Third is green transition pressure: demand for retrofit and low-carbon design in particular is exposing gaps in AI-supported environmental planning skills.
These mismatches between emerging role requirements and current training provision risk deepening inequalities and hindering sector-wide productivity. To address these issues, workers need training designed for their specific jobs and locations. Workshop participants emphasised their preference for modular, flexible learning formats that integrate digital and AI tools into real-world Construction contexts, particularly for roles such as:
- retrofit coordinators
- planners
- forepersons
Partnerships between employers, training providers, and regional green skills initiatives are critical. These partnerships should aim to embed AI training into practical provision models using:
- digital simulations
- on-site coaching
- blended modules that align with net zero priorities
Developing inclusive CPD modules for both new entrants and existing workers can also support a more equitable and sustainable adoption of AI technologies. Workforce development strategies should ensure that the benefits of digital innovation are distributed across all tiers of the construction industry. They should not only aim to upskill individuals but also support employers in embedding AI tools responsibly.
Professional and Business Services
Many medium to large UK businesses use AI for recruitment and workforce management.
Across the HR profession, AI is reshaping roles at multiple levels:
- entry-level staff increasingly rely on AI tools for initial screening and candidate tracking, reducing administrative burden but requiring digital fluency and awareness of algorithmic limitations
- mid-level professionals are expected to manage AI-assisted recruitment processes and monitor tool performance
- senior managers are expected to oversee this collaboration and ensure AI tools are used responsibly
While these AI tools can improve efficiency, concerns were raised about the perpetuation of historical biases, particularly those affecting candidates who are:
- from ethnic minority groups
- women
- disabled
HR professionals need to develop skills in:
- auditing AI hiring platforms
- detecting algorithmic bias
- managing AI-assisted candidate evaluation
There is also a growing need for regulatory compliance skills related to automated decision-making.
Experts participating in this research commented that current training pathways offered by employers are insufficiently focused on these areas, particularly in SMEs, and should be expanded to include practical modules on fairness monitoring and responsible AI tool deployment. They also highlighted that embedding AI literacy into HR qualifications and expanding training modules on algorithmic bias, ethical oversight, and responsible innovation may strengthen workforce preparedness. Additionally, practical case-based learning, supported by regional employer networks, was identified as a valuable method to build AI literacy among HR teams.
In legal services, experts described growing interest in AI applications for:
- document review
- contract analysis
- knowledge management
Workshop experts explained that many of the UK’s largest law firms are using AI, while some smaller law firms are exploring the technology. However, adoption remains cautious due to concerns about:
- accuracy
- accountability
- the professional risks associated with AI-generated content
While technical integration may improve slowly, the need for legal professionals to critically interpret and communicate AI outputs, and to understand data governance and risk mitigation, is becoming more pronounced. These developments highlight the importance of cross-disciplinary training and ethical literacy across regional legal services provision.
Legal professionals increasingly need applied skills in:
- assessing AI-generated content
- managing digital evidence
- interpreting data outputs within legal frameworks
Skills gaps include:
- contract analysis using AI
- critical interpretation and communication of legal AI tools outputs
- managing client interactions involving AI outputs
- cross-functional legal-tech skills, supported by scenario-based training to prepare lawyers for AI-integrated workflows
Despite growing interest, several barriers restrict effective AI upskilling in this sector. Many small HR consultancies and legal firms lack the time and financial flexibility to release staff for structured CPD programmes. In some cases, regional disparities also affect access, with urban firms having greater exposure to AI training pilots than firms in smaller towns or rural areas. This results in inconsistent upskilling opportunities and slower competency growth across the broader sector.
Workshop experts suggested that tailored training should include scenario-based exercises on using AI tools for:
- discovery
- compliance
- client communication
Greater coordination with legal education bodies could ensure that emerging AI competencies are embedded into training pathways for new entrants to the profession.
Creative Industries
The Creative sector is undergoing significant change led by generative AI tools. These systems are being used for:
- content creation
- campaign planning
- digital storytelling
Many freelancers and SMEs are using these tools without structured training, leading to concerns about quality assurance, originality, and reputational risk.
A report by the Creative Industries Policy and Evidence Centre found that nearly 60,000 Creative Industries workers were not fully proficient in their roles. The same report highlighted that 65% of hard-to-fill vacancies in the industry were attributable to skills shortages, particularly related to the introduction of new technology, including AI.
Creative workers urgently need practical skills in using generative AI tools for:
- creative content production
- prompt-based campaign development
- ethical digital storytelling
Many lack the ability to critically evaluate AI outputs for quality, originality, and audience alignment.
The ‘AI, Copyright, and Productivity in the Creative Industries’ policy brief shows that sector-specific training should develop skills in:
- AI-enhanced creativity
- digital rights management
- managing the reputational risks associated with synthetic media
The ‘Time to ACCCT’ report highlights a risk of creators increasingly relying on generative AI tools without a clear understanding of legal obligations or the origins of AI-generated content, heightening risks around reputational damage and copyright infringement. This gap is compounded by inconsistent guidance across the sector and limited training resources tailored to creative professionals. To address these challenges, there is a need for coordinated efforts to develop processes that will support ethical and commercially viable use of AI in creative production, such as:
- standardised training pathways
- practical AI toolkits
- clear copyright frameworks
The ‘Creative Industries and Gen AI’ report highlights growing concerns about the uneven adoption of AI across the UK’s Creative Industries, particularly for freelancers and microbusinesses, who face barriers to:
- formal training
- ethical guidance
- career progression
Many freelancers and SMEs also lack access to formal CPD opportunities in areas such as:
- AI ethics
- copyright compliance
- data transparency
Most learning on AI is informal, leaving gaps in:
- quality control
- intellectual property (IP) understanding
- ethical literacy
The workshop experts reported an urgent need for accessible, scenario-based training and regional peer-support networks to ensure responsible and inclusive AI use. These findings underscore the importance of targeted upskilling pathways that reflect the sector’s fragmented structure and support ethical innovation across all career levels.
There is a need for dedicated support for creative professionals, particularly those in micro-enterprises and freelance settings. Additional recommendations included the development of creative AI toolkits tailored to independent professionals, as well as mentorship and peer learning schemes coordinated through regional creative hubs.
Clean Energy Industries
Clean Energy Industries are increasingly using AI tools to support the UK’s transition to net zero. Applications include:
- predictive maintenance of clean energy assets
- optimisation of energy efficiency
- forecasting of clean energy generation
A report by the International Energy Agency shows that emerging use cases are also appearing in:
- energy trading
- storage optimisation
- carbon capture monitoring
However, the ‘Skills England: Driving growth and widening opportunities’ report shows that adoption remains uneven. Large utilities and grid operators are moving faster, while SMEs, local authorities, and community-scale initiatives often lack the digital capacity to benefit fully.
AI is reshaping many operational and technical roles across Clean Energy Industries, yet workforce development and structured training have not kept pace. AI and automation may reduce demand for certain routine roles, such as maintenance staff or traditional data analysts. Organisations are struggling to attract workers with the applied skills needed to support adoption. Formal CPD opportunities remain scarce across most job levels, particularly for mid-level and operational staff.
Professionals in Clean Energy Industries increasingly require targeted technical AI skills in areas such as:
- real-time optimisation
- fault detection
- predictive maintenance
- dashboard interpretation
- bias correction in local forecasting models
At the same time, responsible AI skills are essential. Workers at all levels should be able to identify bias in retrofit algorithms, ensure transparency in forecasting systems, and uphold ethical governance in automated decision-making. Senior professionals, in particular, need competencies to align AI adoption with decarbonisation and resilience goals, but ethical awareness should extend across the whole workforce rather than being confined to leadership roles.
AI training gaps are most acute for:
- SMEs
- retrofit supply chains
- local government
Barriers include:
- high costs of specialist courses
- limited regional provision
- a lack of role-specific pathways
Regional disparities are particularly pronounced. Urban innovation hubs have greater access to training, while rural and economically disadvantaged areas report minimal provision. Without action, these inequalities risk slowing progress towards the UK’s net zero commitments.
AI itself contributes to rising energy demand, adding further strain to the electricity grid. This raises ethical questions about balancing AI deployment with clean energy goals. At the same time, there are opportunities to explore sustainable practices, such as re-using waste heat from data centres for low-carbon district heating. Importantly, emerging evidence from the ‘impact of growth of data centres on energy consumption’ report suggests that in some cases AI can be more energy-efficient than the physical alternatives it replaces. For example, AI translation and ebooks use far less electricity than their human or physical counterparts, highlighting the potential for AI to deliver net environmental benefits if managed well. Embedding these considerations into workforce training and sector planning will be vital to ensure that AI adoption supports, rather than undermines, the UK’s transition to net zero, maximising the environmental benefits that AI can deliver.
Addressing these challenges requires accessible and inclusive CPD that reflects the diversity of roles across Clean Energy Industries. Training provision offered by employers should include:
- scenario-based learning
- applied modules on AI for energy forecasting and optimisation
- pathways that embed AI awareness into Clean Energy Industries qualifications
Tailored toolkits, peer-learning schemes, and AI literacy programmes for non-specialist roles would support more equitable adoption across the workforce. The AI-Energy Council could help scale these initiatives by coordinating public–private partnerships and ensuring consistency across regions.
Defence
The UK’s ‘Defence Artificial Intelligence Strategy’ (2022) aims to make Defence AI-ready by improving skills, partnering with industry and academia, and upgrading technology.
The ‘Developing AI capacity and expertise in UK defence’ report shows that AI is increasingly being used in:
- logistics
- intelligence analysis
- autonomous systems
- threat detection
- simulation-based training
Information from techUK shows that AI is being explored for:
- battlefield decision support
- predictive maintenance of Defence assets
- cyber defence automation
This underscores the strategic importance of AI across a range of operational and support functions.
The ‘Defence Industrial Strategy: Making Defence an Engine for Growth’ report highlights that AI can be used to secure battle-winning advantage and to establish an end-to-end process for scaling innovative solutions across the supplier ecosystem.
These developments are reshaping workforce expectations at all levels.
Operational staff are now required to interact with AI-informed systems, adapting their roles to incorporate data-informed insights and semi-automated tools.
Analysts should be prepared to work alongside machine learning models, interpreting outputs to inform critical decision-making.
Senior leaders, meanwhile, need a working fluency in:
- AI strategy
- risk management
- the implications of emerging technologies
Despite this shift, most current Defence roles are not formally structured around AI competencies, creating a gap between operational needs and existing workforce models.
AI skill needs span technical, non-technical, and responsible and ethical domains. Defence personnel are expected to understand how to:
- interpret AI outputs
- assess the reliability and potential bias in AI models
- operate effectively within secure, highly regulated data environments
There is a particular shortage of competencies linked to responsible AI, given the high-risk nature of many Defence applications. These shortages are seen especially in:
- accountability
- transparency
- critical interpretation and communication of AI outputs
There is also a shortage of AI-skilled professionals who are familiar with the specific requirements of Defence settings.
Meanwhile, civilian AI training pathways are not always aligned with the classified, operational, and ethical constraints of Defence work, making it difficult to integrate external expertise without extensive retraining. These barriers to upskilling compound the sector’s wider challenges.
To address these issues, the Defence sector would benefit from targeted interventions including:
- cross-sector secondments
- scenario-based CPD aligned with operational realities
- the development of clear AI career pathways
Strengthening leadership capacity in AI strategy, ethics, and systems thinking is critical. Long-term, collaborative efforts between Defence organisations, academia, and industry will be essential to building a skilled and sustainable AI-ready workforce equipped to meet national security needs.
Life Sciences
The Life Sciences sector is undergoing significant changes through the growing integration of AI across:
- research
- diagnostics
- Advanced Manufacturing
- healthcare
According to the Wellcome ‘Unlocking the potential of AI in drug discovery’ report, AI is increasingly used to:
- accelerate drug discovery
- automate genomic analysis
- improve diagnostic accuracy
- streamline production processes in pharmaceutical and biotech environments
These developments are supported by national strategies that position the UK as a global leader in Life Sciences innovation, as outlined in the SQW ‘Data to Early Diagnosis and Precision Medicine ISCF Challenge Evaluation’ report. However, while large firms and research centres are embedding AI, smaller companies and regional clusters often struggle to access the skills, infrastructure, and training required to benefit.
The sector has an urgent need for technical AI skills in areas such as:
- computational biology
- bioinformatics
- robotic process automation
- AI-supported diagnostics
Alongside these, there is growing demand for non-technical AI skills, including:
- the ability to interpret AI model outputs
- communicating findings across interdisciplinary teams
- embedding AI insights into strategic decisions
Responsible and ethical AI skills are becoming increasingly important. These include:
- transparency in data use
- understanding algorithmic bias
- compliance with regulatory frameworks
These are particularly important for patient data, clinical trials, and personalised medicine.
Despite these needs, workshop experts explained that Life Sciences employers face persistent skills mismatches and workforce gaps. Smaller firms, which make up the majority of UK Life Sciences sites, often struggle to access training that is tailored to their specific roles and operational models. Existing provision is often designed around long-form degree qualifications, which are poorly suited to technicians, mid-career professionals, and time-constrained SMEs. This is compounded by barriers such as limited access to specialist AI trainers and unclear guidance on emerging standards.
Participants emphasised that to support AI adoption across the Life Sciences sector, there is a need for more flexible and accessible professional development pathways, with targeted, modular training that aligns with evolving job roles. Scalable apprenticeships and short courses covering AI applications in research, Advanced Manufacturing, and regulatory settings would support both new entrants and existing staff. Industry collaboration, including coordinated efforts between employers, training providers, and public agencies, will be essential to build capacity and promote inclusive growth.
These efforts should be directed towards making training available to more people, opening diverse routes into science, technology, engineering and mathematics (STEM), and ensuring AI is used ethically and securely.