AI Skills for Life and Work: summary report
Published 28 January 2026
This report was authored by Jacob Bellamy, Jamie Douglas, James Wickett-Whyte, and Nathan Bransden at Ipsos.
This research was supported by the Department for Science, Innovation and Technology (DSIT) and the R&D Science and Analysis Programme at the Department for Culture, Media and Sport (DCMS). It was developed and produced according to the research team’s hypotheses and methods between November 2023 and March 2025. Any primary research, subsequent findings or recommendations do not represent UK Government views or policy.
Executive summary
Background
Ipsos was commissioned by the Department for Science, Innovation and Technology (DSIT) and the R&D Science and Analysis Programme at the Department for Culture, Media and Sport (DCMS) in November 2023 to conduct a research project exploring the AI skills needed for life and work in the UK. It was developed and produced according to the research team’s hypotheses and methods. Any primary research, subsequent findings or recommendations do not represent Government views or policy.
The research aimed to understand what skills are required, how these skills may transform as AI technology develops, the extent to which the UK workforce currently has these skills, and the support available to help people develop them. It also considered implications for Government, employers and private education and skills providers in addressing any gaps in provision.
This report draws together key findings from the six work packages that form this research to answer the core research questions; it constitutes a summary of work package seven, which aims to support knowledge transfer. We recognise that much has changed (and continues to change) in AI since this research was commissioned. This report therefore focuses on the broader themes and perspectives relating to AI skills and future AI skills needs, synthesising insights from economic analysis, academia, AI experts and stakeholders, employers, employees and the general public.
Methodology
The research involved a mixed-methods approach across multiple work packages:
- A Rapid Evidence Review of 139 pieces of relevant academic and grey literature
- Surveys of the general public (1,189 respondents) and employers (801 respondents)
- Analysis of AI-related job vacancies from 2021 to 2023 and US patent statistics from 2014 to 2023, synthesised into a skills forecast
- A Delphi study with 22 AI experts
- Public dialogue workshops with 45 members of the public
- Stakeholder workshops with representatives from seven industries
- A drivers analysis to synthesise the evidence review, the surveys, the job vacancies and patent analyses, and the Delphi study
Key findings
AI skills for life
The majority of individuals act as consumers of AI in their daily lives, interacting with AI-powered products and services without actively providing input, while a smaller proportion are users who work with AI to produce or create something by adding their input – such as a job applicant using ChatGPT to generate a cover letter. As AI develops, it is expected to become more “user friendly” and may lower the barrier to entry for using AI-powered products.
To both consume and use AI in life, skills centre around ‘skills for understanding’, such as understanding AI risks and threats, keeping information safe and private when using AI, judging the accuracy and reliability of AI outputs, and discerning AI-generated content. The public recognises the importance of these skills, and awareness of AI is near universal: analysis of the public survey conducted by Ipsos shows 97% have heard of AI and 73% report having used or consumed AI in the past month. However, confidence in using these skills is low: only 17% say they can explain AI in detail, 28% feel confident using AI in daily life, and 21% feel confident using AI at work.[footnote 1]
Our research suggests the lack of confidence in these skills stems from an “iceberg effect”: surface-level concerns about misinformation and other risks are driven by deeper feelings of discomfort, powerlessness, and fear of AI in general. This distrust leads to a reluctance to engage with AI or to recognise that skills to understand the risks are a necessity.
As AI technologies become more ubiquitous and user-friendly, the foundational skills for understanding AI are not expected to change significantly.
AI skills in the workplace
In the workplace, skills for understanding AI underpin the ability to use the technology effectively. These skills, such as judging the accuracy and reliability of AI outputs, act as a foundation for the effective use of AI across all levels of technical expertise – as such they are common across both the skills for life and for work. However, the specific technical AI skills required vary by type of user:[footnote 2]
- AI experts (such as AI research scientists or machine learning engineers) are creators and innovators with deep understanding of advanced AI concepts, requiring skills in data science, machine learning, and software development.
- AI specialists (such as software developers and engineers) apply AI in their domain, needing substantial technical acumen and data-centric skills.
- AI implementers (including business analysts and project managers) leverage AI tools and understand practical applications, requiring both technical understanding and domain knowledge.
- General AI users work with or use existing AI tools and technologies on a less structured basis.
Educational requirements and experience levels differ across these roles. For instance, preliminary best estimate projections from Lightcast (a proprietary database of millions of current and historical job postings) in our job vacancies analysis showed that 37% of AI expert positions request a PhD and 29% a Master’s degree, while AI specialist and implementer roles have slightly lower educational requirements but higher demands for experience.[footnote 3]
Demand for AI skills is expected to grow significantly, with jobs involving core AI activities projected to rise to 12% of the workforce (around 3.9 million people) by 2035, and a broader group of 9.7 million people in roles where AI is at least ‘adjacent’ to their work.[footnote 4] Growth is anticipated across sectors, with the largest increases in IT professional roles, research professionals, and business/finance positions. However, this is more likely to be driven by existing roles having AI responsibilities added to them than by new AI-specific jobs being created.
However, even with this projected growth, the vast majority of the workforce will fall into the implementer or user categories, underlining the importance of skills for understanding AI risks and interpreting outputs. It further suggests the increasing importance of these non-technical skills in effectively leveraging AI in the workplace.
Current AI skills gaps in the UK
The UK currently faces AI skills gaps among both the general workforce and AI specialists. AI expertise is highly concentrated: preliminary best estimate projections from Lightcast in the job vacancies analysis found that the majority of AI-related job vacancies in 2023 were advertised in London and the South East (60% for AI experts). While there is evidence of emerging clusters in other regions and cities, these tend to be areas that are already relatively well established as tech hubs.[footnote 5] High educational requirements potentially limit access to these specialised roles, while substantial salary premiums can limit SMEs’ ability to acquire the technical skills they need.
Among the general workforce, 56% of employers whose businesses are currently using or planning to use AI rate the level of knowledge in their business overall as ‘beginner’ or ‘novice’, and 61% of all employers have no staff currently working with AI (based on in-house analysis of the employer survey conducted by Ipsos).[footnote 6] Furthermore, business leaders face difficulties not only in identifying viable AI use cases and navigating the legal, regulatory, and ethical implications of AI implementation but also in communicating their decisions and actions related to AI to staff, stakeholders, and customers.
Skills levels vary considerably across demographics and sectors. Women are less likely than men to feel confident using AI at work (16% vs 26%),[footnote 7] and uptake of AI training also differs: among those who have heard of AI, 34% of men feel confident in their ability to judge the truthfulness of AI outputs compared to 21% of women, and women are less likely to have undertaken AI training (89% of women vs 79% of men have not).[footnote 8]
Developing AI skills: challenges and opportunities
Developing AI skills through tertiary education alone is deemed insufficient to meet the UK’s needs. While current efforts focus on university courses and doctoral training, a more comprehensive approach is needed. This should involve integrating age-appropriate AI education into primary and secondary curricula, expanding access through alternative pathways like apprenticeships and online courses, and supporting lifelong learning initiatives.
However, awareness and accessibility of existing training are low. Many employers struggle to identify relevant AI training for their businesses, hindering investment in upskilling. Half of the employers surveyed were unsure what AI training is relevant to their business, and existing training programmes often lack accessibility due to limited scale and geographic coverage.[footnote 9] This highlights the need for clearer guidance and resources tailored to different industries and job roles, as well as efforts to expand the reach of training opportunities.
Implications for government, employers and skills providers
The research findings reveal AI skills gaps in the UK workforce and population, encompassing both technical skills and a broader understanding of AI and its implications. The concentration of AI skills among a small, highly educated, and well-compensated proportion of the workforce risks creating a divide in AI accessibility and adoption, underscoring the importance of adapting and targeting AI skills training to ensure it is accessible to all. While not identical, this divide closely resembles the inequality issues that arise in relation to the digital divide – and could risk exacerbating existing inequalities. Policy solutions will therefore need to recognise the interdependencies between AI skills gaps and the digital divide.
Low levels of AI literacy and confidence among the general population emphasise the need for increased AI awareness and understanding to build trust and familiarity with the technology. As AI becomes increasingly embedded in everyday life and work, those lacking confidence or trust in using AI risk being left behind. Variation in AI skills levels across demographics and sectors highlights the importance of targeted policy interventions to prevent widening skills gaps and ensure an inclusive AI-driven economy.
Business leaders face challenges identifying viable AI use cases and navigating the legal, regulatory, and ethical implications of AI implementation. They require skills to navigate guidelines, frameworks, and support mechanisms for effective AI adoption. While businesses play a key role in upskilling and reskilling the workforce, they often lack the ability to identify skills needs and training options.
Our findings raise a number of questions for policymakers, in Government and beyond, which we have split into four core groups outlined below.
Table 1.1: Four core groups of questions raised by the findings from this research
| Skills development – education | Skills development – businesses and the workforce |
|---|---|
| How can the UK improve its support for AI skills development across all age groups, from primary school to lifelong learning? What specific guidance and resources are needed to effectively equip educators at all levels to teach AI-related concepts and skills? How can skills for understanding be taught to a sufficiently high level? | How can AI regulation balance encouraging innovation and allowing specific governance frameworks to be used, while supporting adoption with overarching regulation? How can Government incentivise and support businesses to invest more in AI training for their employees? How can Government support individuals and businesses to identify the skills they need and to access relevant and high-quality training? |
| Skills development outside education and the workplace | Long term planning |
| To what extent will it be / is it desirable that individuals develop core skills for understanding AI through use alone? To what extent can skills for understanding be combined with the development of technical skills (e.g. skills around ethics and bias)? How can Government help people understand where AI will be involved in their work and life? What alternative pathways to AI skills development, beyond traditional higher education, should be explored and supported? | How can the UK use an AI skills agenda to close, rather than exacerbate existing gaps – particularly around inclusion? How can the UK prepare its workforce for the projected growth in demand for AI skills over the next decade? How can the UK foster a culture of lifelong learning and adaptability to ensure its workforce can keep pace with the rapid evolution of AI technologies? How can government support a shift in public and business perceptions of AI? |
1. Introduction
This research was supported by the Department for Science, Innovation and Technology and the R&D Science and Analysis Programme at the Department for Culture, Media and Sport. It was developed and produced according to the research team’s hypotheses and methods. Any primary research, subsequent findings or recommendations do not represent Government views or policy.
1.1. Defining AI
While there are many ways to define artificial intelligence, for the purposes of this research, we have defined it as ‘a programme that can sense, reason, act and adapt’, with examples ranging from complex large language models, like ChatGPT, through to the autocorrect function on smartphones. This is seen in the following image, where it is compared with machine learning (like spam or junk filters on emails) and deep learning (like facial recognition technology).[footnote 10]
Figure 1.1: Defining AI, machine learning and deep learning
Throughout this research, it was clear that both stakeholders and the public expected these different types of AI to be a part of work and life in the future. This ranged from increased automation in the home, where, for example, energy consumption could be optimised, to the transformative potential of AI in healthcare diagnoses, through to concerns, particularly among stakeholders in the education sector, that education could become increasingly atomised with AI-generated lessons for individual students.
1.2. The research questions and what we did
This research was intended to answer or generate an evidence base to answer six key questions:
- What AI-relevant skills are needed for life and work?
- How may these transform as the technology develops?
- To what extent does the UK have or lack these needed skills in the labour force?
- To what extent is the UK supporting people to develop future relevant AI skills?
- What should Government, employers and private education and skills providers focus on to address any gaps in provision?
- What can the UK learn from international counterparts with regards to AI skills?
We took a mixed-methods approach to answering these questions, with qualitative stakeholder workshops, depth interviews with experts and a public dialogue, combined with quantitative surveys of businesses and the general public. Alongside the primary research, project partners at the Alan Turing Institute, Perspective Economics, University of Warwick and the Warwick Institute for Employment Research conducted a variety of analyses, including a Rapid Evidence Review of 139 pieces of academic and grey literature, a skills forecasting exercise that examined trends in relevant job vacancies in previous years, and an analysis of patents.
This work was split across ten focused work packages, as outlined below. There is a link to a detailed report for each work package.
Table 1.2: Summary of AI Skills for Life and Work work packages
| Work Package | Description |
|---|---|
| Rapid Evidence Review | A review of 139 pieces of relevant academic and grey literature (i.e. information not produced by commercial publishers such as research reports, blogs and policy documents) on the AI skills for work and life, focusing on the skills needed, whether the UK has these skills currently, and where there could be lessons learned from approaches in other countries. |
| Survey of the general public | A survey of 1,189 UK adults focusing on AI usage in day-to-day life, confidence in using AI, what skills are seen as the most important, and the potential impact of AI. Conducted between 29 February and 7 March 2024. |
| Survey of employers | A survey of 801 UK employers, focusing on the current AI skill levels across types of AI and business function, what training they have access to and what would be of interest to them, and the impact this would have on their recruitment and retention. Conducted between 19 March and 7 June 2024. |
| Analysis of patents | An analysis of US patent statistics from 2014 to 2023 to act as an indicator for how skills and knowledge are grouped, and what skills and knowledge are required for working with these AI technologies in different sectors. |
| Analysis of job vacancies | An analysis of UK job vacancies from January 2021 to December 2023, exploring trends, geographic distribution, job titles, roles and salaries associated with AI-related jobs. |
| Skills forecasting | An application of the technological trends from the patent and job vacancies analyses to the existing job and skills projections for the UK, from the latest Working Futures exercise. This provides an overview of how the UK labour market (and demand for AI skills) might change by 2035. (Working Futures offers a baseline projection of future employment prospects, but does not specifically incorporate AI – more information can be found at https://warwick.ac.uk/fac/soc/ier/researchthemesoverview/researchprojects/wf/) |
| Delphi study | A series of 22 in-depth interviews with experts in AI, followed up by a survey of participants, to gather insights from various experts, identify areas of agreement and disagreement, and pinpoint key areas for further research. Conducted in February 2024. |
| Public dialogue | A series of deliberative workshops with 45 members of the public taking part in a total of 13.5 hours of engagement including plenary presentations and breakout group discussions. The dialogue focused on 6 thematic areas: AI in the home and personal devices, AI in leisure and entertainment, AI in travel and transport, AI in education, AI in healthcare, and AI in work and careers. Conducted in February 2024. |
| Stakeholder workshops | A series of stakeholder workshops with representatives from seven industries, attending 3-hour online workshops to explore current and potential uses of AI in their industries, the skills required to support AI development, and the policy options recommended to support this. |
| Drivers analysis | A synthesis of elements of the research that had been carried out (including the rapid evidence review, patent analysis, general public and employer survey, job vacancies analysis and Delphi study of AI specialists). The analysis focuses on current trends in AI skills, identifying and prioritising key trends and patterns and different drivers of change. |
This report draws together key findings from across the project and the various work packages that sat within it. The findings from the work packages have been used to answer the core research questions.
2. What AI-relevant skills are needed for life and work, and how may these transform as the technology develops?
2.1. Key findings
- Skills for understanding how to interpret and evaluate the outputs of, and risks around, AI systems underpin the AI skills for work and life. These are also the skills that individuals highlight as most important to them, including:
  - Understanding the risks and threats associated with AI systems
  - Being able to keep information safe and private when using AI
  - Judging the accuracy/reliability of AI outputs
  - Discernment, or determining what is AI generated and what isn’t
- Individuals feel these skills are important due to low trust in AI: making individuals more comfortable with AI through learning or exposure is key to promoting skills development.
- Employers are currently focused on providing training related to using AI, and are not focusing enough on skills for understanding:
  - Employer incentives to train or develop skills tend to focus on skills for delivery/use. These are also the skills they think are less prevalent among their staff.
  - As such, there is a gap in the provision of skills for understanding. Focusing on embedding these skills at early educational levels may help build trust in AI, and better equip people to use it in the future.
- AI skills for life are expected to change less than those for work:
  - Individuals will mostly interact with AI as consumers. AI-powered technology is expected to become more user-friendly, lowering the “barrier” to use.
- In work, there will be increased demand for AI skills as the technology continues to grow:
  - This research suggests jobs involving AI activities could rise to around 3.9m (12% of the current workforce) by 2035. A broader group of 9.7 million people may be in AI-related occupations. However, this is more likely to be driven by existing roles having AI responsibilities added to them than by new AI-specific jobs being created.
- Sectors with the highest growth point to demand for high-end, technical skills – but the largest nominal increase in jobs involving AI will be among implementers:
  - There will be increased demand for highly technical skills such as those associated with programmers and software developers.
  - However, the largest nominal category of AI users will be implementers. Implementers leverage AI tools and understand the practical applications of AI in their field and business processes. This suggests that non-technical skills, such as understanding the risks of AI implementation and roll-out, may become more important. These are also skills which the private sector has less incentive to provide training in due to positive externalities, as trained individuals may move to other firms.
2.2. Most individuals will be both consumers and users of AI
Individuals can either be users of AI, consumers of AI, or both. We can categorise users and consumers as follows:
- Users: Those who work with AI to produce or create something by adding their own input. This includes a broad range of activities, from machine learning to Large Language Models (LLMs). For example, when using ChatGPT to write a covering letter when applying for a job, an individual must add their own inputs (a prompt, and any follow-on prompts asking for changes or amends). ChatGPT will not produce a desired output without any inputs from a user.
- Consumers: Those who use AI-powered products and services but do not actively add their own input. This could include AI-powered recommendation systems or reading news generated by AI. For example, when individuals browse their recommendations on a streaming service, they are not actively adding their own inputs, even if AI drives the recommendation algorithm. The service will then present an individual with recommendations without their asking it to. As such, individuals may be consumers of AI without realising it.
Some individuals will interact with AI as users, but all will interact as consumers. As such, the skills that individuals will need as consumers of AI are more universal. These skills also underpin the use of AI. We refer to these skills as “skills for understanding”.
2.3. Skills for use vs skills for understanding
Much like the split between users and consumers, we can separate AI skills for use and those for understanding. We categorise them as follows:
- Skills for use: Skills that enable individuals to use AI systems to achieve a goal or output.
- Skills for understanding: Skills that enable individuals to interpret and evaluate the outputs of AI systems.
As such, all individuals will need skills for understanding, even to use AI effectively and confidently.
The following illustrates what these skills for understanding look like. This list is not comprehensive, but does define some of the key categories (originally mapped against core skills identified in the digital skills framework, but translated to use of AI):
- Understanding the risks and threats associated with AI systems. This skill involves being aware of potential risks and threats that AI systems may pose, such as biased outputs or systems being used maliciously. It includes understanding how AI models can be misused by bad faith actors, or have unintended negative impacts, such as on the environment. Building this skill enables users to think critically about AI and identify potential issues.
- Being able to keep information safe and private when using AI. This skill includes existing digital skills translated to the AI context – for example, keeping passwords for AI services safe and private. It also includes understanding how AI services, like LLMs, use data. For example, understanding that any interaction with AI services can produce a record which others may be able to access.
- Judging the accuracy/reliability of AI outputs. This skill has two parts: first, understanding that not all content generated by AI is reliable; second, being able to evaluate which content generated by AI may, or may not, be reliable.
- Discernment, or determining what is AI generated and what isn’t.[footnote 13] Someone needs to be able to discern what is AI-generated to know when to consider risks, safety, and accuracy.
These skills for understanding form the basis of related skills for use. For example, an individual first needs to know that AI may generate unreliable content. They then need to be able to determine what is unreliable and what isn’t. The third step, which would constitute a skill for use, would be knowing how to use AI systems to generate better, more reliable results. In this way, skills for understanding act as a foundation for skills for use.
Key skills for understanding
- Understanding the risks and threats associated with AI systems
- Being able to keep information safe and private when using AI
- Judging the accuracy / reliability of AI outputs
- Discernment – what is AI generated, and what isn’t?
2.4. What types of skills are required for life?
Individuals are increasingly interacting with AI in day-to-day life, increasing the importance of skills for understanding
The integration of AI into daily living has increased significantly in recent years. Almost all individuals (97%) have heard of the term artificial intelligence, and three-quarters (73%) believed that they had used or consumed AI at least once in the past month[footnote 14] (General Public Survey). Most of the day-to-day interaction with AI is as a consumer (see Figure 2.1). There is an expectation that this type of interaction with AI will increase in use over the 2-to-5-year horizon, as AI tools and technologies become more ubiquitous.[footnote 15] As such, the skills required for AI in life tend to be interpreting and making judgments about AI outputs and applications, i.e. skills for understanding rather than skills for use.
Figure 2.1: AI tools used at least once in the past month
Base: all who have heard of AI (n=1155, Survey of the general public)
Individuals recognise these skills for understanding as important, but lack confidence in using them
People identify the skills related to understanding as the most important when considering the use or consumption of AI in life. Figure 2.2 maps relative importance of AI skills for life against relative confidence.
Figure 2.2: Importance of AI skills in life vs confidence
Base: all who have heard of AI (n=1155, Survey of the general public)
The bottom right-hand corner, which shows skills the public feel are important but have low confidence in, contains the skills for understanding. Skills for use, such as being able to communicate with AI systems or solve problems with them, are considered less important in life, and the general public reports relatively high confidence in them.
This lack of confidence in key skills for understanding stems from low trust in AI, which in turn is driven by underlying fears
In our wider polling, we observe an “iceberg effect” in perceptions. Surface-level reactions include perceptions such as lack of quality and misinformation – two-thirds of Britons are nervous about products and services using Artificial Intelligence,[footnote 16] and tend to think AI is more of a risk than an opportunity to wider society. Underneath, we hear strong emotions of discomfort and powerlessness. We think this is driven by a deep underlying fear of AI. Across our Public Dialogue, we found initial perceptions of AI to be fatalistic. This revolved around apocalyptic impacts on humanity, with individuals referencing the “terminator” outcome (one in which humans are wiped out by AI, based on the film series).[footnote 17]
This distrust in AI is reflected across the research conducted within this project, including the literature review, public and professional surveys, and deliberative workshops. Distrust will have a significant impact not only on citizens’ acceptance of AI-based products and services, but also on their willingness to provide data for commercial and public sector AI-based products and services. Without trust, and with a broader fear of AI, individuals express a reluctance or discomfort in engaging with AI. This restricts their ability to build AI skills.
Figure 2.3: Surface-level distrust in AI is driven by underlying fears
2.5. How may these life skills transform as the technology develops?
AI skills for life are not expected to change much as the technology develops
Skills for understanding, particularly those associated with risk and interpretation, may become more prevalent as AI technology grows. However, the nature of these skills is not expected to change as much. This is because:
- The majority of individuals act as consumers of AI in life, rather than users. The survey of the general public for this research found around a third of individuals expect to use AI in their daily life in the next 12 months.[footnote 18] These consumer skills are generally more “resistant” to changes in the underlying technology. For example, making judgements about the outputs of AI requires the same set of key skills irrespective of changes in the underlying AI technology.
- The Public Dialogue and Delphi study found that AI-powered technologies are expected to become more “user-friendly” as the technology develops.[footnote 19] This means that among those using AI actively in their daily life, the barrier to entry for using AI-powered products may get lower. This is particularly true for areas where AI changes how services work in the back end but does not affect the front end (such as apps, personal devices, or websites).
2.6. What type of skills are required for work?
Skills for understanding underpin AI in work, but skills for use are needed to take the next step
Skills for understanding act as a foundation for AI skills in work. As noted in section 2.3, many user-based skills can depend on some understanding of AI and its outputs. For example, in order to take active steps to ensure better data privacy and security when using an LLM, individuals must be aware that others may be able to access any data shared with these systems. These skills are most closely related to trust, and trust underpins willingness to use AI. In our General Public Survey, individuals who were not confident in key skills for life tended to use AI less at work.[footnote 20] In the Public Dialogue, feelings of misunderstanding AI, or low AI use, drove concern and mistrust.[footnote 21]
However, most individuals encountering AI in work will also require skills to interact with AI systems. These skills for using AI systems range significantly, from high-level machine learning to prompt writing. Across this research, specifically the Vacancies Analysis[footnote 22] and Drivers Analysis,[footnote 23] we have mapped these skills against the type of user. These users are:
- AI experts: Creators and innovators in AI, with a deep understanding of advanced AI concepts. Examples include AI Research Scientists, ML Engineers, and AI Architects. As per the Vacancies Analysis,[footnote 24] the predominance of roles within this category like Data Scientist (69% of job postings) and Software Engineer (20%) shows the core skills lie in data science, machine learning, and software development. Education requirements for these roles tend to be high, with 37% of roles requesting a PhD and 29% a Master’s degree. However, the relatively low experience requirements, with 67% of roles seeking under 3 years’ experience, may reflect the novelty of the field and demand for emerging talent.
- AI specialists: While not as deeply technical as expert roles, AI specialists need substantial technical acumen to apply AI in their domain. Software Developer/Engineers also constitute a large portion of these roles (21%), followed by data-centric titles like Data Scientist, Data Engineer and Data Analyst, indicating a broader church with more data-centric skills than among AI experts. Education requirements are still high, with 93% requiring at least a Bachelor’s degree and 27% a PhD. Experience demands are also greater than for expert roles, with 44% seeking over 4 years’ experience, suggesting these roles apply AI in a more established business context than cutting-edge research.
- AI implementers: This group leverages AI tools and understands the practical applications of AI in their field and business processes. Implementers cover a very broad range of roles, with examples such as Business Analysts, Project Managers and Consultants. This variety reflects the need for both technical understanding and domain knowledge to effectively leverage AI tools in diverse contexts. While education requirements are slightly lower than for specialists (91% Bachelor’s or above), they are still substantial – although the requirements are less focused on data and software skills. The experience levels are comparable to specialist roles, with 40% requiring over 4 years’ experience.
- AI users: Those with skills to work with or use existing AI tools and technologies. All of the above categories are ultimately users of AI, although different roles will emphasise different skills. Those who sit exclusively within the ‘AI user’ category use AI on a more limited, less structured basis. Most of their AI use will be through generating outputs from LLMs and other AI products.
We can think of the types of AI users as a pyramid – seen in Figure 2.4 – with less common skills at the top, and more common skills at the bottom. Even then, the vast majority of those in work will fall into the implementers / users category (only around 0.6% of current UK job postings are for expert or specialist roles).
Figure 2.4: Types of AI user
Different roles will need different skills. For example, AI experts may require complex neural network skills to perform their role, while AI users who are not experts or specialists will need a set of skills centred on using AI to help with their existing tasks, such as analytics or project management. However, the skills that link everyone who uses AI professionally are the skills for understanding. For example:
- A machine learning expert, looking to mitigate “hallucination” within an AI model, will need to be able to judge accuracy and reliability of AI outputs to determine the effectiveness of their amends.
- A data scientist, using AI to analyse patient healthcare data, needs to understand how AI systems keep, process and store patient data.
- A consultant, looking to use AI to conduct a literature review, will need to understand biases and risks associated with LLMs to better guide their prompts.
The different skills for use that are particularly important for different roles are shown in Figure 2.5.
Figure 2.5: Different skills for use mapped against type of role
Among those using AI in work, employers rate their staff’s competency in key AI skills as higher than employees do
Figure 2.6 compares the percentage of workers who feel confident in key AI skills vs the percentage of employers who would rate their staff as “good” or “very good” in these areas. The biggest discrepancies are in the skills for understanding – these are skills that employers are generally confident their employees have, but that individuals feel uncertain about. For example, around 6 in 10 employers rate their staff’s ability to keep information secure and private while using AI as “good”, while only 2 in 10 employees feel confident using this skill.
Figure 2.6: How would you rate the staff with technical or non-technical AI skills across the following areas? % good vs How confident do you feel in your ability to use these skills in your day-to-day life? % confident
Employers are currently focused on training related to using AI, and under-providing training on judging accuracy and reliability of its outputs
Figure 2.7 maps the percentage of employers who have provided training in a skill against the percentage of employees who think that skill is important when using AI in work. The data have been normalised so that the largest value becomes 100% and the smallest 0%, allowing us to compare the relative importance of these skills. The bottom-right quadrant highlights skills that employees feel are important when using AI in work but for which employers are providing relatively little training. Judging accuracy and reliability is the skill in this quadrant, highlighting a lack of training focus in this area. This feeds back into a broader narrative about trust and confidence as a key underlying factor, even for those in work. And we know that the majority of those in work will not be using highly technical skills – these skills have the broadest base of usage.
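As an illustration of the normalisation used in Figure 2.7, the short sketch below applies min-max scaling to a set of placeholder percentages. The skill labels and values here are hypothetical, chosen purely to show the calculation; they are not the survey results.

```python
def min_max_normalise(values):
    """Rescale values so the smallest becomes 0 and the largest 100 (min-max scaling)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]  # avoid division by zero when all values are equal
    return [100 * (v - lo) / (hi - lo) for v in values]

# Hypothetical example: percentage of employees rating each skill as important
importance = {"judging accuracy": 62, "communicating with AI": 48, "sharing outputs": 35}
normalised = dict(zip(importance, min_max_normalise(list(importance.values()))))
print(normalised)  # {'judging accuracy': 100.0, 'communicating with AI': 48.1..., 'sharing outputs': 0.0}
```

Plotting the normalised figures on both axes is what allows the relative positions of the skills, rather than their absolute percentages, to be compared in the quadrant chart.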
Employer incentives to train or develop skills tend to focus on skills for delivery and use. The Employers Survey showed that among the top four skills businesses are providing training on, three were related to use (being able to communicate with AI systems, being able to share and use the outputs of AI systems, and being able to solve problems with AI systems).[footnote 25] These are also the skills they think are less prevalent among their staff. As such, we might expect the gap in provision around judging accuracy and reliability to persist even as AI usage grows.
Figure 2.7: Percentage of employers who have provided training in a skill (normalised) vs the percentage of employees who think that skill is important when using AI in work (normalised)
2.7. How might these work skills transform as the technology develops?
The demand for AI skills in work is expected to grow significantly
The expansion of AI-based products and services has led to an expansion in demand for AI skills. Evidence of this growth from across this research includes:

- The proportion of new patents filed in the US relating to AI nearly quadrupling between 2014 and 2023, from 5.2% to 20.3%.[footnote 26]
- More than one in five (21%) employers saying their demand for AI skills will increase in the next 12 months, compared to 3% saying it will decrease.[footnote 27]
And this growth is expected to continue. According to this research’s skills projections, jobs directly involving AI activities could rise from 158,000 in 2024 to 3.9 million by 2035 – about 12% of the current UK workforce of 30.4 million.[footnote 28] Most of this will not be new jobs, but changes in the nature of existing jobs, and nearly all of this growth will be in white-collar, non-manual work. A broader population of 9.7 million could be in roles where AI is at least “adjacent” to their work. These are roles where working with AI is not a core skill, but one that may influence or irregularly affect their work.[footnote 29]
The changing nature of AI skills for work depends on sector and type of AI user
We are already seeing increased demand for skills among AI experts and specialists, such as those associated with programmers and software developers.[footnote 30] However, there will be broader impacts of the expansion of AI in other areas, and not all sectors and types of AI user are expected to grow at the same rate. The occupational groups with the largest nominal growth include:

- IT professionals, like programmers and software developers, IT specialist managers, and IT business analysts and systems designers
- Research professionals and business/finance roles, like financial analysts, management consultants, and business project managers
The largest proportion of this growth is expected to be among AI implementers. This suggests that non-technical skills, such as understanding the risks of AI implementation and roll-out, may become more important. These are also skills which the private sector has less incentive to provide training in, as discussed in section 2.1.
It is also worth noting that while some jobs may shift toward AI, the broader profession may still see job declines overall. Our skills forecasting indicates most AI employment will be in Professional and Associate Professional roles, but some unit groups within these categories, like Finance Managers and Business Sales Executives, are projected to have overall job reductions even accounting for AI growth. These projections reflect a combination of digitalisation and organisational streamlining in these areas, as well as the adoption of AI tools that reduce the need for middle-management oversight and traditional sales functions. Niche AI skills alone may not insulate jobs from broader occupational declines.
3. Does the UK have or lack the needed skills in the labour force?
3.1. Key findings
The UK workforce faces significant AI skills gaps, both in technical expertise and broader understanding of AI’s implications. Key findings include:
- Concentration of AI skills. The UK’s AI workforce is currently small, with expertise concentrated among a select few industries and locations. High educational requirements may limit entry to AI roles and access to technical AI skills, while salary premiums may limit SMEs’ ability to acquire the technical skills they need.
- Low AI literacy and confidence. The general workforce lacks skills for understanding and using AI safely, with low levels of AI literacy and confidence common among both the public and employees.
- Identifying use cases and skills needs. Business leaders face difficulties in identifying viable AI use cases and understanding the legal, regulatory, and ethical implications of AI implementation.
- Variation in AI skills. AI skills levels vary across demographics and sectors, with a heightened risk of skills gaps widening rapidly in some parts of the workforce.
- The findings emphasise the need for targeted policy interventions, close collaboration between education, training providers, and industry, and the development of comprehensive skills standards covering both technical and non-technical AI skills to address the AI skills gap and support the workforce in the transition to an AI-driven economy.
3.2. AI skills among the general workforce
Our research reveals a lack of skills for understanding AI and using AI safely and responsibly. This is against a background of low levels of AI literacy and AI confidence, common to both the general public and the general workforce. Our General Public Survey[footnote 31] shows:

- only 17% of UK adults say they can explain AI in detail
- 28% of the UK public feel confident in using AI in their daily life
- 21% of those in work feel confident using AI in the workplace.
Qualitative research with the public, employers and employees suggests AI literacy, confidence, use and understanding are intricately linked and mutually reinforcing. While AI literacy and AI confidence may not be considered ‘skills’ as such, our findings suggest they are key factors in how individuals relate to AI technologies, AI adoption, and ultimately the acquisition and development of AI skills.
Employees
According to our General Public Survey (29 February to 7 March 2024), three-quarters (73%) of employees report not having used AI in the past month, and only a small proportion feel confident about keeping their information safe and private while using AI (19%) or about being able to register and apply for AI services or training (23%) in their day-to-day life.[footnote 32] Similarly, in our Employer Survey, 56% of employers whose businesses are currently using or planning to use AI rate the level of knowledge in their business overall as ‘beginner’ or ‘novice’, and 61% of all employers have no staff currently working with AI.[footnote 33]
The low levels of AI use and low confidence in interacting with AI suggest the majority of people in the workforce have limited opportunities to become more familiar with the technology. Consequently, they feel they struggle to acquire the necessary AI skills to use or adopt AI effectively, safely and responsibly, both in and out of the workplace.
Business leaders
Employers also have a key role to play in providing AI skills training and upskilling or reskilling opportunities for their workforce. To do this effectively, employers need the skills to be able to identify the AI skills needs of the business, identify and deliver appropriate training, and evaluate the quality and cost-effectiveness of training.
According to our Employer Survey, senior leadership teams have a good understanding of the overall skills needs of their teams (77% of employers rate it as ‘very good’ or ‘fairly good’).[footnote 34] However, only six in ten (62%) employers whose business is currently using AI feel their senior leadership team has a good understanding of how AI is being used within their business. In addition, only one-third of employers rate as ‘good’ their senior leadership teams’ understanding of how AI is being used in their industry or sector (36%), and their ability to identify new opportunities to use AI within the business (34%). This suggests there is a perception among employers that their senior leadership teams are not well equipped to evaluate how AI is being used within their firm, nor to understand or exploit new AI opportunities.
In our Stakeholder Workshops, business leaders report challenges in identifying viable AI use cases.[footnote 35] As a result, they also report challenges in understanding the relevant legal, regulatory and ethical factors they need to consider, and how to communicate their decisions or actions to staff, stakeholders and customers. The lack of such skills may lead businesses to run afoul of laws or frameworks around data privacy, data protection or copyright, and so risk damaging their reputation or losing out to competitors. These risks may in turn lead businesses to adopt a risk-averse approach to AI adoption, potentially losing out on the benefits of AI (such as greater productivity or efficiency) and limiting opportunities to develop AI skills on the job.
These findings emphasise the importance of developing non-technical skills for understanding the use, benefits, limitations, risks and implications of AI. These skills are particularly important for businesses who may not have or need technical AI expertise, but who nonetheless could benefit from adopting AI technologies. This highlights the need for guidance and examples for their particular industry, so that successful approaches can be shared and learned from.
Variation in AI skills among the general workforce
The level of AI skills is not uniform across the workforce, but varies by many demographic and firmographic factors (such as the size or sector of the firm). Such variation is not necessarily unique to AI skills. However, the pace of AI change and its anticipated transformational impact on society and the economy mean there is a heightened risk of AI skills gaps widening rapidly in some parts of the workforce more than others, such as digitally and economically excluded groups. This in turn has implications for how ready or willing parts of the workforce are to adopt AI, as shown in our Rapid Evidence Review.[footnote 36]
According to our Employer Survey, almost half (45%) of employers expect their business model will rely on or at least use AI in the next 3 to 5 years, with expectations highest in sectors like ‘information and communications’, ‘professional, scientific and technical’ and ‘business services’.[footnote 37] Interest in AI training also tends to be higher in these sectors than others. This suggests that interest in AI training is related to expected need, and highlights the importance of raising awareness of the potential uses of AI in all sectors. For example, among employers who are interested in undertaking AI training in future, using AI to reduce repetitive tasks and improve business efficiency is the most popular area (chosen by 86%). This was also commonly highlighted as a key benefit of AI by stakeholders from various sectors in our Stakeholder Workshops.[footnote 38]
Our General Public Survey has also shown that AI confidence and uptake of training differ by gender. For example, among those in work, men are more likely than women to feel confident in their ability to use AI in the workplace (26% of men say they are confident versus 16% of women).[footnote 39] Similarly, among those who have heard of AI, 34% of men feel confident in their ability to judge the truthfulness of information provided by AI systems, compared with 21% of women. At the same time, women are less likely to have undertaken AI training. Among those who have heard of AI, nine in ten (89%) women have not undertaken AI training, compared to 79% of men. The types of training that those aware of AI want also differ, with women more likely than men to want training on understanding what AI is (34% versus 26%), while men are more likely than women to want training on how to use AI for automation (35% versus 25%).
These findings highlight the variation within the UK workforce in terms of baseline levels of knowledge and awareness of AI, skills needs and desire for training, and confidence in engaging with AI and its outputs. This in turn emphasises the importance of interventions targeted at overcoming the specific challenges in different parts of the workforce. Without these, the UK risks widening the current skills divides and limiting the market’s potential to adopt AI technologies.
3.3. AI skills among the AI workforce
The UK’s AI workforce skills landscape is currently characterised by a concentration of expertise among a relatively small proportion of the total UK workforce. With AI poised to transform industries across the economy, the current concentration of skills among a select few risks creating bottlenecks and limiting the nation’s ability to fully capitalise on AI’s potential.
As of 2024, our labour market projections estimate approximately 158,000 people are employed in expert, specialist, and implementer roles combined, representing a very small proportion of the overall UK labour force (c. 0.5%).[footnote 40] According to our Job Vacancy Analysis, vacancies for expert, specialist and implementer AI skills are similarly low as a proportion of total job postings, with AI-related job postings making up c.1.7% of all UK job postings between 2021 and 2023.[footnote 41] However, focusing on the AI workforce reveals evidence for growing demand, and a widening supply gap, which is also reflected by our Rapid Evidence Review.[footnote 42]
Our Job Vacancy Analysis shows that AI vacancies have high educational requirements and substantial salary premiums across all UK regions.[footnote 43] Virtually all AI expert roles (99%) required at least a Bachelor’s degree, with 37% requesting a PhD and 29% a Master’s degree. In terms of salaries, median advertised salaries for AI expert roles had a 42% premium over wider IT roles, with the premium for AI specialists and AI implementers being 25% and 14% respectively. The pace of AI innovation may be partly driving this, with specific technical skills becoming obsolete within 3 years as AI technologies evolve increasingly quickly, according to our Rapid Evidence Review.[footnote 44] In fact, in our Job Vacancy Analysis, 67% of AI expert vacancies request less than 3 years’ experience (where known), potentially suggesting a race for foundational technical AI skills that can be adapted, developed and honed on the job for specific applications.[footnote 45]
Demand for technical AI skills varies by region, sector and skill type, as evidenced by our Job Vacancy Analysis, and our Rapid Evidence Review[footnote 46] shows that women and minorities remain underrepresented in AI fields, highlighting differences among the AI workforce when compared to the general population. According to our Job Vacancy Analysis, the majority of AI-related vacancies in 2023 were advertised in London and the South East (60% for AI experts), although there is evidence of elevated recruitment activity in certain regions (such as the East of England, North West, South West, Scotland, and Yorkshire and the Humber) and in certain cities (such as Cambridge, Bristol, Oxford, Manchester, and Reading).[footnote 47] Key sectors demanding AI skills include the wider IT and tech sector (for example, programming and data science roles), as well as Education, Finance, and Consulting. In terms of skill type, top skills for AI experts include knowledge of Python (68% of AI expert vacancies), Data Science (64%), Machine Learning (63%), SQL (29%), and cloud platforms like AWS (18%) and Azure (11%).
Overall, our research on AI skills among the AI workforce highlights strong demand for technical AI skills. At the same time, variation in demand by region, sector and skill type emphasises the importance of close collaboration between education, training providers and industry to ensure sufficient AI skills development that is aligned to real-world needs. A key challenge is ensuring AI skills do not become overly concentrated in certain regions, sectors or types of business, to facilitate access to talent.
4. Is the UK supporting people to develop relevant AI skills?
4.1. Key findings
While there is a current focus on tertiary education, primarily through university courses and doctoral training, this is deemed insufficient to address broader societal needs. Several key gaps and challenges emerge:
- Broaden the skills agenda in education. The emphasis on tertiary education should be extended to include foundational learning at primary and secondary school levels. This will help instil a level of basic AI literacy in the wider population.
- Overemphasis on work skills. The current focus, potentially a result of market forces, prioritises AI skills for employment. While this is understandable, considering the relative recency of AI and the need for an accompanying skillset, it highlights the need to consider the broader societal implications of AI and the need for general AI literacy among all citizens.
- Limited awareness and accessibility of training. While training opportunities exist, awareness among employers and individuals is low. Many employers struggle to identify relevant AI training for their businesses, hindering investment in upskilling. Furthermore, existing training programmes often lack accessibility due to limited scale and geographic coverage.
- Regional disparity. Demand for AI skills (and especially expert AI skills) is concentrated in London and the South East, creating regional inequality in access to opportunities and potentially exacerbating existing economic imbalances. This necessitates varied support levels across different regions to address specific needs.
- Declining employer investment. Despite the recognised need for upskilling, investment in employer-provided training is declining, further widening the skills gap.
4.2. The UK is currently focused on skills in tertiary education
We found in the Rapid Evidence Review for this research that the UK’s current focus on developing AI skills within tertiary education (such as university courses in data science and AI, or funding for Centres for Doctoral Training) is a crucial first step.[footnote 48] However, it is not sufficient to address the widening skills gap across all levels of society. While investment in university courses and doctoral training programmes is necessary, so too is foundational learning at primary and secondary school levels. This tertiary focus also overwhelmingly prioritises skills for work, overlooking the broader societal implications and the need for widespread AI literacy.
To move beyond the current focus on tertiary education, the UK needs a more comprehensive, multi-pronged approach based around three core actions – outlined below and in Figure 4.1.
-
Early Education Intervention.
Introduce age-appropriate AI education in primary and secondary schools. This could involve integrating AI concepts into existing STEM curricula, developing dedicated AI modules, and providing teachers with the necessary training and resources (a key demand from stakeholders in the education sector in our Stakeholder Workshops[footnote 49]). While the rapid evolution of AI presents a challenge for curriculum development, other countries, particularly in North America and Asia, have demonstrated that this is achievable.
-
Broadening Access.
Expand AI education and training opportunities beyond tertiary institutions. This includes promoting alternative pathways such as apprenticeships, vocational training programmes, and online courses. These alternatives can cater to a wider range of learners, including those who may not follow the traditional university route. Addressing regional disparities in access to AI learning opportunities is also crucial.
-
Lifelong Learning Initiatives.
Support lifelong learning in AI to ensure that the workforce can adapt to the evolving demands of this rapidly changing field. This could involve government-funded upskilling programmes, employer incentives for employee training, and accessible online resources for continuous learning.
Figure 4.1: Three core actions to broaden the skills focus in the UK
4.3. Training is available but awareness is low and employers struggle to navigate it
While there is a growing consensus that training and upskilling are crucial for navigating an AI-driven future, awareness and accessibility of relevant training remain significant hurdles for both individuals and employers. Although employer-provided training exists, we found in this research’s Rapid Evidence Review that it is not sufficiently comprehensive and that investment in training is declining.[footnote 50] This is underlined by the finding in the Employer Survey for this research that only 11% of employers had staff undertake training in the last 12 months.[footnote 51] This leaves a gap in preparing the workforce for the demands of an AI-integrated work environment.
When looking at the barriers to taking up training in the Employer Survey carried out as part of this research, a lack of clarity was highlighted alongside more typical barriers to training, such as cost and time.[footnote 52] Half (50%) of employers were unsure what AI training was relevant to their business. This uncertainty prevents many employers from investing in upskilling even when they recognise the importance of AI, and highlights the need for clearer guidance and resources tailored to different industries and job roles.
Lifelong learning is paramount to keeping pace with the rapid advancements in AI, which affect both personal and professional spheres. We found in the Rapid Evidence Review that initiatives such as skills partnerships between employers and educational providers offer a promising approach to continuous learning.[footnote 53] These partnerships are currently limited in scale and geographic reach, but expanding them and ensuring wider accessibility would help foster a culture of continuous learning and adaptation to AI’s evolving landscape. This includes providing more accessible and geographically distributed training opportunities, as well as looking beyond traditional educational models to incorporate alternative pathways such as apprenticeships and online courses.
4.4. There is already a level of regional disparity developing in demand for AI
The increasing demand for AI skills is creating a growing regional disparity, concentrated in London and the South East of the UK and particularly pronounced for expert AI skills. However, emerging clusters are evident in other areas such as Cambridge, Bristol, Oxford, Manchester, and Reading. Notably, all of these locations have an established tech industry in their local economy.[footnote 54]
This uneven distribution presents challenges for regional equality and fairness as AI-related opportunities arise, potentially exacerbating existing economic imbalances. Combined with the varied level of AI literacy found in the Public Survey conducted for this research,[footnote 55] this suggests the level, and type, of support may need to vary across different regions to maximise the potential benefits of AI – and avoid certain regions being left behind.
5. Implications for Government, employers and skills providers
The research findings highlight gaps in AI skills in the UK workforce and amongst citizens, encompassing both technical AI skills and a broader understanding of AI and its implications. Whilst some of the technical skill requirements are not yet clear (in contrast to digital skills, for which there is a clear existing framework[footnote 56]), the research does identify a need for skills for understanding. In particular, these skills focus on understanding the risks and threats associated with AI systems, how to keep information safe and private when using AI, how to judge the accuracy and reliability of AI outputs, and how to determine what is AI generated. However skills demands develop in the coming years, these skills will remain essential.
The concentration of AI skills among a small proportion of the workforce, coupled with high educational requirements and salary premiums, risks creating a divide in AI accessibility and adoption across regions and across the economy by sector and firm size. This underscores the importance of adapting and targeting AI skills training to ensure it is accessible to all parts of the workforce. While not identical, this divide closely resembles the inequality issues that arise in relation to the digital divide, and could exacerbate existing inequalities. Policy solutions will therefore need to recognise the interdependencies between AI skills gaps and the digital divide.
Low levels of AI literacy and confidence among the general population emphasise the need for increased AI awareness and understanding to build trust and familiarity with AI technologies. As AI becomes increasingly embedded in everyday life and work, those with low confidence or trust in using AI are at risk of being left behind. Furthermore, the variation in AI skills levels across demographics and sectors highlights the importance of targeted policy interventions to prevent widening skills gaps and ensure an inclusive AI-driven economy.
Business leaders face challenges in identifying viable AI use cases and navigating the legal, regulatory, and ethical implications of AI implementation. They require the skills to navigate guidelines, frameworks, and support mechanisms to effectively adopt and utilise AI technologies. Additionally, businesses have a key role in upskilling and reskilling the workforce, but often lack the skills to identify skills needs and navigate training options.
Technical AI skills are expected to change rapidly as AI technologies develop, with demand expected to outstrip supply despite the UK education system’s focus on technical skills in tertiary education. There is also high demand for skills in understanding AI risks and interpreting AI outputs, which are considered highly important but often overlooked in formal training. These skills are expected to be more resilient to technological advances and therefore less likely to change over time, but individuals and businesses often find it difficult to identify and access the necessary training.
A more comprehensive approach than focusing on tertiary education is needed to meet the UK’s skills needs: one that integrates AI education into primary and secondary curricula, expands access through alternative pathways such as apprenticeships and online courses, and supports lifelong learning initiatives. This is particularly important for developing skills for understanding AI, which are foundational for the effective use of AI across all levels of technical expertise.
Our findings raise a number of questions for policymakers, in Government and beyond, which we have split into four core groups outlined below.
Table 5.1: Key questions for Government, employers and skills providers
| Group | Key questions |
|---|---|
| Skills development – education | How can the UK improve its support for AI skills development across all age groups, from primary school to lifelong learning? What specific guidance and resources are needed to effectively equip educators at all levels to teach AI-related concepts and skills? How can skills for understanding be taught to a sufficiently high level? |
| Skills development – businesses and the workforce | How can AI regulation balance encouraging innovation and allowing specific governance frameworks to be used, while supporting adoption with overarching regulation? How can Government incentivise and support businesses to invest more in AI training for their employees? How can Government support individuals and businesses to identify the skills they need and to access relevant and high-quality training? |
| Skills development outside education and the workplace | To what extent will individuals develop core skills for understanding AI through use alone, and to what extent is this desirable? To what extent can skills for understanding be combined with the development of technical skills (e.g. skills around ethics and bias)? How can Government help people understand where AI will be involved in their work and life? What alternative pathways to AI skills development, beyond traditional higher education, should be explored and supported? |
| Long term planning | How can the UK use an AI skills agenda to close, rather than exacerbate, existing gaps – particularly around inclusion? How can the UK prepare its workforce for the projected growth in demand for AI skills over the next decade? How can the UK foster a culture of lifelong learning and adaptability to ensure its workforce can keep pace with the rapid evolution of AI technologies? How can Government support a shift in public and business perceptions of AI? |
-
Fenton et al (forthcoming), ‘AI Skills for Life and Work: General Public Survey Findings,’ a report prepared by Ipsos for DSIT and DCMS ↩
-
Donaldson et al (forthcoming), ‘AI Skills for Life and Work: Job Vacancy Analysis,’ a report prepared by Ipsos for DSIT and DCMS ↩
-
Donaldson et al (forthcoming), ‘AI Skills for Life and Work: Job Vacancy Analysis,’ a report prepared by Ipsos for DSIT and DCMS ↩
-
Donaldson et al (forthcoming), ‘AI Skills for Life and Work: Job Vacancy Analysis,’ a report prepared by Ipsos for DSIT and DCMS ↩
-
Donaldson et al (forthcoming), ‘AI Skills for Life and Work: Job Vacancy Analysis,’ a report prepared by Ipsos for DSIT and DCMS ↩
-
Fenton et al (forthcoming), ‘AI Skills for Life and Work: Employer Survey Findings,’ a report prepared by Ipsos for DSIT and DCMS ↩
-
Fenton et al (forthcoming), ‘AI Skills for Life and Work: General Public Survey Findings,’ a report prepared by Ipsos for DSIT and DCMS ↩
-
Fenton et al (forthcoming), ‘AI Skills for Life and Work: General Public Survey Findings,’ a report prepared by Ipsos for DSIT and DCMS ↩
-
Fenton et al (forthcoming), ‘AI Skills for Life and Work: Employer Survey Findings,’ a report prepared by Ipsos for DSIT and DCMS ↩
-
https://khalidgharib.medium.com/what-is-machine-learning-4e36563f97c9 ↩
-
Only 17% of adults reported that they can often or always recognise when they are using AI (Rapid Evidence Review) ↩
-
Fenton et al (forthcoming), ‘AI Skills for Life and Work: General Public Survey Findings,’ a report prepared by Ipsos for DSIT and DCMS, (henceforth ‘Ipsos AI Skills General Public Survey’) ↩
-
Seaman et al (forthcoming), ‘AI Skills for Life and Work: Public Dialogue Findings,’ a report prepared by Ipsos for DSIT and DCMS, (henceforth ‘Ipsos AI Skills Public Dialogue’) ↩
-
https://www.ipsos.com/en-uk/people-are-uncomfortable-ai-replacing-human-judgement-high-stakes-decisions ↩
-
Ipsos AI Skills Public Dialogue ↩
-
Ipsos AI Skills General Public Survey ↩
-
Clemence et al (forthcoming), ‘AI Skills for Life and Work: The Expert View On The Impact Of Artificial Intelligence On Skills Requirements In Life And Work In The UK,’ a report prepared by Ipsos for DSIT and DCMS, (henceforth ‘Ipsos AI Skills Delphi Study Expert Engagement’) ↩
-
24% of those who were unconfident judging the accuracy and reliability of AI outputs never used it in work, compared to 16% for those who were confident. The equivalent figures for those unconfident in understanding the risks and threats associated with AI systems were 30% and 14%. ↩
-
Ipsos AI Skills Public Dialogue ↩
-
Donaldson et al (forthcoming), ‘AI Skills for Life and Work: Job Vacancy Analysis,’ a report prepared by Ipsos for DSIT and DCMS, (henceforth ‘Ipsos AI Skills Job Vacancy Analysis’) ↩
-
Wickett-Whyte et al (forthcoming), ‘AI Skills for Life and Work: Drivers Analysis Report,’ a report prepared by Ipsos for DSIT and DCMS ↩
-
Ipsos AI Skills Job Vacancy Analysis ↩
-
Fenton et al (forthcoming), AI Skills for Life and Work: Employer Survey Findings, a report prepared by Ipsos for DSIT and DCMS, (henceforth ‘Ipsos AI Skills Employer Survey’) ↩
-
Bosworth et al (forthcoming), ‘AI Skills for Life and Work: Evolution of AI Technologies, Knowledge and Skills: An Exploratory Study Using Patent Data,’ a report prepared by Ipsos for DSIT and DCMS, (henceforth ‘Ipsos AI Skills Patent Analysis’) ↩
-
Ipsos AI Skills Employer Survey ↩
-
ONS (2024), Employment in the UK: November 2024 ↩
-
Bosworth et al (forthcoming), ‘AI Skills for Life and Work: Labour Market and Skills Projections,’ a report prepared by Ipsos for DSIT and DCMS, (henceforth ‘Ipsos AI Skills Labour Market and Skills Projections’) ↩
-
Ipsos AI Skills Job Vacancy Analysis ↩
-
Ipsos AI Skills General Public Survey ↩
-
Ipsos AI Skills General Public Survey ↩
-
Ipsos AI Skills Employer Survey ↩
-
Ipsos AI Skills Employer Survey ↩
-
Seaman et al (forthcoming), ‘AI Skills for Life and Work: Stakeholder Workshop Findings,’ a report prepared by Ipsos for DSIT and DCMS, (henceforth ‘Ipsos AI Skills Stakeholder Workshops’) ↩
-
Proctor et al (forthcoming), ‘AI Skills for Life and Work: Rapid Evidence Review,’ a report prepared by Ipsos for DSIT and DCMS, (henceforth ‘Ipsos AI Skills Rapid Evidence Review’) ↩
-
Ipsos AI Skills Employer Survey ↩
-
Ipsos AI Skills Stakeholder Workshops ↩
-
Ipsos AI Skills General Public Survey ↩
-
Ipsos AI Skills Labour Market and Skills Projections ↩
-
Ipsos AI Skills Job Vacancy Analysis ↩
-
Ipsos AI Skills Rapid Evidence Review ↩
-
Ipsos AI Skills Job Vacancy Analysis ↩
-
Ipsos AI Skills Rapid Evidence Review ↩
-
Ipsos AI Skills Job Vacancy Analysis ↩
-
Ipsos AI Skills Rapid Evidence Review ↩
-
Ipsos AI Skills Job Vacancy Analysis ↩
-
Ipsos AI Skills Rapid Evidence Review ↩
-
Ipsos AI Skills Stakeholder Workshops ↩
-
Ipsos AI Skills Rapid Evidence Review ↩
-
Ipsos AI Skills Employer Survey ↩
-
Ipsos AI Skills Employer Survey ↩
-
Ipsos AI Skills Rapid Evidence Review ↩
-
Examples of how these cities and regions position themselves can be found: ↩
-
Ipsos AI Skills General Public Survey ↩
-
https://www.gov.uk/government/publications/essential-digital-skills-framework ↩