Research and analysis

AI Skills for Life and Work: Delphi Study

Published 28 January 2026

This report was authored by Michael Clemence, Antonia Lopez, and Iona Kininmonth at Ipsos

This research was supported by the Department for Science, Innovation and Technology (DSIT) and the R&D Science and Analysis Programme at the Department for Culture, Media and Sport (DCMS). It was developed and produced according to the research team’s hypotheses and methods between November 2023 and March 2025. Any primary research, subsequent findings or recommendations do not represent UK Government views or policy.

1. Introduction

1.1. Objectives of the Delphi study

Ipsos conducted a Delphi study which explored the expert view on the impacts of the adoption of Artificial Intelligence (AI) on skills requirements in work and life, now and in the future. The future is inherently uncertain, and this is particularly true for a fast-moving space such as AI. This makes it an ideal candidate for a Delphi study. The principle of a Delphi study is that a group of experts will have a better grasp of likely futures than any one individual, and that ideas from all contributors have equal value. By asking the convened expert group to assess and rank a wide range of ideas, Ipsos were able to highlight areas of consensus or disagreement and identify key areas of focus for future work packages in the wider research project.

1.2. Approach

The process started in February of 2024 when Ipsos conducted a series of 22 hour-long interviews with Artificial Intelligence (AI) specialists selected for their range of skills and perspectives. They came from a diverse group of organisations including international bodies, academic institutions, tech companies, professional societies, telecommunications, professional services firms, labour unions, financial services, pharmaceutical companies, online learning platforms, and policy research organisations.

These interviews explored areas of disagreement and consensus amongst the expert group about the current and future impacts of AI on life and work. A list of preliminary issues relevant for the next ten years related to the impact of AI in life and work was derived from analysis of the interviews.

After the preliminary analysis, a quantitative survey was sent to the experts to gather their feedback on these issues and their future development. The survey asked for their views on the urgency of policy intervention on each issue. It also presented hypotheses for the future based on the key issues, asking for the experts’ view on how likely they thought these were to occur. The raw results of this survey are found in appendix 1 of this report.

The results from the survey were used to refine and determine the key issues arising from the impact of AI on skills requirements in life and work, now and 10 years into the future. This report presents the key findings of the full analysis. The findings from this stage have fed into the focus of future work packages in the wider research project.

2. Summary of findings

2.1. Improving AI literacy is essential for a more inclusive AI future

  • AI literacy relies on building basic digital skills: Recognising that basic digital skills are a prerequisite for AI literacy, experts agreed that improving fundamental digital competencies is key to increasing currently low levels of AI literacy, particularly for digitally excluded communities.

  • AI technology has the potential to democratise access to knowledge: Despite experts’ concern that the digital divide is a threat to AI literacy and skills development, they also highlighted the significant advantage of freely available AI tools and frameworks. This was likened to a “public library” accessible to many.

  • AI literacy should be a priority for formal education and beyond: Experts agreed that incorporating basic AI understanding into the UK education system is crucial to improve AI literacy and equip all segments of the population with skills for interacting with AI. As AI is a fast-evolving area, it was felt that education should extend beyond traditional settings and promote lifelong learning to ensure the UK public can interact with AI tools as they develop further throughout their lives. This should include a strong focus on older generations, who face greater challenges with existing technologies.

  • Improving AI literacy could help to reduce the impacts of AI bias: Experts agreed that improving the public’s understanding of AI could help raise awareness about potential biases in AI outputs. This would enable them to better identify and assess these biases before they become problematic.

2.2. Upskilling and reskilling for an AI transition

  • Upskilling and reskilling will be important for both AI professionals and non-AI professionals, with adaptability being a central skill for the future: Experts agreed that AI professionals[footnote 1] will need to hone their technical skills, keep abreast of the latest developments in AI technology, and remain aware of opportunities for transitioning to new roles as AI automates certain tasks. For non-AI professionals[footnote 2], the experts agreed they should develop a solid understanding of AI principles and applications relevant to their field of work. Above all, experts agreed non-AI professionals need to embrace the technology and be flexible to the changing skills requirements of the sector to which they belong.

  • Non-technical skills development is a key priority: Experts recognised adaptability, critical thinking and creativity as core competencies in an AI-driven world, and as particularly important for the workplace. It was felt that critical thinking and language skills are particularly relevant today in the context of generative AI. These skills are required to analyse information, understand different views, and make clear arguments while working with AI systems. As AI becomes more embedded in different industries, non-technical skills become more important and will continue to grow in demand, especially for non-AI professionals.

  • Creating more opportunities for practical AI learning: Since experts highlighted the importance of ‘learning by doing’ when it comes to enhancing AI skills, more opportunities for practical learning need to be created. Employers should encourage the use of AI in the workplace to build employee confidence.

  • Building trust is vital for the long-term integration of AI into work and life: Experts were concerned that misinformation and disinformation risk eroding public trust in AI, which would hinder the adoption of AI in work and life. They agreed that there was a need to improve trust in AI while simultaneously working to make AI systems more trustworthy. Promoting transparency in AI use was seen as a way to address this issue, but there was a lack of consensus about how to achieve this.

3. AI literacy is the foundational skill

As AI continues to evolve, the concept of “AI literacy” has become prominent. Although there is no single official framework defining AI literacy, it is widely understood as a “set of competencies that enables individuals to critically evaluate AI technologies; communicate and collaborate effectively with AI; and use AI as a tool online, at home, and in the workplace”.[footnote 3] AI literacy encompasses the range of skills that are most relevant to learning, understanding, and using AI technologies. It is therefore a necessary foundation for interacting with AI.

While there is a clear need for a foundational understanding of AI and its implications for day-to-day life, the broad expert view is that the current level of AI literacy among the UK public is low. They felt that while awareness of AI is high, literacy, even in a basic form, remains low and that improving levels of AI literacy is required for a successful adoption of AI.

“From where I sit [the current level of AI literacy in the UK] is extremely low. Although there are pockets of excellence in academic institutions, if you look at levels of digital literacy as a whole [including AI literacy] it is shockingly low.”

Industry expert

3.1. The ‘black box’ problem

AI models are complex systems with millions of parameters learnt from the enormous datasets they are trained on. For example, ChatGPT’s training data includes a huge number of databases, which allows it to learn patterns and relationships that enable it to generate human-like text, answer questions, and perform other language tasks.[footnote 4] Essentially, models are built using a ‘deep learning’[footnote 5] technique which can be likened to a process of growth rather than the traditional design methods used for other complex systems. Because of this organic development, their inner workings cannot be interpreted in the same way as, for example, a car’s can. This makes it very hard for humans to interpret how AI models arrive at their outputs. Experts attributed low levels of AI literacy to AI’s lack of interpretability, known as the ‘black box problem’.[footnote 6] This concept refers to the idea that it is straightforward to identify the inputs into an AI system and its outputs, but identifying how the inputs are interpreted and how decisions have been made (the in-between stage) is extremely challenging.

“I would say the majority of people are blissfully unaware of how this AI models works, because I think fundamentally, these things are quite complex and in particular, the latest generation of generative AI or frontier AI models are very complex.”

Industry expert

Experts also saw misinformation and misconceptions around how AI works as additional challenges for improving low levels of AI literacy. Some experts felt that press coverage of AI evolution has added to the hype around AI, leading people to see it as a ‘magical’ technology or as an entity of its own, such as the ‘Terminator’. Experts also feared that these misconceptions were held not just by the public, but also by some senior AI professionals.

“If a senior technologist [misunderstands how ChatGPT] works, then how is someone who is just working with the front end of these tools going to know any different?”

Expert: professional body

However, experts see potential for improving AI literacy levels as they are starting to see pockets of good understanding among different groups. For instance, one expert described London and the South East of England as the ‘beating heart of AI’. In this region, some people are already integrating AI tools into their work lives, leveraging it to enhance their productivity. Another expert also described London as the hub for AI and explained that outside of London there are lower interest levels in AI, which contributes to overall lower literacy levels.

Experts felt that the baseline level of AI literacy necessary for the UK public should be a fundamental understanding of AI’s operational logic. They emphasised that comprehending AI as a data-driven, statistical mechanism is critical. This foundational knowledge is seen as crucial for the UK public to build robust AI literacy.

“I don’t think that everyone needs to be an AI expert, but you’re going to need some awareness of how the tools you’re using are reaching the decisions that they are.”

Technology Industry expert

In common with many technological developments in the past, experts believed that a deep technical knowledge of AI is not necessary for everyday use and integration of these technologies. However, basic literacy in AI, such as identifying when products and services use AI, can help users make better informed decisions about their use.

“It’s, kind of, learning how a car works, so you can drive a car without knowing how an internal combustion engine works. So, you can use AI services without an in depth understanding how they work internally.”

Telecommunication Industry expert

3.2. The importance of understanding risks

Experts placed high importance on AI literacy not only because it helps individuals use AI more easily and confidently, but because it is necessary to manage the potential risks associated with AI adoption into work and life. AI systems are trained on data inputs and, as with any data source, these inputs are not immune to bias: they can be incomplete or inaccurate. If the data used to train an AI model is not representative of the real-world population for the scenario it will be applied to, there is a risk of the model producing biased outputs.[footnote 7] As AI is increasingly integrated into various aspects of life and work, this has the potential to amplify unfair or discriminatory outcomes, as well as the potential for AI outputs to misrepresent the real world.

Experts explained that the impact of bias in AI systems currently causes a broad range of issues in life and work, including negatively influencing recruitment processes. Many companies are adopting AI-enabled recruitment systems to streamline aspects of the hiring process, such as screening resumes, conducting video interviews, and assessing candidates. Without correct bias assessment, AI-enabled recruitment can further entrench biases based on factors like race, gender, age, or disability status.[footnote 8] Some experts highlighted the importance of implementing workplace risk assessments and audits of AI use to help critically assess for discriminatory impacts.

“Every employer introducing AI into the workplace needs to think about a workplace risk assessment. They need to make sure that the AI is transparent and explainable […] and they need to audit their systems for any potential discriminatory impacts.”

Expert: professional body

Experts believed that improving AI literacy would help raise awareness of the risk of bias. With improved levels of AI literacy, the public would be better equipped to assess AI outputs for bias, preventing them from creating issues in work or life. But the experts also stressed that the issue of bias is not only the responsibility of users, but also that of AI system developers. With greater AI literacy, experts expected that the public could put more effective pressure on those developing AI systems to deal with the issue of bias.

“People need to be aware of biases in AI so they can hold creators to account.”

Health Industry expert

Some experts highlighted that greater awareness of the risks AI poses in perpetuating biases could also help address the issue more directly and purposefully. Experts see that the ubiquity of conversations around AI is helping to bring issues around biases into the foreground. They suggested that AI is bringing existing biases in society under greater scrutiny, encouraging more direct conversations and actions to address these in AI and in wider life.

“AI amplifying bias is a very serious concern […], but what I love is that, suddenly, we’re having to deal with it. It’s like the issue of biases having to be addressed very directly because of AI. The issue of bias has always existed. It’s just now we’ve got it in a box and we can say it’s really just not right to do this this way. That is a positive outcome.”

Industry expert

3.3. Building AI literacy for the future

Experts were clear that the integration of AI into daily lives and work over the next ten years will be a dynamic process. The increasing use of AI in daily life and work, as predicted by experts, is likely to amplify our understanding and literacy in AI, effectively transforming the current landscape.

Most experts believed that it is likely that the level of AI literacy in ten years’ time will be significantly higher than it is today. They felt that as AI is integrated further into daily life and in the workplace, and AI understanding filters through the education system, literacy will naturally improve.

“AI literacy will improve because it is pushing into the public domain with brute force.”

Telecommunications Industry expert

However, other experts remained uncertain about whether the UK public’s AI literacy will increase. The rapid progress and evolution of the field could be contributing to this uncertainty. Some experts pointed out that as AI tools become more user-friendly, the level of understanding required of how AI processes work is likely to decrease, as has been the case with other digital technologies. While this could accelerate the integration of AI in daily life and work, experts worried that without a foundational level of AI literacy, users would have only a shallow awareness of the capabilities and limits of AI applications and algorithms. This could result in poor decision-making, as people are likely to be less critical of AI outputs.

Education was a topic that came up in discussions with experts about future levels of AI literacy. Experts were unsure about how long it might take a foundational understanding of AI to be fully integrated into the UK school syllabus. Some speculated it could take more than 10 years. Some, however, were more hopeful, and believed that changes to the National Curriculum could be seen within a 10-year time frame.

“I would expect that there would be, in the next couple of years, some changes in how training in AI and how to work with these AI technologies are integrated into education curriculums […]. I hope that there’ll be a much more structured approach to how to train children quite early on, so that it will have some impact. But of course, the impact of that will not be seen in just 10 years’ time.”

Academic expert

Some experts suggested that AI literacy could be incorporated into the curriculum through increased education on how all data-driven systems work:

“I think you might well end up with AI [understanding] being a component of a computer science lesson, or your IT study time. I think we need more ground-up thinking about not just how AI tools work, but how all data-driven decisions, questions, how that all works, because I just think it’s everywhere, really.”

Expert: professional body

Despite uncertainty around whether, and how long, it might take for a greater focus on AI in schooling to affect AI literacy levels, most experts agreed that embedding a basic understanding of AI into the UK education system is necessary to improve AI literacy and help individuals develop the skills needed to interact with AI in life and work. As this is a fast-moving area, the emphasis was on extending AI education beyond formal educational settings, such as schools and universities, to encompass all segments of society. As AI continues to evolve, experts see the need for a system which promotes lifelong learning to ensure the UK public are well equipped to interact with AI tools throughout all stages of life.

“The ultimate need is for a very dynamic education ecosystem that really gives that lifelong learning push.”

Expert: professional body

3.4. Barriers to future AI literacy

Although experts felt positive about increasing levels of AI literacy in the next ten years, they also saw significant challenges that could slow progress and make it uneven across the UK. The biggest barrier experts see to improving AI literacy in the future is the existing “digital divide”. This refers to the scale of digital exclusion in the UK and how digital skills vary for different groups of the population.[footnote 9] The gap that exists between those who have the access, and the basic skills, to use the internet and digital technologies and those who do not, often disproportionately affects specific socioeconomic or regional groups. The digital divide is not an issue unique to the development and adoption of AI, but rather a recurring issue of inequality that is expressed through differential adoption of most digital technologies. The experts felt that without taking steps to address the existing digital divide, AI could widen it further.

Experts believed that AI literacy is likely to improve at a slower rate for older generations and those living in rural areas, as these groups are disproportionately likely to be on the wrong side of the existing digital divide. Experts noted that these groups are at a disadvantage compared with other groups, since they are less likely to be proficient in navigating and utilising digital technologies effectively.

“I think a lot of people, certain workers with advanced digital skills, they had a head start when it comes to using this. And that head start is what really has exploded the difference between workers.”

Public Sector expert

Other barriers that experts saw as contributing to lower AI literacy were limited access to AI interaction and to educational opportunities. Further, during the interviews the experts also noted that there is already a significant regional AI divide. A significant amount of current AI development and interaction is occurring within white-collar professions, especially in London and the South East, creating what they referred to as the “London AI bubble”. People in this region are likely to interact more often with different technologies and have greater access to AI training opportunities, contributing to higher AI literacy levels compared with the rest of the UK.

“People in the South East [of England] are going to benefit more rapidly from AI than people everywhere else because of the massive regional inequality of the UK. […] Just like any other expensive technology that comes along, it will aggravate inequality, unless some action is taken.”

Industry expert

Experts also identified significant barriers to AI literacy for those outside the labour market (due to factors such as retirement, unemployment or long-term sickness) as this group is often left behind when it comes to adapting to new technologies. This group is more likely to lack access to necessary resources, or training needed to understand these new technologies. Experts stressed the importance of helping this group to stay up-to-date with technological advancements to prevent a significant portion of the population from becoming technologically isolated.

“In the UK and across the world it’s usually a sector of the population where, once they’re outside of the labour market, they’re, sort of, on their own. But it’s important to still have them keep up otherwise we’re going to have an entire third of the population that’s not going to be adapted to new technologies as they change.”

Public Sector expert

Overall, experts believed that AI is likely to exacerbate the digital divide in the next ten years, highlighting it as a serious concern for the near future. Across interviews, experts warned that without equitable access to AI tools and knowledge, the gap between the digitally included and digitally excluded could widen. This could leave many without the literacy to improve their AI skills for life and work and to adapt to an AI-driven world.

Some held countervailing perspectives, however: one expert argued that AI tools have the potential to level the playing field. Many of these tools are available at no cost and have been designed to be intuitive to use, meaning anyone with a laptop and internet connection can tap into these resources to learn and innovate. While acknowledging that societal factors still create barriers, the expert emphasised that the open availability of AI tools and frameworks could be a significant advantage in widening access. Experts were also keen to stress the importance of keeping AI tools open source, since this allows people to freely access, modify and build upon existing models, democratising AI development beyond AI professionals and leaders in major tech companies.[footnote 10] However, it is important to recognise that the widespread availability of these resources also presents risks; individuals with lower AI literacy might be more likely to misuse AI tools, or their use could lead to negative and unintended social consequences.

“The availability of tools and frameworks freely, I think, is a big equaliser […]. It’s what classically you would think of the public libraries which provided everyone access to knowledge. This is the AI equivalent, that we have a public library of AI tools that anyone can use freely.”

Academic expert

4. New and not-so-new skills for life and work

4.1. Future skills requirements for AI professionals

Traditionally, experts in the field of AI have required an advanced understanding of mathematics and computer science. But the rapid evolution of the field also seems to be changing the required skills for these professionals. For this Delphi study, an “AI professional” has been defined as a professional whose primary occupation is dedicated to building and developing AI; in the vacancies analysis part of the project, such roles would be classified as AI experts or AI specialists. This can include roles related to software engineering and development, data science and engineering, machine learning and programming.

AI professionals are at the heart of the changes brought about by AI. As they are responsible for designing the transformation of the AI ecosystem in a safe and effective way, ensuring they have the correct skills to do so should be a key priority. When asked about the future development of skills requirements for AI professionals, there was clear consensus among experts around the likelihood of AI professionals experiencing a high level of change to their skills requirements in the next 10 years. Despite this expectation of drastic change, experts are optimistic about the ability of these professionals to keep pace with changing skill requirements. They felt that younger generations are particularly self-motivated to upskill in AI and are making the most of the open-source nature of many of the AI tools available.

“I teach data science and now my students will use code, and now they’re using more and more AI generated code and analysis, they’re creating reports that are auto-generated by data and then maybe making changes, etc.”

Academic expert

Some experts pointed out that as AI progresses, it may replace some of the more traditional data and engineering roles via automation. Therefore, although AI professionals are highly likely to see a change in their skills requirements, this does not necessarily mean a linear progression of their current skills. Some AI professionals may need to upskill into different roles if their current role is threatened by automation, highlighting the need for the core skill of adaptability.

“The principal example driving redundancies in the tech industry at the moment is the very highly skilled programmers and machine learning people have automated themselves out of their own job by creating programs that can write programs.”

Academic expert

4.2. Future skills requirements for non-AI professionals

Compared with the significant changes in skill requirements expected for those working in the AI sector, experts believed that the skills non-AI professionals need to interact effectively with AI will change less dramatically. For non-AI professionals[footnote 11], experts explained that AI is likely to be applied to their roles to improve productivity and enhance decision-making, rather than to change the roles they are performing. With this comes a need to improve and develop a range of AI skills to make the most of the technology. Experts agreed that current and near-future development of AI technology will primarily focus on automating some of their specific tasks rather than replacing their entire job roles.

“[AI] shifts the human role from trying to solve the problems themselves to really asking very good questions and to monitor the outcome of these complex processes to make sure that they are working for the purposes that are intended.”

Public Sector expert

Experts felt that skills requirement changes for non-AI professionals will differ depending on the sector they work in, but that it is likely all will experience some change. AI is expected to automate routine tasks, potentially shifting the job market towards roles that require more complex, AI-related skills. Experts were able to give some indication of the change that different sectors might face. For instance, they noted that white-collar workers would see more change to their skills requirements than those in the service industry or in manual roles, since more of their tasks already use digital tools and can be more easily automated.

“I think there will be an expectation, in the same way that you might see on job applications now, you need to be able to use basic proficiency with Office packages. I think there will be an expected basic proficiency with whatever the flavor of AI tool is that the company uses.”

Non-Profit Industry expert

While the implementation of AI in most UK workplaces is still at an early stage, experts thought it was important that non-AI professionals are encouraged by their employers to use AI models in their work lives. Experts see workplace AI training as fundamental for helping employees become more comfortable using AI and for helping them develop skills for effective AI use in their work setting.

“I think there probably will be specialist courses. You can go on a financial management course now and you will probably be able to go on an understanding how to use AI in your organisation course.”

Expert: professional body

Throughout the interviews, experts also placed high importance on improving non-technical skills in the context of AI adoption at work. Experts see non-technical skills such as critical thinking, decision-making, and creativity as particularly useful for the UK public when interacting with AI. The experts’ view is that these skills are crucial for communicating effectively with AI and translating outputs into practical applications, while also monitoring outputs for validity and trustworthiness. For example, crafting prompts for large language models is likely to be an increasingly in-demand skill for non-AI professionals in many workplaces.[footnote 12] Prompt development does not rely on technical skill, but rather on effective language skills, sector expertise, and hands-on engagement with the AI system, reinforcing a ‘learning by doing’ approach.

“If they are people that are going to use [AI] in the office place, for their day-to-day work, having a bit of creativity skills, and problem-solving skills, usually also comes very handy.”

Tech Industry expert

When asked about the relative value of non-technical and technical skills in the future, there was a lack of consensus among experts. Many said it was likely that developments in AI would make technical skills less relevant than non-technical skills in the workplace. They felt this was because AI will become simpler and more embedded in existing workplace tools, reducing the need to develop specific technical skills. This can already be seen in the ease with which large language models (LLMs) can be used. If the usability and versatility of AI models continue to improve, it is easy to see how technical skills may become less important for life and work.

However, others felt it was unlikely that technical skills will become less relevant: rather, non-technical skills and technical skills will be of equal value in 10 years’ time.

“I’m not going to say we need lots of prompt engineers because I think that is something which burst onto the scene 18 months ago and is gradually withering away because the interface of these models and the models themselves are becoming more sophisticated.”

Professional Services expert

The rapid evolution of AI makes it difficult to know which skillset will be more valuable in the future. But above all, experts see adaptability as a core skill that workers need to keep up with changing skills requirements in the future. The experts felt that those who embrace AI technology, learn how to leverage it for productivity enhancements, update their knowledge through lifelong learning, and prioritise its safe and transparent application, will be the ones who thrive in an AI-driven world.

“In all cases, I think there’s quite a lot for AI to offer and there will be more and more difference, basically, between those who can adapt to AI-related changes versus those who don’t. I think they will be very much affected.”

Public Sector expert

One specific skill where there was a lack of consensus was the public’s ability to differentiate between AI-generated and human-generated content. Most of the AI tools currently available to the public deal with generating content. For instance, models like ChatGPT and DALL-E are commonly used for text and image generation. In this context, some experts thought it was critical for the public to discern between AI-generated and human-generated content. Others argued that this should not be a necessary skill; instead, transparency should be ensured through regulation that mandates the public disclosure of AI use. They felt that clear disclosure of when AI tools have been used to generate outputs is essential for the successful integration of AI into life and work, even though the nature of sharing over the internet quickly blurs these lines.

“Over time the only way you are going to be able to tell [something is AI generated] is because the originator tells you it was using AI. But, also as soon as that document, that image, that video, that piece of music gets copied and circulated, you’ve lost that transparency. So, we see it already in fake news. Things get just circulated and recirculated, and the original, you know, description and metadata is lost.”

Professional Services Industry expert

Despite a lack of consensus on how to address the disclosure of AI generated content, experts agreed on the need for ethical and technically proficient leaders in AI. Prior to the integration of AI into work, leaders should acquire the necessary knowledge to help them mitigate potential risks early in the development process.

“You need to develop a supervisory network of people that can look after AI behaviour and make sure it’s all beneficial. And these people need to be trained in what is it that AIs are doing and how do we control it. And these things can happen very quickly, but you need 3 to 5 years to train people up to work in that space, so people need to start sitting down and doing scenario planning.”

Telecommunications Industry expert

The threat of misinformation and disinformation was another topic of relevance for all experts. Experts are concerned that if the volume of AI-generated misinformation continues to increase, this could imperil trust in any AI-generated content. They stressed the importance of public trust in AI systems when integrating them into life and work, explaining that building a transparent AI ecosystem is vital to foster trust among the UK public and increase adoption rates.

“A bad version of the future is one where people don’t trust anything that has been created by AI. That could happen if we don’t control the misuse of the technology. So, we urgently need regulation, and we need control.”

Telecommunications Industry expert

Risks associated with a lack of transparency are already being identified by experts. They warned that inaccurate or misleading content produced by AI has the potential to undermine democratic processes and institutional frameworks. It could also enable the impersonation of creative work.

“If we see that bad actors are using it [AI] to control elections, for example. Or if terrible deep fakes are pumped into the internet, and we can’t prevent this from happening… People don’t trust anything they see anymore, [they could] develop a very negative attitude against anything that is created by AI. That could happen if we don’t control the misuse of the technology.”

Telecommunications Industry expert

Experts also acknowledged that it is increasingly difficult to differentiate between AI- and human-generated content, leading some to question the importance of training people to identify when content has been created using artificial intelligence. The conversation sits within a broader debate about which sources people choose to trust, and how governments and organisations can help with identifying misinformation and disinformation, regardless of its source.

“There is zero importance [for people to learn how to recognise AI generated content]. I could already present you documents, images, videos, music, and you wouldn’t be able to tell that it’s generated by AI. In a lot of cases you can currently still spot them but, to be honest, within a year, or less [it will be a different story].”

Professional Services Industry expert

Appendix 1: survey results

This appendix presents the results of a small-sample survey of the experts who participated in the Delphi exercise. It asked for their views on the urgency of policy intervention on a number of issues identified in the first-stage interviews. It also presented some hypotheses for the future and asked them for their view on how likely they thought each was to occur.

Figure 1: Expert view of the urgency of policy intervention for each key issue

Question: How urgent, if at all, is the need for policy intervention for the following issues relating to the impact of AI in life and work?
Source: Ipsos, UK
Base: 18 AI experts, online survey, April-May 2024

Figure 2: Expert view of the likelihood of the future development of each key issue

Question: How likely, if at all, is the development of each of the following issues related to the impact of AI on life and work in the next 10 years?
Source: Ipsos, UK
Base: 18 AI experts, online survey, April-May 2024

  1. For this Delphi study, an “AI professional” has been defined as a professional whose primary occupation is dedicated to building and developing AI. 

  2. For this Delphi study, a “non-AI professional” is defined as someone who works in the wider UK economy, whose primary role is not in data or AI, but who is in a sector which may be impacted by this technology. 

  3. AI Unplugged (Georgia Tech University) 

  4. OpenAI: How ChatGPT and our foundation models are developed 

  5. Researchers are figuring out how large language models work (economist.com) 

  6. Harvard Business Review: AI’s Trust Problem 

  7. Chapman University article on Bias in AI 

  8. Nature.com: Ethics and discrimination in artificial intelligence-enabled recruitment practices 

  9. Office for National Statistics: Exploring the UK’s digital divide 

  10. Tech UK: How AI is Increasing its capabilities with open-source foundation models 

  11. Non-AI professionals are defined in this study as those who work in the wider UK economy, whose primary role is not in data or AI, but are in sectors which may be impacted by this technology. This includes both “white collar” workers (those who engage in office-based jobs) and “blue collar” workers (who engage in manual labour-based roles). Non-AI professionals are individuals who are starting to use AI tools as they become integrated into their workplace as well as those who are expecting to do so soon 

  12. Financial Times: Retraining workers for the AI world