Research and analysis

AI Skills for Life and Work: Stakeholder Engagement Report

Published 28 January 2026

This report was authored by LaShanda Seaman, Jamie Douglas, and Eva Radukic at Ipsos.

This research was supported by the Department for Science, Innovation and Technology (DSIT) and the R&D Science and Analysis Programme at the Department for Culture, Media and Sport (DCMS). It was developed and produced according to the research team’s hypotheses and methods between November 2023 and March 2025. Any primary research, subsequent findings or recommendations do not represent UK Government views or policy.

1. Project overview

1.1. Summary of work package

This work package engaged stakeholders to explore their perceptions of AI skills for life and work, and the feasibility of developing these skills across the wider population. Stakeholders from seven industries attended a 3-hour online workshop to explore current and potential uses of AI in their industries, the skills required to support AI development, and the policy options recommended to support this.

1.2. Objectives and purpose

The purpose of the workshops was to cover the following key issues and areas:

  • How broadly or narrowly AI skills should be conceived: articulating what skills stakeholders felt were likely to be required over a 2, 5 and 10-year timeframe.
  • Attitudes to different AI scenarios in life: including the feasibility of the public scenario developed in the public dialogue.
  • Perceived challenges and opportunities: of the different scenarios, particularly for developing AI skills for life and work over a 2, 5 and 10-year period (barriers and opportunities to develop skills).
  • Suggestions on policymaking: the core levers participants felt were required to accelerate and develop AI skills for work and life.

1.3. Key research questions and findings

Research question: What AI-relevant skills are needed for life and work?

Contribution of this work package: While this work package focused on AI-relevant skills in work, employers recognised the connection between general knowledge and use of AI in daily life and its application in the workforce. Key skills included:

Non-technical digital literacy: Employees and employers wanted to be clear on how they could apply AI within their roles, and wanted better training to understand how AI could be a useful tool to support them. Some employees considered these foundational skills that could be useful; most employers assumed their employees already held them.

Critical thinking: Recognising the value and role of humans in conducting tasks and preventing overreliance on AI. This means ensuring that humans continue to think critically about outputs and to assess AI performance.

Understanding of regulation and data rights: Employers and employees require confidence around knowing their rights in relation to how the data they input into AI systems will be used, for example, who will own the data.
Research question: How may these skills transform as the technology develops?

Contribution of this work package: Stakeholders felt strongly that the use of AI would affect skills development, as well as require new skills to be developed.

Loss of key skills: Stakeholders were concerned that AI would lead to the loss of key skillsets such as creativity, operating machinery, and thinking critically.

Potential for new skills development: However, they also discussed how AI may prompt the need to develop some skills. This included critical thinking, as well as basic digital literacy (for certain demographics), or prompt engineering (for more advanced technology users).
Research question: To what extent does the UK have or lack these needed skills in the labour force?

Contribution of this work package: Stakeholders often found it difficult to identify specific AI-related skills, conflating them with digital skills and technological adaptation more broadly. As such, there was concern about specific groups (e.g. older people), industries and locations (e.g. more rural areas) that were already falling behind and would be further hindered by AI.

Stakeholders were more able to discuss expert skills such as coding and the development of AI systems. Non-technical skills such as critical thinking, adaptability and effective communication did not immediately come to mind, but were considered important once raised.

While participants were aware of AI systems, they did not share the confidence in using them demonstrated by the public in Work Package A. This centred on risks to corporate data (business secrets and customer data), as well as an appreciation of the diversity of skill levels across demographics such as age and region.
Research question: To what extent is the UK supporting people to develop future-relevant AI skills?

Contribution of this work package: Stakeholders generally felt that the UK is not currently supporting people, whether the general public or businesses, to develop future-relevant AI skills. Stakeholders in education, for example, believed that extensive guidance and resources should come from the government to enable the development of future-relevant AI skills at school level.

Training was a key request from participants, but there was uncertainty over who was responsible for delivering it. Training was seen as a key mechanism to level the playing field and provide employees with a basic understanding of AI applications.
Research question: Based on potential future scenarios, what should government, employers and private education and skills providers focus on to address any gaps in provision?

Contribution of this work package: Stakeholders reflected on the policy considerations which could ensure the workforce was prepared for the future development of AI. They discussed the need for the government to champion basic online literacy development for certain demographics, among both employees and the general public. They envisaged this occurring through school-based learning as well as workplace training and upskilling. Another significant priority was the importance of focusing on data privacy and ensuring data is kept secure.
Research question: What can the UK learn from international counterparts with regards to AI skills?

Contribution of this work package: Stakeholders suggested the need for wider collaboration on AI, especially in developing regulation and ensuring data privacy is protected. They also recommended taking inspiration from countries which are more advanced in AI knowledge and development: incorporating these countries’ teaching and training methods into education and workplace training could benefit the UK. However, there was an expectation that government would lead the way on identifying these sources.

1.4. Relationship to other Work Packages

This work package was informed by content from all the previous work packages, particularly the draft scenarios developed in Work Package A (WPA) with the public. Some key findings of this stakeholder engagement relating to other work packages include:

  • Participants felt it was currently easy to use AI-powered tools, and that this would get easier as technology improved. This contrasts with the general public survey (WPB), where only 21% of the public said they were confident in their ability to use AI in daily life. This likely relates to different framings of ‘skills’: participants tended to think about skills in terms of getting desired outputs from an AI product, a narrower framing than the collection of skills asked about in WPB.
  • Findings on the importance of skills associated with risks and threats, particularly discernment and disinformation, reflect findings in the Delphi study (WP2). Both stakeholders in these workshops, and experts engaged in the Delphi study, felt that there was a need to build trust in AI systems, and that this could only arise if transparency was promoted, and the public felt that potential risks and threats were addressed by strong policy interventions.
  • Ensuring that AI literacy skills are embedded at the primary and secondary level reflects key findings in the rapid evidence review (WP3), which suggested that the focus in the UK to date has largely been on AI literacy and skills at the tertiary education level. Participants also pointed to the UK being further behind in terms of its AI literacy than other countries.
  • Unlike the rapid evidence review, participants were more confident in natural skills development for the use of AI in life. WP3 suggested that digital skills might need to be reviewed every three years, whereas participants in this study expected AI tools to generally become more, not less, user friendly, and so not demand the development of new skills.

The researchers presented the draft scenarios developed in the public dialogue (WPA) to stakeholders in WP6 so that they could be refined, stress tested and finalised. This enables the Work Packages to include the perspectives of employees and employers, and informs the subsequent policy-facing and knowledge transfer work in Work Package 7, which will equip policymakers with insight on key skills challenges and barriers.

Figure 1: Diagram demonstrating relationship to other work packages

2. Executive summary

2.1. Background

Ipsos was contracted to conduct research for the Department for Science, Innovation and Technology (DSIT) to explore the requirements for AI skills for life and work across the next 2, 5 and 10 years. This work package (WP) was designed to engage with stakeholders across seven different industries and refine the AI skills scenarios that were developed in WPA with the public. Across the six thematic areas, relevant stakeholders were recruited to discuss AI skills in their industry and refine the scenarios. The areas under exploration were AI in the home and personal devices (smart devices, mobile devices, and telecommunications), AI in leisure and entertainment, AI in travel and transport, AI in education (including publishing), AI in healthcare, and AI in work and career (professional services). 105 stakeholders were recruited to take part in 3-hour online workshops which included plenary presentations and breakout group discussions.

2.2. Research Aims

  • Build on the public dialogue (WPA) by refining and developing the scenarios across the six thematic areas.
  • Inform subsequent policy-facing and knowledge transfer outputs in WP7.
  • Build on research findings collected in the other WPs which have explored skills development with different stakeholder groups.

2.3. Research Objectives

  • Explore and understand stakeholders’ attitudes and perceptions of six different AI scenarios.
  • Explore the feasibility of the public scenarios developed in WPA.
  • Understand stakeholders’ perceptions of the challenges and opportunities of the different scenarios for the development of AI skills, particularly over the 2, 5, and 10-year period.
  • Understand stakeholders’ perceptions of the policy levers required to accelerate and develop AI skills for life across different industries.
2.4. Key points of optimism and pessimism for each topic area
AI in smart devices, mobile devices, and telecommunications

Points of optimism:

  • Saving time: Stakeholders noted that AI could improve products and processes, particularly customer-facing tools, enabling employees to use their time more efficiently on higher-value tasks.
  • Providing access: for vulnerable groups to technology as it develops.
  • Increased uptake within the home: expected to occur in the future as the technology becomes more commonplace.

Points of pessimism:

  • Uncertainty with deployment: Customers raised frustration with chatbots, data security concerns and questions over how outputs would be used by organisations, citing the need for improved monitoring.
  • Cost of access: Uncertainty over whether equal access to technology will be available for all customers.
  • Skills gap: Lack of understanding of AI risks and limitations, making it difficult for employers and employees to understand the potential impact on jobs. This includes changes in scope (e.g. prompt engineering) or job losses.

Quotes:

“It makes you feel better if you work somewhere and they’re going to train you, you feel a bit more valued.” – Employee, Customer services

“Those who don’t have massive data packages on their mobile phones are already disadvantaged compared to those who have unlimited data. And that could be anything from uploading photographs or listening to music, to navigating through a city, or translating into a million and one languages in a city that you’ve never been in.” – Employer, Internet hosting
AI in leisure and entertainment

Points of optimism:

  • Increased efficiencies: Streamlining tasks like administration, planning and concept development, enabling cost savings, assuming that employees develop the relevant skills.
  • Widened applications: Assumption that AI would be integrated into more technologies, such as fitness trackers.
  • Maintaining the human element: Despite concerns about the development of AI, others felt hopeful that human interaction would continue to be desired.

Points of pessimism:

  • Future of the industry: Concerns around job displacement, particularly for creative roles, protection of materials produced, and data/identity protection.
  • Loss of experiences: Traditional leisure experiences could disappear in the future, impacting those who work in these spaces.
  • Lag between development and legislation: Important given the ethical and legal implications of using AI inappropriately (e.g. creating music), but there were low levels of optimism that legislation could be effectively deployed.

Quotes:

“…AI is going to give people similar pressures where people are going to feel empowered to say, well, this is genuinely mine and I’m going to write it for myself. And you can actually put on the label ‘written by a human’. It will almost become a marketing point.” – Employer, Art and design

“Acting is my livelihood. As an individual I’ve got other income streams, but for those people who are just actresses, of course having some protections which mean that they will not get rid of real actors and actresses altogether is essential. CGI is already doing a lot of crowd scenes and things like that, so again, it’s this kind of extension of that.” – Employee, Music venue
AI in professional services

Points of optimism:

  • Improved efficiencies: helping to improve productivity and profitability in professional services by automating tasks and supporting business planning. This is expected to span all levels of the business.

Points of pessimism:

  • Ethical concerns: around the potential for organisations to engage in AI-washing during implementation.
  • Demand for training: particularly at senior levels, to ensure that there is not a skills gap within organisations or an overreliance on AI. This includes training of AI systems so that they can be effectively deployed.
  • Quality assurance: of AI outputs and appropriate prompt engineering.

Quotes:

“For existing workforces, employers have to be the ones that are taking the responsibility for upskilling and readying their workforce for this. Educational establishments have a role to make sure that, if you’re coming out with a degree, it’s one that has the skills necessary for a job.” – Employer, Management and business consultants

“All of our sort of standard documents, engagement letters, etcetera, are all building in references to AI. But as with all sorts of professional services firms, they’ll follow whatever the legislative requirements are. If there is new legislation, again, policies will be updated to refer to that. But these things always tend to lag a bit behind. So, who knows when that will be.” – Employer, Management and business consultants

“Support needs to be given to small businesses. It’s in the interest of the government to foster and drive that entrepreneurship.” – Employer, Production company
AI in education, publishing and marketing

Points of optimism:

  • Improved efficiencies for educators: if AI is deployed widely, supported by the appropriate protections.

Points of pessimism:

  • Data protection: a key concern, particularly given the potential use of AI by minors.
  • Training for educators: on the technological skills to implement AI, identify when it has been used, and ensure that key skills are maintained for children using the technology.
  • Lag between oversight and deployment: of standardised systems, updates to the curriculum and national roll-outs.

Quotes:

“And at schools, I don’t think you need to think about it until you’re at a senior level where you’re getting into coding and tech… whatever the level is at which you start to think about teaching coding. But I think on a very basic level, we’re at that kind of crossover period at the moment where it’s taking up a lot of energy because it’s all new and scary and there’s lots of conversations going on, just as there were at the beginning of the Internet, which I can remember very clearly.” – Employer, Publishing

“But if you move teaching purely to an AI-based system without teachers, I think you lose… You’re not challenging students to use their own experience to come to conclusions; if AI is used as a crutch, then you’re going to disenfranchise a whole load of decision makers in the future. I think that’s the challenge.” – Employer, Publishing
AI in travel and transport

Points of optimism:

  • Safety improvements: implementing AI in travel and transport was seen as a benefit by stakeholders.

Points of pessimism:

  • Trust in AI systems: needed from those who work in the industry before AI-enabled travel and transport systems can be used.
  • Retention of skills: ensuring that skills are not lost within this space as AI systems are deployed.
  • Job losses: concern that AI will replace tasks such as customer service, and uncertainty over what this will look like in the future.

Quotes:

“So, there’s a lot of people within my industry, particularly in the writing part or in website management, which I do a bit of, who are just going all in on AI. And what it’s not doing is producing anything new. It’s only rehashing what’s already there when it comes to content.” – Employee, Tourism

“It’s not going to go away, it’s how we embrace it. And fundamentally, from a business point of view, it always comes down to the business case. A lot of what we talked about today would require huge investment, which just isn’t going to be there, and I can’t see some of these things changing for years and years. But where there are benefits, whether in data or the administrative side, then that will happen, and it will be a case of finding how we then reprogram and redo things and re-educate and retrain people to do other things, like we had to do when the Internet was first invented…” – Employer, Transport customer service
AI in healthcare

Points of optimism:

  • Improving efficiencies: in healthcare processes and the delivery of treatment to patients was supported by stakeholders.

Points of pessimism:

  • Data security and patient privacy: ensuring data security and upholding patient privacy was a concern, given the sensitive data stakeholders handle.
  • Uncertainty about deployment: due to funding constraints and the complexities of a national roll-out. There was also uncertainty about the digital skills currently within the NHS.
  • Losing human interaction: a key concern when stakeholders considered the impact on patients and the maintenance of critical thinking during diagnostics.

Quotes:

“It’s the fear of losing their job to digitalisation. For instance, it’s already happened. There is now no more receptionist to book you into an outpatient clinic. You have to do it on a screen as a patient. So that person who was booking them in has now been moved to another role and is not very happy about it… it is unsettling for the staff that are there, particularly of a certain age group.” – Employee, Healthcare trade union

“I think AI can help in that respect, streamlining processes especially, and just making things more efficient. It seems like things are being done the way they were done 30 years ago. It just doesn’t make sense, a lot of things: too much red tape, too much bureaucracy.” – Employee, Healthcare worker

3. Methodology

3.1. Engagement aims

This stakeholder engagement aimed to bring together a cross-section of employers and employees from seven different sectors to explore the feasibility and acceptability of the public scenarios, as well as the pathways to ensure they are successful. The aim of these workshops was to refine the scenarios drafted in the public dialogue (WPA) by engaging with relevant industry stakeholders and enabling them to consider the future development of AI in their sectors alongside the skills required.

We recruited 105 participants using our recruitment partner Criteria, re-engaging employers from Work Package C (‘Employer survey’), and open recruitment on LinkedIn. A recruitment screener was used to confirm participants’ eligibility. Participants took part in a 3-hour online workshop on Zoom which included plenary sessions and breakout groups.

3.2. Workshop approach

The workshops were designed so that stakeholders could explore key issues around the development of AI in their sector and the skills required for this, as well as understand the policy levers that could be utilised in this area. Stakeholders from each industry participated in workshops (Table 3.1) related to the thematic areas covered in WPA.

Table 3.1: Overview of industries engaged in WP6

Thematic Area (WPA): Industry (WP6)

  • Education: Education; Responsible AI*
  • Leisure and Entertainment: Leisure and Entertainment; Responsible AI*
  • Work and Career: Professional Services; Responsible AI*
  • Travel and Transport: Travel and Transport; Responsible AI*
  • Healthcare: Healthcare; Responsible AI*
  • In the home: Smart devices, mobile devices and telecommunications; Responsible AI*

*Responsible AI was a 7th workshop with AI specific stakeholders, such as AI researchers and policy advisors, and covered each of the thematic areas.

Our approach comprised four parts, which allowed stakeholders to explore previous findings from the research before discussing industry-specific priorities and future expectations.

Figure 3.1: Diagram outlining the research approach

The beginning of the workshop provided an opportunity for stakeholders to reflect on the current usages of AI across their industry and discuss their expectations for how AI could develop across the next 10 years.

This was followed by a presentation of research findings from the completed WPs in Part 2. This provided all stakeholders with a shared foundation of AI and common understanding for the breakout discussion which was important given the complexity of the topic.

The breakout groups enabled participants to focus on the feasibility of the public scenarios, offer insights from their individual industry perspectives, and explore how this would be delivered. The aim of this discussion was to determine the skills that stakeholders believed would be required and the policy levers to deliver this.

3.3. How to read this report

This report follows on from the public dialogue (WPA) report. It includes feedback from stakeholders on six thematic areas. Each chapter includes current perspectives on AI usage within stakeholders’ organisations, views on how AI would develop, the skills employees would need in the next 2, 5 and 10 years, conditions for acceptability, the feasibility of the public scenario, and the finalised scenario stakeholders devised.

Throughout the report we have used wording which is standard in qualitative reporting to express the strength of feeling among stakeholders, or the number of stakeholders who shared a point of view. Where relevant, we also note differences between employers and employees, as well as comments from individuals who work in specific industries. The guidance below will help you navigate this report:

  • Sections typically present the most common views first, followed by those expressed by a subset of stakeholders.

  • Language such as ‘a few’ or ‘a limited number’ reflects views that were mentioned infrequently, and ‘many’ or ‘most’ views that were expressed more frequently. ‘Some’ is used when a viewpoint was mentioned occasionally.
  • We aim to include language that will demonstrate the strength of feeling.
  • Please note this report reflects participants’ perceptions of AI and AI skills rather than verified facts.

3.4. Interpretation

Participants were recruited to provide a broad coverage of each industry with a range of sub-sectors represented. As a result, they did not always align on their priorities for AI skills development or the policy levers which should be employed. The research team have reflected the nuances in discussion within relevant sections.

Additionally, stakeholders often found it difficult to articulate which future skills mattered to them, or did not consider these skills necessary. The research team have therefore indicated in this report where they have interpreted these priorities from what stakeholders discussed, rather than reporting a stakeholder’s belief directly.

4. Overview of scenarios

This work package was designed to enable the refinement of the AI skills scenarios developed in the public dialogue (WPA) across six thematic areas. A final version can be found at the end of each chapter in the form of a visual graphic and accompanying summary table which is relevant to the stakeholders.

The visual graphic demonstrates the technological advances that both the public and stakeholders expect to occur over the next 10 years alongside the skills required in that time period. The public and stakeholder point of view is compared side by side.

To accompany this, the summary table provides further detail on the types of skills, enablers and barriers for development, feasibility, and policy implications in each thematic area. As highlighted in the public dialogue (WPA) report, these areas are built on the Go-Science Future Risks of Frontier AI Scenarios [footnote 1].

5. AI in smart devices, mobile devices, and telecommunications

5.1. Overview

 Key Findings

  • Current benefits of AI included improved products and processes, particularly through customer-facing tools like chatbots, which can allow staff to focus on more complex queries. Stakeholders also recognised the potential of AI to support vulnerable individuals and improve management and monitoring tasks.
  • Customer frustration with chatbots, data security concerns, the need for widespread understanding of AI outputs, skill gaps in understanding AI risks and limitations (including prompt engineering), and potential job displacement were key challenges. Ensuring affordability and access to advanced technologies and addressing potential negative economic impacts of job losses were also concerns.
  • Within two years, a division of labour between AI and humans was expected, with AI handling basic tasks and humans focusing on higher-value tasks which require more critical thinking.
  • In five years, wider adoption of current AI applications was anticipated, including AI assistance for vulnerable individuals and increased use in management and monitoring. Job displacement, particularly in middle management, was a concern, but may be mitigated by upskilling.
  • Across the 10-year horizon, AI was expected to be commonplace in the home. Concerns shifted towards the cost and risks of AI technologies, particularly the economic impact of potential job losses and the affordability of advanced technologies like self-driving cars.
  • Suggested policy implications included the need for robust data protection and security policies, widespread training and upskilling in AI literacy, and potential measures like an ‘AI tax’ to mitigate job displacement.

5.2. Current perspectives on AI in smart devices, mobile devices, and telecommunications

Stakeholders working with smart devices, mobile devices and telecommunications highlighted several current areas of interest and concern.

Stakeholder organisations were using AI to improve their products and processes. A common customer-facing AI tool was chatbots, which helped to answer simple or frequent customer queries. This streamlined their triage processes, allowing staff to focus on more complex queries. However, employees reported that chatbots were not popular, with customers often preferring to speak to a real person regardless of how complex their query was.

“We get quite a lot of customers who kind of get frustrated that they can’t go to a human straight away. Learning from the customer service side around AI is going to be pretty important.” – Employee, Tech support

While this had resulted in some tension between business and customer demands, some stakeholders expected this to improve over time as chatbots become more realistic and better able to handle complex queries. In addition, some stakeholders suggested customer expectations may change. Stakeholders noted that voice assistants like Siri and Alexa were already the most common uses of AI among consumers, and they expected customers to become more comfortable with chatbots as they become increasingly integrated in the interface between people and devices.

Data security was also a significant concern for stakeholders. Many stakeholder organisations had adopted AI, but only in protected environments, such as Microsoft Copilot. They reported that their organisations were focused on getting data protection and data security policy correctly set up before adopting AI more widely.

Another important area for stakeholders was understanding AI outputs. There was consensus among stakeholders that both employers and employees at all levels needed skills to understand how to evaluate AI outputs, although they noted that different roles would interpret and use these outputs differently. For example, stakeholders suggested employers and senior decision-makers would be more likely to use AI to develop strategies, while service desk employees would be more likely to use AI to check solutions for customers.

“To understand whether the output is right or wrong, and often we’ve seen it wrong, that’s a skill that would be at any level on that ladder. How then you interpret and how you use that information, that’s when it can change [but] I think you still apply the same set of skills.” – Employee, Product manager

Stakeholders emphasised the importance of technical AI skills in their industry, such as coding, software development and programming, to develop and implement AI technologies into smart devices and mobile devices. Equally important were non-technical skills, particularly around analytical thinking, adaptability, and effective communication. These skills were considered essential for employees to interact with AI (through prompt engineering), to explain AI outputs to customers, and for employers to judge the appropriateness of AI applications in their businesses. This opinion extended to all industries, but especially those where stakeholders considered AI to have greater potential to impact or replace jobs, such as law and architecture.

“I work in a support business, and we’ll need to understand how we can utilise AI to provide better support to our customers.” – Employee, Consultant for support business

However, stakeholders identified notable gaps in AI skills in their industry, particularly in understanding the risks associated with AI and data protection. Given that AI adoption was still in the early stages, stakeholders felt individuals had not yet had sufficient opportunities to learn from experience or past mistakes.

“It’s always data protection and the risks as a whole because it hasn’t been around that long, and … we haven’t really learned from our mistakes yet.” – Employee, Customer service

Furthermore, they felt there was a lack of understanding around prompt engineering and the limitations of AI. For example, one stakeholder noted a widespread misconception that ChatGPT was trained on current data; knowing such gaps and limitations could help to set realistic expectations regarding AI’s capabilities.

“The interpersonal skills are important. It’s all very well using AI, but they’ve got to be able to interact with customers.” – Employee, Customer advisor

5.3. Future expectations - 2 years

In the next two years, stakeholders expected a division of labour between AI and people to emerge, with AI covering basic or frequent tasks, and people covering higher-value tasks.

“All the basic tasks will be covered by AI, probably even more efficient than what we would have done. Then we just focus on something that brings more value, which will be beneficial to individuals and companies.” – Employee, Product manager

Among smaller businesses, stakeholders suggested there would be an increased focus on in-house upskilling. One employee noted that AI technology is expensive, and smaller businesses cannot typically afford both AI technology and people with specific technical AI skills. They would therefore need to upskill existing staff. An employer added that in-house upskilling was also an important part of making employees feel more valued and reassured when adapting to AI.

“[Small businesses] spend so much money on the actual AI that they can’t actually afford to then employ anybody with this AI like skills. It’s expensive product. I think that’s a barrier. You know, a lot of money if you bring in a package.” – Employee, Equipment and design

5.4. Future expectations - 5 years

Stakeholders offered a range of examples of current AI uses that may become more prevalent in the next five years. One stakeholder reported that an AI seal toy was already being used to help people with learning disabilities to express themselves and to support their emotional intelligence. Other stakeholders felt this could become more widespread, and may also be adapted for other uses, for example, robot assistants to support vulnerable people or those with long-term health conditions to live independently or to help combat isolation and loneliness.

“AI could genuinely be used to enable [someone with dementia] to be independent, prompting for any sort of memory-related stuff. It could enhance lives in that respect. And as our population is ageing, we are seeing greater and greater instances of dementia.” – Employee, Consulting

However, there was concern about personal data and cyber security if so many smart devices and personal devices were interconnected. These concerns included data leaks, data theft, and the ability of hackers to take control of a home or business.

“[In 5 years] someone can potentially hack into your smart oven … So, I’m going to hack your oven, turn it on or turn your gas hob on, and burn your house down.” – Employee, Tech support

Another example of existing uses potentially becoming more widespread included management and monitoring. For example, one stakeholder expected AI to be writing emails or managing diaries across all industries. They suggested technological improvements would remove current glitches, as well as alleviating current concerns around GDPR and data security, which would facilitate this rollout. However, other stakeholders noted that regulatory concerns would be unlikely to disappear given that technology development (including AI) typically outpaces regulation – legislation and regulation would have to continue evolving.

“Legislation like the GDPR, data protection and what-have-you, will have to evolve as more and more data is used by AI systems … and that has to be government-led. … There has to be strict and enforceable control on companies using AI and how they use and store the data.” – Employee, Consulting

With AI becoming more widespread, there was also an expectation that certain job roles would be more at risk in five years’ time. Stakeholders felt middle management jobs would be most at risk, as many day-to-day processes could be automated. As industries seek increased profitability, stakeholders said this may lead to a potentially rapid snowball effect on middle management job losses. However, other stakeholders suggested the risk of widespread job losses would be mitigated as employees acquire different skillsets to work alongside AI. For example, skills around validating AI outputs might become more common given the prevalence of AI hallucinations.

Stakeholders also discussed the impact of AI on education in the next five years, expecting basic coding skills to become the norm at schools.

“We’ll get to a point where everyone will be able to speak the language of C# or Python and everyone will be able to code to some degree, and that will just be the norm.” – Employee, Tech support

One stakeholder suggested that AI might indirectly help to improve some long-established grading processes. For example, the increased use of AI among students and the risk of plagiarism may encourage universities to adopt more discursive grading practices, in order to encourage students’ ability to think critically.

5.5. Future expectations - 10 years

The 10-year timeframe was more difficult for stakeholders to imagine in terms of AI skills. To some extent, stakeholders felt that AI would be commonplace in day-to-day life, and people would have developed the skills to use it and live with it.

“I feel that the way technology is advanced already over the last 50 years, and how quickly it moves, we’re all going to be forced into that kind of situation where we’re using AI in our homes and our cars, because that’s how things are produced these days, or moving forward that’s how things are going to be produced.” – Employee, Tech support

Stakeholders’ 10-year concerns focused more on the risk and cost of AI technologies, and the impact of assumed job losses on the wider economy. Some stakeholders felt some existing technologies would only become affordable for wider rollout in 10 years’ time. For example, self-driving car technology may become more widespread if more people can afford it. One stakeholder noted this was not just about the cost of the vehicle itself, but also the insurance – if AI is considered safer and more accurate than a human driver, increased insurance premiums for human drivers may encourage greater adoption of driverless vehicles. Other examples that stakeholders suggested could become more affordable in the next 10 years included military-grade implants (such as Neuralink) and bionic limbs.

The impact of job losses was a major concern to stakeholders given its long-term impacts on the wider economy. Several stakeholders suggested that, if there is increased unemployment due to AI, this would damage the market for smart devices and mobile devices, as there would be fewer people able to buy such devices. Furthermore, increased unemployment might also result in less tax revenue, with implications for public funding and investment, for example, in telecommunications infrastructure.

Looking far beyond 10 years, one stakeholder speculated that new technologies like AI might radically transform economies and eliminate the need for jobs and earning money.

5.6. Feasibility of public expectations from WPA

As discussed above, stakeholders considered this scenario largely feasible, noting that many of the technologies and uses of AI in the public scenario already exist. The discussion of feasibility therefore revolved around how feasible widespread adoption of AI would be.

The previous section discussed stakeholders’ uncertainty around potential large-scale unemployment due to AI, and its impact on the market for smart devices and mobile devices. Their concern was that unemployed people would be priced out of these devices, meaning the public scenario would only be feasible for a subset of society.

In addition, one stakeholder noted that current battery technology was a barrier to feasibility for some aspects of the public scenario. For example, since AI is energy-intensive, demand for increasingly powerful AI in personal devices would necessitate improved battery life and charging capabilities.

5.7. Conditions for acceptability

The acceptability of the future scenario depended on several crucial conditions, according to stakeholders.

  • Safeguarding individuals from job losses due to AI: A primary enabler in safeguarding individuals from job losses was the upholding of human rights, particularly the right to work. Stakeholders felt regulatory measures were needed to avoid businesses deploying AI without considering the legal, regulatory, and ethical implications. Having such safeguards in place would ensure implementers would develop their skills in judging the risks and benefits of deploying AI.

“It will be red tape, data protection, [threat of] scammers and the ability to tap into something, or cyber-attacks.” – Employee, Equipment and design

  • Strengthen redundancy laws: Stakeholders suggested that redundancy laws should be strengthened and rigorously enforced, with mandatory consultations with staff. This would be especially important if the influence of unions were to decrease over time. One stakeholder proposed defining the acceptable level of AI use within job roles, to give clarity and reassurance to employees that they would not be replaced by AI. A suggestion from another stakeholder involved making it compulsory for redundancy packages to include retraining opportunities for affected staff. This approach was seen as a way to mitigate profit-driven layoffs and refocus efforts on employee welfare. However, this stakeholder recognised there were ongoing debates about the fairness and implementation of such measures.

“There has to be some sort of point where someone oversights this and goes, no, we’re not going to allow that to happen because you can’t make people obsolete in that way because otherwise, they haven’t got time to retrain.” – Employer, Internet hosting business

  • Introduce an ‘AI tax’ to encourage staff retention: There was concern among other stakeholders that the measures mentioned above may not suffice to prevent widespread job losses, as societal views on rights could evolve, and technological progress may outpace moral considerations. To counter this, there was a suggestion to impose an ‘AI tax’ on companies using AI, as a potential deterrent to cutting jobs. The money from such a tax could also support training and upskilling provision (see also Chapter 6).
  • Ensure access to digital infrastructure is affordable for all: Addressing the accessibility and affordability of digital infrastructure was another key condition for acceptability among stakeholders, particularly for rural communities, lower-income areas, and disadvantaged families. These challenges affect both business and educational opportunities, as the foundation for AI relies heavily on digital connectivity. Stakeholders felt it was crucial to ensure digital personal devices with the latest AI technology remained affordable for schools to avoid excluding disadvantaged individuals from engaging with AI and developing AI skills. In discussions around businesses, while initiatives like the Starlink satellite system [footnote 2] would potentially circumvent ground-based infrastructure challenges, stakeholders noted that industries often concentrate in specific regions, each with varying demands on digital infrastructure and distinct AI applications. Any solutions would therefore need to take geographical and sectoral needs into account.
  • Ensure AI supply chain security: Trust and safety were also paramount concerns, especially regarding the security of supply chains. Incidents, such as the news coverage of exploding pagers and handsets in Lebanon [footnote 3], underscored these worries among stakeholders. While not directly related to AI skills, trust and safety underpin AI adoption for many businesses and so create a foundation for building AI skills.
  • Include a diversity of stakeholders in AI development and AI governance: Stakeholders stressed the importance of effective AI governance, including ethical data usage and bias mitigation in AI models. To develop these, stakeholders advocated for including a diverse range of people, both in terms of demographics (for example, individuals with disabilities) and stakeholder groups (such as businesses and consumers), to ensure inclusivity and fairness at the model training stage. This would raise the level of awareness and skills relating to AI governance and strategy and would help decision-makers balance the demands and needs of different groups.

5.8. AI in smart devices, mobile devices, and telecommunications – scenario

The visual scenario and table below summarise how both the public and stakeholders expect AI in the home and on personal devices to develop, as well as the skills necessary to support this. Over the next two years, participants expected AI devices to improve in their capabilities, freeing up people’s time and making daily life easier. They also expected these tools to become more user friendly, and so not to require any more skills than people already have. Similarly, stakeholders expected AI to be increasingly embedded in personal and home devices.

Within five years, participants thought devices could help with energy saving and potentially provide companionship for those suffering from loneliness. However, they were also concerned that the maturity of these devices by that point could lead to more scams that are harder to identify. Stakeholders also recognised risks around cyber security and data privacy, but anticipated benefits from AI optimising design and building processes.

Figure 5.1: visual scenario depicting AI in the home / personal devices

Table 5.1: Summary of key themes for AI in the home / personal devices

Capability for skills development

  • Importance of technical and non-technical skills: Stakeholders emphasised the need for both technical and non-technical skills to use and interact with AI, such as coding, analytical thinking, adaptability, and communication.
  • Skills gaps: There is a recognised gap in AI skills, especially concerning data protection and understanding AI’s limitations.

Skills overview

  • Evaluating AI outputs: Employers and employees alike need skills to evaluate and interpret AI outputs, though their use of AI varies across different roles.
  • Setting realistic expectations: Understanding prompt engineering and AI’s limitations is crucial to setting realistic expectations.

Opportunity

  • Division of labour between AI and humans: AI is expected to handle basic tasks, allowing humans to focus on high-value tasks, enhancing productivity.
  • In-house upskilling: In-house upskilling in smaller businesses can increase employee value and adaptation to AI.

Challenge

  • Data security: Data security is a significant concern, with stakeholders wanting to ensure their AI systems are thoroughly tested before rolling out AI more broadly.
  • Balancing AI use with customer expectations: There is tension between business demands for AI and customers’ preferences for human interaction, for example, the use of chatbots in customer services.

Feasibility

  • Potential unemployment due to AI: Stakeholders see widespread AI adoption as feasible but are concerned about potential large-scale unemployment due to AI affecting markets for smart devices and mobile devices.
  • Barriers: Battery technology and digital infrastructure access are seen as barriers to AI feasibility.

Policy impact

  • Safeguarding jobs and access to infrastructure: Safeguarding against job losses and ensuring digital infrastructure accessibility are critical conditions for AI’s acceptability.
  • Suggestions: Proposals include strengthening redundancy laws, introducing an ‘AI tax’ to support training and upskilling, and ensuring diverse stakeholder inclusion in AI governance.

6. AI in Leisure / Entertainment

6.1. Overview

Key Findings

  • Stakeholders recognised the benefits that AI could bring to the leisure and entertainment space through streamlining tasks like administration and planning for office-based roles, as well as concept development (e.g. movies and TV).
  • There were concerns about job displacement, particularly for creative roles, with stakeholders worried about copyright, intellectual property, and data/identity protection. They recognised the present lack of clear legal frameworks and skills gaps in understanding AI’s legal and ethical implications as further challenges.
  • In two years, existing trends, such as AI integration in fitness and the decline of cinema, were expected to continue. Businesses were anticipated to increasingly use AI for cost reduction and efficiency improvements, assuming workers develop necessary skills like prompt engineering.
  • Within five years, stakeholders anticipated the development of AI legislation and legal frameworks to protect content and artists, though this was considered optimistic given the lag between technological advancement and legislation.
  • Uncertainty surrounds the long-term impact of AI on leisure and entertainment, with some predicting the full replacement of certain artistic roles, and others a potential ‘tech burnout’ and return to more human interaction. The need for consumers to develop skills in detecting AI-generated content and ‘AI-washing’ [footnote 4] was highlighted.
  • Policy implications included the need for robust legislation and regulation regarding copyright, IP, fair pay, and ethical AI use. Protecting workers’ rights, particularly in creative industries, and supporting older and more vulnerable workers through training and job protection were seen as crucial.

6.2. Current perspectives on AI in leisure / entertainment

Stakeholders’ organisations were already using AI to streamline various tasks, such as administration, planning, and management, and recognised the potential benefits of AI in enhancing background research and concept development. There was surprise at the finding from the employer survey (WPC) that 6 in 10 employers were not adopting or planning to adopt AI, given its potential for efficiencies in business processes.

However, stakeholders noted that job replacement was a real concern in the leisure and entertainment industry. They also emphasised that organisations should use AI to support their workers rather than replace them. In this respect, some stakeholders felt comforted that the general public shared their concerns about the impact of AI on the creative industries, particularly around job losses and the reduction in quality and creativity.

“I saw comfort then that the general public thinks a lot of the same things that I do as well about my role maybe getting diluted by [AI] and other people coming in and being able to do what on the surface looks like a much better piece of work. But actually, they haven’t actually done anything. You know, AI has done it for them.” – Employee, Art and design

According to stakeholders, the loss of creative roles, such as musicians, writers, authors, designers, actors and voice artists, has been apparent for several decades. However, they suggested AI either exacerbates these issues or poses a new type of threat in being able to completely replace such roles. While stakeholders felt that AI cannot generate anything genuinely new or deal with nuance or genres like comedy, they were concerned that market pressures would nonetheless favour cheaper, lower-quality AI outputs. One stakeholder cited the example of cinema, where film studios often opt for lower-risk investments such as sequels, franchises and adaptations, over original content. Another stakeholder suggested consumers were also becoming more willing to accept generic AI outputs, if it meant faster and cheaper supply of content.

However, the concern around job losses also extended to other roles, not necessarily specific to the leisure and entertainment industry. For example, one employee recognised the benefits of AI for project management, in terms of managing timetables, tasks and communications, but noted that a large component of project management was people management, even if this was not explicit in the job description. This employee felt there was a risk that employers or organisations could replace project managers with AI and inadvertently lose an important human aspect of such roles.

“It’s your core project manager who’s dealing with all those departments, … balancing out their egos, balancing out all the conflicts that come up as well. And that’s 90% of the job. But on paper what you see written down is the systems and processes and the analytical things that we do.” – Employee, Film and TV events producer

In addition to concerns around job losses, stakeholders in the leisure and entertainment industry were also concerned about copyright, intellectual property, and data/identity protection for those working in creative roles. For example, one employee working in graphic design reported that AI image generators already produced designs similar to their own, and they were unclear whether AI had been trained on their designs (among others) or not. They felt they were potentially missing out on royalties or work but also felt unable to check how AI had produced its output. At the same time, stakeholders reported this concern was also a barrier to them using AI in supporting their own work, as they were concerned about potentially infringing the copyright of another artist. Other stakeholders spoke more generally about the risk to musicians and actors having their voice or appearance ‘stolen’ by AI and used to generate commercial content. As a result, stakeholders called for strong regulation to protect those working in the creative industries from exploitation, as well as protecting artistic spaces.

Stakeholders flagged a number of AI skills currently in demand in their industry. Key skills around interacting with AI, especially prompt engineering and prompt writing, were noted in fields like graphic design. This was an important skill for using AI effectively, but also for building confidence and greater awareness of the limitations of AI. Skills in understanding how to use AI responsibly and safely were also considered crucial, for example, knowing what types of information can and cannot be used in Large Language Models, and being able to evaluate AI outputs, such as spotting AI hallucinations.

“Probably the most useful thing is being able to spot [AI] hallucinations.” – Employer, Film and TV owner / director

However, stakeholders also felt a number of key AI skills were missing or in short supply, for example, skills to understand the legalities and rights of AI outputs, which they largely attributed to ambiguous existing legal frameworks. Stakeholders also felt there was a lack of skills to understand how to implement AI responsibly.

“The key thing that we found when educating our employees is around how to use it safely, the types of information that they might have access to that we do not want to go into large language models, … and the context in which it’s appropriate to use it as well.” – Employer, Images

One employee noted there was a tradition in fields like art and fashion to rely on young people and new starters to fulfil roles like archiving or gallery supervising, which were considered ‘rites of passage’ or traditional entry points into these fields. If AI replaced these roles, these entry points would need rethinking.

6.3. Future expectations - 2 years

When asked about expectations for the future, stakeholders noted that many of the points raised in the public dialogue were already underway and would likely continue in the next two years.

Stakeholders noted that the fitness industry had already started integrating AI, such as to analyse personal running data or nutrition. One stakeholder noted that this was putting some fitness clubs at risk of replacement from personal AI trainers. However, they suggested fitness clubs may have to develop skills to integrate AI into their offer, reflecting clients’ changing demands and expectations.

“There’s already stuff like [AI] personal trainers. It’s putting [fitness clubs] into jeopardy, but I need to stop looking at it from such a negative viewpoint because I did actually think about using that sort of thing and it would be really beneficial for the individual.” – Employee, Music orchestra operations manager

Stakeholders also discussed the ‘death of cinema’. They felt this was more strongly linked to the rise of streaming services than AI, but they noted that AI facilitated lower-quality art (not only in film, but other art forms as well) and so was making the situation worse.

However, there was recognition that AI may have immediate benefits, such as reducing costs and improving efficiency for businesses. For example, some businesses might use AI to bring advertising and marketing in-house, or to enhance concept generation, while others might use it to automate processes. These were also viewed as continuations of trends that are currently underway. This assumed that workers would have developed the necessary skills to use such AI tools at work, particularly skills like prompt engineering.

“For things like HR, finance, all kinds of processes that every business has, which are often done at the moment by email or spreadsheets or that kind of thing, AI is perfect, and we’re using AI to automate those processes and to assist with that work. And that’s not specific to the business that we’re in. It’s not specific to leisure, it’s every single company.” – Employer, Images

6.4. Future expectations - 5 years

When considering the 5-year timeframe, stakeholders expected legislation and legal frameworks to be more prominent. They felt that having such frameworks in place in five years was optimistic, given the lag between technological development and legislation, but they anticipated that skills around AI legal matters would be more developed (both among governments and industry). Stakeholders suggested the lack of certainty around legislation and legal frameworks was a key barrier to adoption of AI. Therefore, the development of these skills over the next five years would be crucial to widespread AI adoption. One stakeholder expected – at the time of interview – this to be driven by the EU, with US and UK markets following the EU’s lead. Therefore, they believed a key skill required in the UK would be understanding the developments of EU AI legislation and frameworks so the UK could keep up to date.

“Legislation and frameworks always lag behind innovation. I think anything that’s less than 5 years is probably optimistic. It’s going to be, in my view, the Europeans that are going to solve this first through the frameworks they’re currently working through, and then it will filter down to North America and UK as we follow their lead.” – Employer, Images

In addition, stakeholders expected basic use of AI to be part of the curriculum in primary and secondary schools within the next five years, reflecting the anticipated wider implementation of AI and the time taken to change school curricula. There was some surprise at the finding from the WP3 ‘Rapid Evidence Review’ that the UK lagged behind North America and parts of Asia in this respect. According to stakeholders, education would include not only interacting with AI, but also how to use it safely and responsibly.

In commercial settings, stakeholders expected AI would impact TV, film and streaming in various ways. For example, one stakeholder expected AI to change the funding models of streaming services by allowing personalised product placement in programming based on viewers’ search histories. Another stakeholder suggested AI would enhance CGI so that it could be used more easily alongside actors or gamers.

“One of the things with more personalised content, which I think might happen, is product placements, where tv shows and films have sections where product placement can go, and depending on who’s watching it, AI decides what posters are in the background, what drinks people are drinking, foods people eating.” – Employer, Film and TV owner / director

6.5. Future expectations - 10 years

When thinking about the 10-year timeframe, once again, stakeholders were more uncertain about what the use of AI might look like, and the skills that would be needed.

Some stakeholders suggested that, despite the benefits of humans working alongside AI, AI may fully replace certain artistic roles within the next 10 years. In such cases, stakeholders predicted an increase in content (such as for programming, advertising, and video games), but a loss of creativity, with AI expected not to be able to generate anything genuinely new. One stakeholder speculated that some businesses may even use AI to generate ‘workers’ with diverse characteristics and use this to misleadingly claim improved gender or ethnic diversity. This suggested that consumers would need the skills to detect the use of AI to avoid being misled by ‘AI-washing’ and to distinguish original from generated content (unless companies were compelled to declare use of AI or include a watermark).

On the other hand, some stakeholders felt there could be ‘tech burnout’, a rejection or limiting of AI use, and a return to more human interaction. These stakeholders suggested this would be a development of trends in some areas such as dating and group fitness, where the focus is increasingly on in-person interaction rather than digital interaction or reliance on algorithms.

“You can really see the rise [of human connection] going on in the dating industry at the moment, … apps like Thursday, where the app only opens on a Thursday, and then you can just go in and the rest of the time it’s all built on in-person dating. Running clubs are also seeing a bigger rise.” – Employee, Film and TV events producer

6.6. Feasibility of public expectations from WPA

Stakeholders noted that many uses of AI in the scenario developed in the public dialogue already existed but could become more widespread over the next 10 years. Many were comforted that the public expressed similar concerns about job losses and risk posed by AI to those in the creative industries.

Stakeholders also felt five years was an optimistic timeframe for AI legislation and frameworks. However, there was widespread agreement that AI legislation was key to business confidence in adopting trustworthy AI and would therefore underlie the feasibility of many aspects of the public dialogue scenario. Stakeholders felt that legislators needed to develop their knowledge and understanding of AI, and the industry should contribute to shaping legislation, particularly around safe, responsible, and ethical use of AI.

“The lack of legislation and frameworks is preventing adoption, particularly within a commercial setting, because there’s potential legal risks and exposure that people aren’t aware of at the moment.” – Employer, Images

While recognising that AI legislation would be challenging given the breadth of AI technologies and potential applications, one stakeholder suggested certain aspects of AI may become easier to audit and regulate. This stakeholder expected that Large Language Models would tend to become more specialised over time, as they become increasingly refined for particular settings. They felt this specialisation may make it easier to train models ethically, as developers would have to be more discerning about the training data, and thus make the model easier to audit and regulate.

At the same time, there was scepticism that AI would become more ethical over time, with market pressures being the main barrier. For example, these stakeholders did not think there was a commercial incentive for using AI to improve artists’ rights to ownership of their work, even though AI technology could in principle help to detect plagiarism more effectively.

“We have clients actually asking us to use AI for music, for adverts and films and things. They know that it’s cheap. We don’t need to hire a composer, we don’t need to record it, we don’t need to get somebody to perform, which involves licensing royalties, and [clients] know it exists and they want that to keep budget down.” – Employee, Music orchestra operations manager

Similar to stakeholders in the smart devices sector, some stakeholders suggested that a tax on companies that use AI could create monetary support for the creative industries threatened by AI.

“I really like the idea of a centralised fund, or essentially a tax on AI companies. That would then go into a centralised creative support pot that could then be used to further the creative industries.” – Employer, Images

The final point raised by stakeholders was around education and training, with some suggesting it would be most useful to focus education and training on early years and young people. These age groups would have more time to build skills and could then disseminate their knowledge to older generations at home and in the workplace. One stakeholder reflected that this seemed to work effectively with other technologies, such as mobile phones and social media, where parents and grandparents might ask their children or grandchildren about how to use new technologies, rather than using more formal training resources.

6.7. Conditions for acceptability

The key conditions for acceptability, according to stakeholders in the leisure and entertainment industry, involved greater knowledge about the advantages and disadvantages of using AI, a greater understanding of its limitations, and robust legislation and regulation around copyright, IP, fair pay, and fair use.

  • Protect the rights of workers in creative industries: Stakeholders highlighted the need for government to protect physical artistic spaces to ensure physical, human interactions remained a part of the creative industries. They also suggested trade unions would have a crucial role to play in protecting worker rights against the detrimental impacts of ‘democratising creativity’. The main concern was that the market would be flooded with low-quality, cheap, AI-generated outputs, making it harder for those in the creative industries to make a living. However, even if artists and AI existed alongside one another, stakeholders emphasised the importance of skills to distinguish between AI and non-AI outputs in terms of originality, diversity, and quality.

“All of a sudden, if every Tom, Dick and Harry can do creative stuff by using AI to do it, then it starts to flood the market and becomes less and less interesting, which then leads to this loss of creativity.” – Employee, Film reviewer

  • Support for older or more vulnerable workers: There was widespread concern among stakeholders about job losses among older people or more vulnerable groups, who may be less likely to use new technologies or find it more difficult to adapt.
  • Establish frameworks on responsible and ethical use of AI: A key recommendation for policymakers was the development of industry standards and a list of approved training courses, akin to those provided by the National Cyber Security Centre for cyber security, aimed at reducing barriers and enhancing awareness of AI. Additionally, there was a call for increased clarity surrounding the code of ethics governing AI, including detailing who is responsible for regulation, how it is monitored, and whose perspectives are incorporated.

“If there is something like that [for AI] that can be pushed out, then that would massively help. It would mean that as an employer, I don’t have to tender and go around looking at private suppliers … It would lower the barriers, raise education across the board, and these skills would then be transferable because you would have done them at a very general level rather than a company-specific level.” – Employer, Images

  • Focus education and training on general transferable skills to boost confidence in adapting to AI: Stakeholders suggested an educational shift towards fostering general transferable skills, such as critical thinking, across the population so that people could feel more confident in using AI and evaluating its outputs. However, they also identified organisations as playing a critical role in informing all employee levels about AI development, promoting its safe and responsible use, and offering necessary training to staff.
  • Encourage global cooperation on AI governance: Recognising that AI is not confined to the UK, stakeholders felt there was a need for a global governing body for AI, whether convened by governments or independent entities, to oversee legislative, regulatory, and ethical frameworks for AI development.

6.8. AI in leisure and entertainment – scenario

The visual scenario and table below summarise how both the public and stakeholders expect AI in leisure and entertainment to develop as well as the skills necessary to support this. Over the next two years, the public expects AI to be increasingly integrated into the creation and delivery of media and entertainment. This shift raises concerns about the impact on human creatives’ ability to compete, thereby threatening creative skills. Copyright and intellectual property are major concerns among stakeholders, both in terms of artists’ rights being ignored by AI systems and users, and as a barrier to adoption among artists themselves.

Over the next five years, the public anticipates AI will significantly impact the leisure and entertainment sector, generating a wide range of content and potentially leading to a divergence between cheaper AI-made media and more expensive human-created content. While this ‘democratisation’ of creativity may lower barriers to content creation, it could also devalue professional creative skills, saturate the market with AI-generated content, and erode creative abilities due to overreliance on AI tools. Stakeholders expressed similar concerns but noted that consumers may accept lower quality outputs and come to expect more personalised content over time.

Finally, over the ten-year horizon, participants anticipated a scenario in which increasingly sophisticated AI-generated content blurs the lines between human and machine-made media. This could heighten the need for robust regulation and transparency measures to keep pace with AI’s rapid advancements. Stakeholders thought this was a possibility, but also felt that consumers may feel ‘tech burnout’ and start to reject or limit AI use, returning to more in-person leisure and entertainment options.

Figure 6.1: Visual scenario depicting AI in leisure and entertainment

Table 6.1: Summary of key themes for AI in leisure / entertainment

Summary of key themes

Capability for skills development
  • Interacting with AI: Stakeholders emphasise the importance of skills to interact with AI, including prompt engineering and understanding AI’s limitations.
  • Safe and responsible use: There is a growing demand for skills to use AI responsibly and safely, particularly given concerns around job losses in creative roles.
  • Legal understanding: The need for an understanding of legalities and rights regarding AI outputs is highlighted, pointing to gaps in current legal frameworks, particularly around copyright and intellectual property.

Skills Overview
  • Prompt engineering: Skills in prompt engineering are crucial for effective AI use, particularly in design phases.
  • Understanding legal frameworks: Understanding the legalities of AI outputs is seen as a priority due to ambiguous existing legal frameworks.
  • Quality assurance: There is a focus on developing skills to evaluate AI outputs, such as spotting AI hallucinations.
  • Responsible AI: Stakeholders note a lack of skills to implement AI responsibly, especially concerning data use in Large Language Models or generative AI.

Opportunity
  • Efficiency and effectiveness: AI integration into leisure and entertainment can enhance business efficiency and cost-effectiveness.
  • AI integration already underway: Various leisure and entertainment areas are already seeing AI integration, indicating potential for broader adoption and skills development.
  • Automation: AI can assist in automating processes in business functions across all sectors.
  • Enhancing content: Stakeholders see AI’s potential for enhancing CGI and personalised content in TV and film.

Challenge
  • Job replacement concerns: Job replacement is a significant concern, with fears of AI diluting creative roles and replacing project management tasks.
  • Barriers to adoption: Copyright, intellectual property, and data protection present barriers to AI adoption in creative industries.
  • AI developments outpace legal frameworks: Rapid AI developments outpace legislative frameworks, creating legal risks for artists or flooding the market with AI-generated content that outcompetes human-generated content.
  • ‘AI-washing’: The risk of ‘AI-washing’ and market pressure for cheaper outputs challenges creative industry integrity.

Feasibility
  • Legal protections: Stakeholders agree on the necessity of AI legislation for widespread adoption, seeing it as a key safeguard for artists and consumers.
  • Establishing responsible AI frameworks and governance: Developing AI expertise among legislators and industry players is crucial for developing responsible AI frameworks and governance.
  • Auditing AI models: Stakeholders see potential for Large Language Models to become more specialised and easier to regulate and audit.
  • Ethics in the face of market pressures: Scepticism remains about AI becoming more ethical due to market pressures.

Policy Impact
  • Protecting rights: Protection of creative industry rights is essential, with trade unions expected to play a key role.
  • Raising awareness of ethical AI: Frameworks for ethical AI use and approved training courses are needed to lower barriers and raise awareness.
  • Transferable skills: Education focused on general transferable skills is recommended to boost confidence in adapting to AI.
  • Global cooperation: Global cooperation on AI governance is necessary to ensure fairness and consistency across markets.

7. AI in professional services

7.1. Overview

 Key findings

  • AI offers significant potential for improved efficiency, productivity, and profitability in professional services. Stakeholders valued its ability to automate data-heavy tasks, enhance research and creative processes, tailor learning materials, and support business planning.
  • Ethical concerns around AI implementation, including ‘AI-washing’, were highlighted. Risks of losing key skills and over-relying on AI due to rapid implementation without proper consideration were noted. Skill gaps exist in using, understanding, and evaluating AI, including prompt engineering and quality assurance.
  • In two years, stakeholders anticipated the increased development and introduction of AI solutions by businesses of all sizes. Improved prompt engineering skills and growth in training programs were expected.
  • Within five years, skills gaps at senior levels due to over-reliance on AI by younger professionals and slow uptake by more experienced professionals were anticipated. Changes in recruitment practices, such as less reliance on traditional CVs and cover letters, were also expected. The need for technical skills to train AI systems, understand AI’s limitations, explain AI to clients, and replicate AI’s decisions was highlighted.
  • Across the 10-year horizon, AI was expected to be further embedded in business management, strategy, and recruitment, with stakeholders noting the importance of balancing human management with AI use.
  • Key implications for policy included the need for ethical AI implementation guidelines to mitigate AI risks related to data security and protect vulnerable groups. Promoting upskilling and training, particularly for older generations, was considered crucial.

7.2. Current perspectives on AI in professional services

‘Professional services’ covers those working in areas such as accounting, finance, law, consultancy, and information technology. Stakeholders in these areas were enthusiastic about the potential for AI to improve efficiency and productivity but emphasised the importance of ethical and considered implementation of AI.

Stakeholders expressed shock at the finding from WPC (the Employer Survey) that 6 in 10 businesses were not using or planning to use AI and highlighted a range of current uses for AI in their businesses, as well as benefits to business efficiency and productivity as a result of using AI. Examples included:

  • using AI to automate tasks involving large volumes of data
  • using AI-powered search functions to find and summarise relevant information at speed
  • using text-to-design tools to streamline creative processes
  • using AI to tailor learning and development materials
  • using AI to support early-stage business planning, draft reports and write code.

In addition to improved efficiency and productivity, some stakeholders reported increased profits and a competitive advantage. Stakeholders emphasised the benefits of AI in the early stages of various tasks, but felt people should still play a key role at the later stages, especially relating to quality assurance. Others noted that they would be open to using AI more often but lacked the knowledge to exploit more advanced functionality.

“I’m not using [AI] to its full capability where it can automate workflows and build out all these complicated things on cloud infrastructure. I’m not doing any of that. I don’t know anything about it.” – Employee, IT/tech

However, stakeholders also identified risks around the ethical implementation of AI and its impact on the professional services workforce. For example, one stakeholder considered ‘AI-washing’ (where businesses might overstate their use of AI for marketing purposes without genuine AI integration) a challenge for the overall sector given its potential to mislead customers and negatively impact customer perceptions around the quality of AI-enabled services. There were also concerns that, if AI was implemented too quickly, without proper consideration, businesses and the sector overall would be at risk of losing key skills and over-relying on AI.

“I think that’s the key thing with AI: you need to know what the expected result is. If you throw a document into it and it summarises it very quickly, you go, ‘oh, yeah, that’s the summary then’. No, it’s not. It’s what AI thinks the summary is. Unless you’ve read that document and fully understand it, you don’t know whether [the summary] is right or wrong.” – Employer, CEO

The key current skills that stakeholders identified for their sector revolved around using, understanding, and evaluating AI. Prompt engineering was a core skill for interacting with generative AI systems and being able to generate the desired output, while quality assurance skills were crucial for being able to critically evaluate or validate the outputs. Skills around understanding AI ranged from technical skills (such as programming or coding AI for use in their business) to more non-technical or client-facing skills (such as being able to understand the broad differences between different types of AI, understanding the appropriateness of using AI in different contexts, or being able to explain the use of AI and its outputs to clients and customers).

7.3. Future expectations - 2 years

Over the next two years, stakeholders anticipated an increasing number of businesses, including smaller enterprises, developing and introducing AI solutions. They expected prompt engineering skills to improve with time and familiarity, although some suggested this skill might become less critical as AI systems become better at interpreting prompts and recognising a wider range of voices and accents.

Knowing how to use AI effectively was seen as a key skill for the future. Stakeholders expected a growth in training programmes and the development of in-house AI tools to support this upskilling.

“It’s important when people with IT skills and that sort of expertise are training non-specialists in the use of [AI], that there’s sort of a go-between so that somebody who knows what the actual job is and how [AI] would adapt to that job.” – Employer, CEO

Stakeholders also predicted the continued commoditisation of AI, with increased accessibility for SMEs to AI technologies and products. However, there was both uncertainty and interest in how AI adoption would vary by business size, age of user and confidence, with an expectation that older business owners may be less likely to adopt AI solutions due to lack of familiarity and confidence with using AI. Furthermore, stakeholders suggested businesses might need different AI skills depending on their growth ambitions. For example, those adopting AI to pursue high growth may require more technical, innovation skills, while those adopting AI to build capacity may focus on non-technical skills around the appropriateness of using AI.

7.4. Future expectations - 5 years

Across the 5-year timeframe, stakeholders expected the potential for significant revenue increases due to greater efficiency and lower fixed costs enabled by AI, although this was coupled with concerns about potential layoffs and a shrinking workforce in professional services.

“If a client is willing to pay significantly less, have the work product delivered significantly faster, and then accept there’s a 5 to 10% risk of error here [compared to a manual review], we’re willing to take that risk.” – Employer, CEO

They also raised concerns about skills gaps emerging at senior levels as experienced professionals retire and younger generations become more reliant on AI. They suggested an overreliance on AI might lead to “laziness” among younger professionals. For example, one employee working in pensions suggested younger professionals were relying on AI to generate pension calculations without taking the time to understand how those calculations were done. They felt this in turn might affect job progression and the development of senior-level skills in the sector, such as being able to explain decisions or validate AI outputs.

“Within the actuarial world there are so many fine details that you need to be able to understand yourself and so many calculations, regulations, things like that. And we’ve seen it in the newer grads that are coming in, they don’t have the want to understand what’s happening in the calculations.” – Employee, Business manager

The impact of AI on recruitment practices was also discussed. Stakeholders noted that use of AI was increasing among candidates for writing cover letters and CVs, and among recruiters for filtering candidates. Those involved in recruitment suggested the risk of hiring underqualified candidates who had used AI at graduate level would be mitigated by using in-person evaluations and assessment centres. There was also some uncertainty about the impact of AI on the recruitment of more experienced hires, where assessment centres are typically not used.

“But with experienced hires, if they were using AI, I’m not quite sure how we would mitigate that because we don’t have assessment centres for experienced hires.” – Employee, Business manager

As a result, stakeholders suggested that the widespread use of AI may lead to an overhaul of recruitment practices in their sector, such as less reliance on traditional cover letters and CVs, and greater use of video interviews. Some stakeholders suggested AI could also play a greater role in analysing video interviews, helping to mitigate interviewer bias, while recognising this would largely depend on how the AI was set up. These findings also emerged in WPA (the public dialogue).

“I’m finding that at the moment things like cover letters are already falling by the wayside. There will be other ways of doing recruitment.” – Employee, Solicitor

In terms of skills, stakeholders expected these to be an extension of current skills, reflecting the expected spread of AI in the sector. Key skills that were identified included technical skills to train AI systems for specific uses and purposes within professional services; enhanced understanding of how AI works, AI’s limitations and the sources and validity of AI inputs and outputs; improved ability to explain AI to clients and customers, especially around addressing concerns and building trust; and retaining the skills to be able to replicate decisions made by AI. This last point was highlighted as a particular challenge by one stakeholder, who felt that AI’s strength often lies in being able to analyse large volumes of data where transparent decision-making is inherently more difficult, for example, the thousands of decisions made by AI for a self-driving vehicle.

“The bit where humans find it difficult to understand is exactly where AI is strongest. It’s figuring out those patterns and doing predictions based on a massive amount of data that humans would just find it unbelievably difficult to do. Essentially, that’s what the AI going into vehicles is doing.” – Employer, CEO of management services

Finally, although stakeholders felt AI would be more commonly used to develop training qualifications, they felt skills to deliver in-person training would remain essential in the 5-year timeframe.

Stakeholders also suggested that AI may become able to train and tailor itself, although there was a risk of AI amplifying its own biases, with potential detrimental impacts on fairness, transparency, and diversity over time.

“In 2 to 5 years, there will come a point where AI is training itself.” – Employee, IT

7.5. Future expectations - 10 years

Once again, stakeholders found it difficult to imagine the situation in 10 years’ time, but they expected certain current trends to continue. First, they expected the continued commoditisation of AI, leading to cheaper AI technologies and greater access to AI among SMEs. This in turn would lead to greater adoption of AI and more on-the-job learning, building workers’ AI skills through practice and familiarity.

Second, they anticipated that AI would be increasingly common in social care, for example, helping people to live independently or tackling loneliness or grief (with the latter referred to as ‘death tech’). As this is a sensitive area, they expected skills around AI strategy and governance to become more developed as implementers better understand the uses and limitations of AI in care settings.

Third, they expected AI to be increasingly embedded in businesses’ approaches to management, strategy, and recruitment. For example, one employer felt there was potential for AI to enhance business forecasting, such as predicting productivity increases if individuals undertook certain types of training. Another employer suggested AI might be used to manage recruitment and work output overseas, or to manage investments and stocks. In these cases, business leaders would need to develop skills to balance people management (in terms of their training, recruitment, and productivity) with use of AI, which may include better understanding how AI reaches its decisions.

7.6. Feasibility of public expectations from WPA

Overall, stakeholders felt the elements in the scenario from the public dialogue were realistic, with feasible timings.

The feasibility of customer service and recruitment elements would depend on the balance between business needs and customer or candidate expectations. For example, stakeholders noted that chatbots were efficient for businesses to address and triage incoming queries, but were not popular with customers, echoing sentiments expressed by stakeholders working in smart devices, mobile devices, and telecommunications (see chapter 5).

“I feel like AI needs to improve, particularly from the customer service perspective, because now when I get an AI chatbot, I basically just write, please transfer me to a human. Unless you’ve got a very basic query, it’s just not going to help.” – Employee, Lawyer

In terms of recruitment, stakeholders felt the use of AI in writing CVs and cover letters would become more widespread but may in turn cause a shift in how recruitment is carried out, with less emphasis on cover letters and CVs, and more emphasis on assessment centres.

In addition, stakeholders emphasised the core skills of prompt engineering and being able to evaluate AI outputs as underlying the successful use of AI in professional services.

7.7. Conditions for acceptability

Stakeholders felt future scenarios involving AI should involve AI working alongside people, recognising the strengths and limitations of both.

  • Protect the rights of workers: A common expectation was that trade unions would play a crucial role in safeguarding worker rights amidst AI developments. Stakeholders highlighted that some workers would have to retain the necessary skills to intervene effectively should AI systems fail or malfunction.
  • Establish protocols to address data safety and privacy concerns while using AI: Data privacy regulations like GDPR and the proliferation of misinformation and disinformation, particularly in sectors such as law, health and research, were key concerns. Stakeholders expected the risks of deepfakes and data leaks to pose significant threats to public sector organisations like the NHS and education institutions. Addressing these challenges would be a crucial condition for acceptability. One stakeholder noted that the banking sector had established protocols for anonymising personal data in decision-making processes, thereby mitigating the risk of data leaks. They suggested this could serve as a model for other sectors.
  • Develop specific ethical frameworks for the various uses of AI: Wider consensus on the ethics of AI was another key condition for acceptability, although there was scepticism about the feasibility of a singular ethical framework effectively addressing the diverse range of AI technologies and applications.

“AI is a huge church. There are so many things that get covered by that, from flying planes, to calculating whether you’re going to die in an actuarial office, to helping you train, and dealing with bereavement. Is it possible to put an ethical framework around that? Can we define what that looks like?” – Employer, CEO of management services

These frameworks would help to govern the deployment of AI in different sectors, particularly sensitive sectors like defence, where stakeholders felt that unethical deployment of AI could potentially and rapidly facilitate the emergence of ‘big brother’ surveillance states.

“The use of AI in the defence sector is actually a very scary thing because that actually moves really, really, really quickly. And international law never catches up with that.” – Employer, CEO of management services

  • Upskill the workforce, in particular older generations: Stakeholders highlighted the importance of upskilling the entire workforce to understand AI, its associated risks, and ways to use it effectively. They suggested this upskilling should target older generations, as they felt younger generations were more likely to upskill via formal education or personal use of AI-enabled technologies.
  • Government taking the lead in mitigating AI risks: In terms of the role of government, stakeholders felt government leadership was most important in mitigating AI risks, particularly in areas like data security, protection of vulnerable groups, and the impact of AI on the public sector.

7.8. AI in professional services – scenario

The visual scenario and table below summarise how both the public and stakeholders expect AI in work and careers to develop as well as the skills necessary to support this. Over the next two years, participants expected more companies to begin to use AI in hiring processes, especially for initial sifts and first round stages. They were concerned about the level of bias that might be perpetuated by the AI systems involved and also how this could increase the gap between those who use and those who do not use AI already. Stakeholders noted that AI was already quite common in recruitment, and that overuse by applicants may trigger a change in recruitment practices, with a greater emphasis on in-person interviews and assessment centres. They also noted that AI would increasingly take care of basic processes and management tasks, potentially threatening the traditional roles of middle managers.

Across the next five years, some participants expected there to be a major shift in the job market, and the potential decline of the recruitment industry. They felt that a human should still be in charge to have the final say over hiring and envisaged the potential use of AI to check fairness in the AI systems which are used in hiring. The main skill they expected to be linked to this development was critical thinking, both on the part of the hiring manager to recognise if an applicant has used AI, and on the part of the candidate to navigate a potential AI test involved in a hiring process. Critical thinking would also be a key skill according to stakeholders, particularly helping business leaders to consider the appropriateness of using AI and the types of data that should and should not feed into AI models.

Within 10 years, participants felt that regulation around data processing may occur. They were hopeful that AI would be used as a tool which works alongside not instead of people. However, they saw that customer service jobs would be in significant decline by this point, with AI likely to replace them. Stakeholders also anticipated greater AI integration in various professions, including those outside the professional service areas they were in.

Figure 7.1: Visual scenario depicting AI in work / career

Table 7.1: Summary of key themes for AI in work and careers

Summary of key themes

Capability for skills development
  • Interacting with AI: Stakeholders emphasise the need for both technical and non-technical skills for interacting with AI, including prompt engineering and understanding AI’s ethical implications.
  • Quality assurance: Skills to critically evaluate AI outputs and ensure quality assurance are crucial for maintaining accuracy and quality.
  • Appropriate use of AI: Emphasis is placed on understanding the appropriateness of AI in different contexts and explaining its use to clients.

Skills Overview
  • Viewing AI from different perspectives: Understanding AI from a technical and client-facing perspective is essential to using AI appropriately.
  • Transparency and accountability: Skills to replicate and validate AI decisions are increasingly important for maintaining transparency and accountability.
  • Explaining AI: There’s a need for skills to explain AI operations to clients, building trust and addressing concerns.

Opportunity
  • Efficiency and productivity: AI can significantly improve efficiency and productivity in professional services, with the potential for increased profits and competitive advantages by integrating AI.
  • Decreasing costs: Stakeholders expect the cost of AI technologies to decrease over time, leading to greater AI adoption, especially among SMEs.
  • Transforming recruitment: AI may enhance recruitment processes, reducing biases and improving candidate evaluations.

Challenge
  • Overreliance on AI: Rapid AI adoption could lead to an overreliance on AI, ‘laziness’ among younger generations of workers, and a loss of key human-centred skills.
  • AI developments outpace legal frameworks: AI developments outpace legislative frameworks, posing legal and ethical risks.

Feasibility
  • Building trust and confidence: Stakeholders see AI adoption as realistic, with the need for legislation to build trust and confidence to achieve this.
  • Balancing AI use and customer expectations: The balance between AI efficiency and customer expectations is crucial for ensuring a smooth transition to more AI-enabled working practices.

Policy Impact
  • Protecting worker rights: Protecting worker rights is essential, with trade unions expected to play a key role.
  • Data security and privacy: Data security and privacy protocols must address concerns like GDPR and misinformation.
  • Ethical AI frameworks for different contexts: Ethical frameworks for AI use are needed, though challenging to implement universally given the range of AI applications.
  • Role of government: Government leadership is vital in mitigating AI risks, particularly for vulnerable groups and data security.

8. AI in Education

8.1. Overview

 Key findings

  • Stakeholders were unsure about the role of AI in education, recognising many benefits it could offer for teachers in terms of freeing up time, but also several challenges to implementing it and concerns surrounding data protection and the key skills development of children. They did not believe that teachers currently have adequate skills, such as technological skills and critical thinking, to be able to use AI and identify when it is being used.
  • Across a 2–5-year horizon, stakeholders expected that AI would start to become a helpful tool for teachers, but also that protections would have to be put in place by this time to ensure AI is not unleashed in education without control. They expected that training would be implemented within this time to enable teachers to develop the necessary skills to incorporate AI into their teaching.
  • Within 10 years, stakeholders expected that the e-safety curriculum would be expanded to include specific AI-related content but expressed concern that this timeframe is already too long.
  • Policy implications include the need to train teachers on using AI but also ensure that AI is implemented in a safe and standardised manner in schools.

8.2. Current perspectives on AI in education

Stakeholder perceptions of AI use in education were varied: they recognised the benefits it offers, but also accepted that incorporating AI in schools raises several issues and challenges. The types of AI currently used in education by teachers included automated subtitles, translation, and idea generation. More specifically, educators discussed using AI to summarise work and provide students with a creative output to work from, whilst students have been found to use it to produce submissions. Most stakeholders referenced the need to check submitted work to ensure it is written by students themselves rather than generated by AI. One stakeholder mentioned how they have turned this issue into a teaching point, encouraging students to think critically rather than rely on AI-generated outputs.

“I’ve used it myself to challenge students. So, to generate material and say, okay, what is this missing? What can you teach the AI?” – Employee, Schools/education

In particular, stakeholders referenced how AI can help provide the starting point for a project or area of work. They felt positively about using AI for idea generation, enabling staff to come up with concepts for work. They saw this as AI performing the role of a personal assistant to the teacher, rather than replacing their role entirely. The benefits of this use were recognised to be time saving and efficiency, whilst also enabling staff to be creative with their lesson plans.

“That would be across brainstorming, sort of idea generation, drafting, redrafting… we would use generative AI as a personal assistant” – Employer, Schools/education

Overall, educators expressed a low level of confidence in their ability to use AI at work due to a perceived lack of skills. They felt inadequately equipped to use AI as they had not yet received enough training or guidance on using AI safely and appropriately within the education setting. Stakeholders discussed their concern that students’ use of AI might undermine their skills development, as AI would not require the use of the same skills that traditional teaching would. However, they also recognised that using AI could in turn demand that staff and students develop critical thinking and analysis skills. This was considered important so that staff and students alike are able to judge the accuracy of AI outputs and determine whether they are to be trusted.

“We need to get the skills and understand what it can do well and what it can do badly.” – Employee, Research and development

Other issues raised about the implementation of AI in schools concerned operational constraints. E-safety and GDPR policies were seen to slow the introduction of new technology at school, because they require protocols to be in place that limit the risk of data-protection violations. As such, stakeholders expected this process to be relatively slow, as schools and authorities cannot keep up with the rapid pace of AI development well enough to put policies in place efficiently.

Stakeholders also raised e-safety issues in a more practical sense, seeing it as a significant risk that students lack AI safety skills. As AI safety is not currently on the curriculum, students are not being formally taught how to use AI in a safe and appropriate manner. Stakeholders were concerned about ensuring that AI would be used responsibly by staff and students alike, and that this issue may persist as AI develops and people struggle to keep up.

“Like e-safety, we’ll have to kind of be catching up with ourselves to kind of make sure that we’re using it responsibly. The children know how to use it. And then as there are new developments, we’re just a bit on the back foot and that might stay that way for a little bit.” – Employer, Schools/education

Curriculum changes were also discussed, with stakeholders considering whether students should be encouraged to employ AI in their work when appropriate, depending on their age range. Teachers generally agreed that younger age groups should have less exposure to AI, but that it was more acceptable to incorporate AI into teaching for older students in schools and further education institutions, and in turn permit students to use AI in certain circumstances. However, this potential transformation of the teaching landscape called into question the issue of assessments, and whether they should be adapted to ensure that students are being tested on their skills and understanding, not their ability to use AI. For example, this could include changing formats to oral exams, where students would speak about the work they’ve done to show their understanding.

Overall, opinions about the use of AI in education were situational, with educators finding it to be a useful tool, but potentially also a hindrance, and a risk to students’ skills development.

8.3. Future expectations - 2 years

Despite being unsure about the potential negative implications of incorporating AI into teaching, stakeholders expected its use to increase within two years. Over this period, stakeholders in the education industry anticipated AI primarily assisting with administrative tasks: streamlining data input, generating reports, and potentially even tailoring support to individual student needs by, for example, automatically creating practice questions. Teachers recognised that AI could free up their time, allowing them to focus more on other aspects of the curriculum, such as the creative and interpersonal aspects of teaching. They did not envisage that this would require the development of any new skills; it would instead draw on elements of basic digital literacy.

“One of the benefits of AI, is freeing up time. Making our lives so much easier in a way. And we could probably focus and concentrate on other things in the classroom.” – Employee, Schools/education

However, concerns lingered about the potential for students to misuse AI for academic dishonesty and the need for robust detection methods. Despite seeing AI as useful for generating information and automating certain tasks, educators felt that the current education system would need to be reshaped immediately to ensure that AI misuse does not occur. This would include providing teachers with a formalised system, combined with the analytical skills themselves, to detect when students are using AI. Such a system is already established in some schools, but formalising the process whereby teachers can carry out these checks was considered a priority.

8.4. Future expectations - 5 years

Looking ahead to five years’ time, the expectation was for a more widespread integration of AI into educational tools and practices. This includes AI-powered learning platforms that offer personalised learning experiences, automated analysis of student work to identify areas for improvement, and AI-driven language translation tools to support students with diverse linguistic backgrounds.

However, concerns about ethical considerations, data privacy, and the potential for AI to exacerbate existing inequalities in education were raised. Teachers called for government-led initiatives to provide training and resources to ensure all students and educators have equitable access to AI and the skills to use it effectively, as well as to ensure that AI is used safely and appropriately within schools.

Stakeholders also believed that, within this 5-year timeframe, issues related to misuse of AI, potentially to cause harm or misrepresent information, should be dealt with as a priority. They highlighted the importance of being proactive about ensuring that protections are put in place so that AI is used responsibly. They did however recognise that this process would not be simple and that the subject is a complicated matter to resolve effectively.

“If nothing is done with, within a 2, 3, 4-year period, it’s going to be very hard to put the genie back in the bottle… there’s always going to be people trying to misuse things, but at least trying to put some sort of guardrails on the side of some of these things does seem like a worthwhile endeavour.” – Employer, Director of NGO

Across the 5-year period, therefore, stakeholders prioritised the need to ‘get ahead’ in terms of AI use at schools, to ensure that it is being used in an appropriate way by staff and students and that privacy and security remains the strongest priority.

8.5. Future expectations - 10 years

A decade from now, stakeholders envisioned a future where AI is deeply embedded within the education system, though not replacing teachers entirely. They expected the focus to shift from suspicion of AI towards teaching students crucial skills for navigating an AI-driven world, placing particular emphasis on critical thinking, digital literacy, and the ability to discern credible information from misinformation. Concerns about the potential impact of AI on human interaction, the loss of creativity, and a widening socioeconomic gap were persistent themes when discussing this timeframe.

Stakeholders called for ongoing dialogue and collaboration between educators, policymakers, and technology developers to ensure AI’s responsible and ethical implementation in education, with a focus on fostering human connection and well-being alongside technological advancement.

Alongside this, educators highlighted the importance of incorporating AI into the online safety curriculum. Despite recognising this process as crucial, and something which is likely to be introduced relatively soon, they were sceptical about how quickly this could occur and felt that 10 years would be a realistic time frame for this.

“The paperwork might be in place for it, but I can’t actually see it being implemented for 10 years.”– Employer, Schools/education

Overall, stakeholders struggled to envisage what may occur in education related to AI in 10 years. However, a central focus of their discussions was placed on ensuring that AI would not become the central component of teaching, but instead a tool which should be used responsibly by staff and students in schools.

8.6. Feasibility of public expectations from WPA

Whilst stakeholders agreed with many aspects of the public scenario, one key area they disagreed with was the transformation of teaching entirely, and the potential introduction of AI teachers. They felt that this would not be a situation that the public or those in the education system would accept, due to the array of different duties and responsibilities that teachers carry out which go beyond basic learning. Additionally, they felt that the cost of introducing AI teachers would be far too high to justify within the 10-year timeline.

“AI teachers will not be the norm in 10 years. We can’t even afford pencils. They’re not going to put money into having AI robots in schools. I think it will be down the line… it’ll be way after I’ve retired.” – Employee, Schools/education

Teachers did not expect that the developments envisioned by the public with respect to AI would be universal. Instead, they saw that AI would be incorporated more in the teaching of adults and older children, with nursery and primary school requiring a different level of teaching and emotional intelligence more attuned to current teaching practices.

“If it [AI] was going to come in, it probably more likely with adults first rather than lower down… I don’t see at its current how it’s going now how that would actually work, because you need a different skill set for the younger ones, definitely.” – Employee, Responsible AI (Curriculum and Course Development)

Therefore, as with other industries, stakeholders lacked confidence in the public’s expectations for AI in education, particularly regarding the speed of developments that could be expected within the education industry, given financial constraints and limited teacher time for training.

8.7. Conditions for acceptability

Stakeholders felt that, to accept the use of AI in education, they required human input to be incorporated at each stage and AI use to remain ethical:

  • Frameworks should be applied to ensure AI is ethically implemented: AI tools must be developed and implemented ethically, addressing concerns about data privacy, bias, and potential misuse for academic dishonesty. Transparency in how AI systems work and how data is used will be crucial to build trust within the education system.
  • Ensuring equitable access and training for all students and educators: regardless of background or location, all students and staff should have equitable access to AI technology and the training needed to use it effectively. This requires government support and funding to bridge the digital divide and ensure inclusivity and that teachers have adequate time to train.
  • Human skills should remain the priority: While AI can automate tasks and personalise learning, it is crucial to prioritise the development of essential human skills, such as critical thinking, creativity, communication, and social-emotional intelligence. AI should complement, not replace, human interaction and guidance in the learning process.
  • There should be continuous evaluation and adaptation of how AI is being used in education: This involves regularly assessing the impact of AI tools on student learning, teacher workload, and overall educational equity, making adjustments as needed to ensure positive outcomes.

8.8. AI in education – scenario

The visual scenario and table below summarise how both the public and stakeholders expect AI in education to develop as well as the skills necessary to support this. Over the next two years, the public expects AI to offer more personalised learning experiences, particularly in self-directed learning. However, according to stakeholders, while AI could help teachers plan lessons and students with homework, practical barriers such as financial costs and technological considerations may hinder its implementation in schools.

In the 5-year horizon, the public anticipates that AI could increase the “digitalisation” of learning, potentially at the expense of socialisation and critical thinking skills. There are concerns that attempts to reduce costs in schools could lead to teachers being replaced by AI-powered learning tools, which could be detrimental to student learning and welfare. Additionally, AI may disrupt job markets, necessitating the upskilling and reskilling of adults, and potentially leading to a refocusing of the curriculum around new and upcoming work streams. Stakeholders were more concerned by the justification for using AI in education, considering that this timeframe is where people should be questioning not whether it is possible to implement AI in education, but whether it is suitable and can be integrated equitably.

Looking ahead to the ten-year horizon, the public expects AI-powered learning to be more integrated into the classroom environment, potentially replacing teachers and posing risks to social skill development. Some envision extreme scenarios where each student has their own screen and AI-generated lessons, with personalised learning plans. However, AI could also create opportunities for lifelong learning, particularly as the population ages. Stakeholders instead felt that AI would not replace teachers, and that instead teachers would need to change and be reskilled in response to the job market.

Figure 8.1: Visual scenario depicting AI in education

Table 8.1: Summary of key themes for AI in education

Summary of key themes
Capability for skills development Critical and creative thinking: AI risks reducing the incentive to critically evaluate or think in creative ways. Ensuring AI is used as a tool to improve and personalise learning, not replace it.

Potential skills loss: Avoiding over-reliance on AI and ensuring that AI does not hinder the development of traditional skills amongst students.

Adaptability: Need for both teachers and students to learn skills as AI is introduced in education and making sure these are embedded in curriculums.

Safety skills: Ensuring that teachers and students are developing the skills to safely use AI.
Skills Overview High capability in self-directed learning: Self-directed learning platforms are already integrating AI for personalised experiences, and AI’s flexibility enables flexible learning.

Lower capability in formal education: Schools and universities beginning to utilise AI to help plan lessons and provide student support, but ability to use AI in the classrooms limited by practicality and cost.

Use of AI for assignments: Students are using AI to complete assignments, prompting concerns about academic integrity and the need for advanced detection methods.
Opportunity Efficiency for educators: AI can assist teachers with tasks like lesson planning, grading, and providing targeted student support, helping to free up teachers’ time for creative and interpersonal aspects of teaching.

Personalised learning experiences: AI-powered platforms can offer personalised learning experiences and identify areas for student improvement.

Inclusion: AI-powered tools can better accommodate diverse learning styles and needs, including for those with disabilities. AI can also support students with diverse linguistic backgrounds due to its translation tools.
Challenge Inequality: Potential for increased educational inequality if advanced AI tools are limited to premium/paid options.

Academic dishonesty: AI potentially enables cheating, allowing students to move up through the education system without developing key skills.

Potential to exacerbate inequalities: Unequal access to AI tools may mean that regional and socioeconomic divides are heightened at schools, with those who have less access to technology falling behind.

Need to update curriculum: Stakeholders expressed the importance of ensuring that e-safety elements of the curriculum be updated to explicitly teach children about how to use AI responsibly.

Data privacy concerns: Stakeholders were worried about the potential risks to data privacy that could be incurred if AI was fully implemented in schools.
Feasibility Financial constraints: Limited funding for AI development in education was perceived to pose challenges to rapid AI integration.

Limited teacher training time: Stakeholders did not believe that there would be adequate time to train teachers to both use AI for their own work and be able to teach the appropriate use of AI to their students.

Consideration of learning ages: AI integration is expected to be more feasible with and appropriate for older students and adults, rather than with younger children.
Policy Impact Equitable access: Ensuring equal access to AI-powered educational tools between income groups, especially in primary and secondary schools. Call for the government to take the lead here on providing training and resources for equitable AI access.

Safe and standardised implementation in schools: Policies should ensure that AI is introduced holistically across the UK education system, and that this is carried out in a safe way.

Curriculum amendments: Adapting curricula to ensure that e-safety content includes AI-specific material.

9. AI in travel / transport

9.1. Overview

 Key findings

  • Stakeholders believed that AI could help to improve safety in transport, providing a tool to enable vehicle operators to perform their roles even more reliably. However, they were concerned about their ability to fully trust AI systems to function in the transport industry without control.
  • Stakeholders were sceptical of public expectations within this industry, finding several of the changes envisaged by the public to be unlikely or unfeasible within the 2- or 5-year timeframe. They did not believe that many new skills would need to be developed across this timeframe, but recognised that workers may need to incorporate AI systems into their work and so develop basic technological skills to do so.
  • Across the 10-year timeline, there was significant concern around the potential skills losses in the industry as workers come to rely on AI to carry out tasks for them. Another major concern related to the potential job losses which might occur as AI replaces certain roles, such as customer service assistants and transport operators.
  • Policy implications related primarily to ensuring that those working in this industry are upskilled or reskilled based on whether their job is at risk or whether they need to use AI to perform their role.

9.2. Current perspectives on AI in travel / transport

Stakeholders had strong opinions on the influence of AI on travel and transport, being largely concerned about security and their ability to trust AI systems. Some did recognise that AI could help to improve safety of transport, where a computer could be more reliable than a human and so reduce the risk of human error. They raised the need for balance between AI and human input, finding that AI can be helpful to assist a train driver, for example, to provide more information and identify obstacles they may not be able to see. As long as drivers are provided with the training to use AI competently, they saw this as a positive transformation of the industry.

“It’s almost upskilling the driver without taking the control away from them.” – Employer, Strategic planning manager

Another positive they recognised was the ability of AI to increase convenience by saving time for the public, for example through transport apps selecting an optimised route, or through automated ticket barriers and ID checks. Stakeholders also believed AI measures would enable staff to save time by removing administrative burdens.

Despite recognising the benefits of using AI in travel, stakeholders did express concern that increased use of AI risks losing skills in certain professions, with AI carrying out tasks people would typically perform themselves. As certain staff, such as customer service representatives and vehicle operators, would no longer be required to perform certain tasks, stakeholders expressed concern that skills levels would decrease among employees in this industry. The issue was therefore raised of an over-reliance on AI leading to the loss of core skills.

“I don’t employ mechanics anymore, I employ fitters. You know, guys who literally unplug that, plug that in on the computer, let the computer do it. That’s not a mechanic. They don’t know how things work. And to me, it’s dangerous.” – Employer, Taxi / fleet hire / servicing

A further stakeholder concern related to skills losses was the potential for loss of jobs in this industry, with AI replacing the need for staff in many of the roles they currently fulfil, such as those in customer service positions, or those operating transport vehicles. In particular, stakeholders discussed how older generations working in this industry may struggle to adapt to using AI in their career.

“If you’re an employee, there’s the fear that you’re going to be replaced, isn’t there? I would imagine this in the back of most people’s minds, that’s what’s going.” – Employee, Delivery driver

However, despite the level of concern presented by stakeholders, employers reflected on the power of the public to push back against these job risks. Stating how much they valued having these roles performed by humans rather than AI, they reinforced their commitment to keep employing staff such as coach drivers and tour guides. They felt it would be unfair for these employees and their families to lose a job they depend on, and are trained for, because of AI. Others also valued the input of tour guides as locals who can provide generational and cultural knowledge in a far richer sense than a computerised system could.

“I’m going to carry on using these people as long as I can. Otherwise, what are they going to do. They’re trained, you know” – Employer, Travel and tourism

Stakeholders also raised how valuable human interactions can be in the travel and transport industry. Maintaining social interaction for those who do not have a strong social network, such as friends and family, was an important consideration when switching ticketing counters to AI-powered machines. Some of these human and real-life interactions form part of the holiday experience, such as collecting paper boarding passes from a holiday company, and this was considered an element of the experience that some people value. Stakeholders felt that AI’s lack of emotional intelligence means it cannot provide the same level of problem solving as human-to-human conversation. In their opinion, humans are more valuable than AI as they are better at responding to and dealing with issues and frustrated customers. As such, they called for balance between the use of AI and the level of human input in the transport industry.

“They actually really enjoy coming. And that is part of the holiday experience for them. And we’d be taking that away from them if they just had to go straight to the airport. They like that interaction before they go.” – Employee, Travel consultant

Overall, there was some hesitancy over the use of AI in travel and transport. Stakeholders recognised its potential to make transport safer, but also the risk of losing jobs, and with them skills, as well as losing the human interaction involved in what is a service industry.

9.3. Future expectations - 2 and 5 years

Stakeholders did not envisage specific developments across the 2- or 5-year timeline separate to those raised by the public in their discussions. They did, however, have reflections on elements of the public scenario related to feasibility and likelihood. Several elements of the public scenario were contentious amongst stakeholders as they did not believe they were likely to occur. These discussions are outlined in section 9.5.

9.4. Future expectations - 10 years

Stakeholders raised concern that AI might lead to a loss of skills for travel staff over this timeline, with AI carrying out tasks for people instead of them doing the tasks themselves. They felt that skills levels among employees in the travel industry would decrease over the next 10 years due to a diminishing need for them. As in earlier discussions, problem solving was once again considered a benefit offered by human input over AI. As people come to rely heavily on AI, there will be fewer people with the skillsets needed to rectify an issue should something go wrong with the computer system. Stakeholders felt that ensuring these skillsets are not lost would remain a necessity, considering that a human would be more able than a computer to identify and fix a problem.

“In 10 years’ time, there’ll be a lot less people with the skills [to interpret a graph] because everyone will be relying so heavily on the AI.” – Employer, Taxi / fleet hire / servicing

Reflecting on the use of AI in air travel, stakeholders discussed that it would eventually be implemented here, to aid communication between air traffic control and aircraft, or to be incorporated into aircraft controls themselves. However, they felt that this would not be a development that would occur soon, but instead across a longer timeframe.

“We don’t really use AI at all to be honest yet. But you know, I know for a fact in 10, 15, 20 years’ time it will be a huge part.” – Employee, Transport driver

As such, there was an appreciation that developments in the transport industry over the next 10 years will not occur universally across all sectors. While stakeholders expected to see significant change in transportation apps and certain forms of public transport across the next 10 years, they felt that visible change in air travel and driverless cars was further away.

9.5. Feasibility of public expectations from WPA

Stakeholders felt that the scenario envisaged by the public was very optimistic. They expected that the developments listed in the public scenario would be of too high expense, despite them being technologically possible. Another issue raised was that the public might not simply go along with new technological developments, and instead yearn for a return to the human element. As such, they felt that certain aspects of the public scenario, such as driverless buses and taxis, would not be likely.

“I don’t think anyone’s going to stump up that kind of cash, so I don’t think it’s going to happen in 5 years” – Employer, Strategic planning manager

Reflecting on the feasibility of travel changing vastly over the next 10 years, with AI implemented in holiday planning and scheduling, stakeholders felt that this would become an issue. They discussed how these experiences are unique and individual, and were concerned about the depersonalisation of travel through AI-powered technology.

“That personalised part of it is just going to go completely… Computer systems don’t have feelings… I just feel that that’s really going to be something that’s going to be a real issue in the future.” – Employee, Travel consultant

Generally, the need for human involvement within both the travel and transport industries was maintained as a key value for stakeholders.

9.6. Conditions for acceptability

Stakeholders focused on ensuring a balance between the use of AI and the use of human employees in the travel and transport industry:

  • AI to be an assistant to human workers: AI should not be taking over jobs but should instead be a helpful tool that can be used by drivers or customer service workers to increase safety or efficiency.
  • Human interaction to remain a possibility for customers: Stakeholders felt it was important to ensure that customers had the option to speak to a human when using a transport service as this human interaction could be very valuable to some.
  • AI systems need to be demonstrably reliable and accurate: Stakeholders highlighted concerns about AI providing incorrect information, leading to frustration and distrust in transport applications and other mapping software. They will only accept AI within their industry if it consistently proves to be a reliable source of information and can perform tasks accurately.
  • Upskilling and reskilling the workforce should be a key priority: The introduction of AI will require employees to adapt and acquire new skills to work effectively with these technologies. Stakeholders called for investment in training and development programs to ensure that employees are equipped with skills to thrive in a changing work environment, or are reskilled to move into other roles or industries.
  • The potential costs should be evaluated to ensure that using AI is a worthwhile endeavour: Stakeholders discussed how most developments within this scenario would be very costly, and questioned whether the investment required would be worthwhile for the transport industry, considering the loss of the human element.

9.7. AI in travel / transport – scenario

The visual scenario and table below summarise how the public and stakeholders expect AI in travel and transport to develop as well as the skills necessary to support this. Within a 2-year period, participants did not expect to see many drastic changes to the travel and transport industry. They recognised that AI-powered apps would become more advanced in their capabilities and that airport security processes would become more streamlined. Stakeholders reflected that these changes were likely but mourned the potential loss of the human element of travel and transport that could occur if these developments took place.

Across the next five years, some participants discussed the possibility of driverless buses, trams and tubes coming into existence, with the need for reskilling to reduce job losses here. In terms of skills, they felt that the general public would not need to develop any new skills, and that instead it would be the developers that would advance in their capabilities. Within this timeline, stakeholders disagreed strongly with the public’s expectations. They did not foresee that driverless buses, trams, and tubes would be prevalent by this point, reflecting that the technology is not yet advanced enough and that public trust in these new inventions would still need to be built. Rushing Automated Vehicles (AVs) to market was a significant concern, with stakeholders feeling that the roads would only be safe if all cars were AVs.

By 10 years’ time, participants envisioned that AI could revolutionise the holiday market by introducing the possibility of virtual travel, allowing for cheaper experiences. However, they were clear about the need to ensure that AI would be introduced to complement the transport industry, rather than replace existing jobs and systems. Stakeholders agreed with the public that changes should not put jobs and existing systems at risk, nor attempt to overhaul the system entirely. They also considered that there may not be the funding to finance these expected technological developments, even if the technology could be developed within 10 years.

Figure 9.1: Visual scenario depicting AI in travel and transport

Table 9.1: Summary of key themes for AI in travel / transport

Summary of key themes
Capability for skills development AI as a useful tool for workers: AI could enhance skills for transport operators, enabling them to perform tasks more reliably with AI assistance.

Potential for skills decline: Concerns exist about declining skills among workers due to over-reliance on AI, particularly mechanical skills and problem-solving.
Skills Overview Current use of AI is limited: Stakeholders did not recognise many skills specifically related to AI involved in their industry currently.

Reskilling employees: Stakeholders highlighted the need for upskilling and reskilling programs for employees to adapt to AI integration and potentially new roles.
Opportunity Improvements to safety: AI can improve transport safety by reducing human error and providing operators with more information.

Potential for an elevated travel experience: Stakeholders recognised the potential value provided by AI-powered apps in terms of enhancing travel planning and potentially offering virtual travel experiences.

Increasing accessibility and speed: These developments may enable those with accessibility requirements to get around more easily and independently. AI can streamline processes like airport security and potentially automate certain transport operations.
Challenge Job displacement concerns: Stakeholders worried that AI would replace several roles in this industry, particularly including customer service roles and transport operators.

Transformed customer experience: Risk of decreased human interaction in travel and transport, impacting customer experience and social connections.

Industry acceptance: Preventing strikes and unrest among those who work in this space through meaningful engagement and retraining opportunities.

Potential for over-reliance on AI: Over-reliance on AI could lead to skill degradation and difficulties in handling system failures, if everything is automated.
Feasibility Public overly optimistic: Stakeholders view public expectations as optimistic, citing high costs and potential public resistance to some AI applications. Driverless vehicles like buses and taxis are considered unlikely in the short term due to cost and feasibility concerns.

Unequal development across sectors: Implementation of AI in air travel and driverless cars is expected to take longer than other transport sectors.
Policy Impact Potential job losses in these industries: Employees affected by AI integration will need upskilling or reskilling.

Building public trust: Ensuring AI reliability and accuracy was recognised to be crucial for building public trust and acceptance of AI in transport.

Ensuring the human influence is not removed: Balancing AI implementation with maintaining the offer of human interaction in transport services was a key policy consideration raised by stakeholders.

10. AI in healthcare

10.1. Overview

 Key findings

  • Stakeholders expect AI to have the potential to revolutionise healthcare by streamlining administrative tasks, improving diagnostic accuracy and speed, personalising treatment plans, and enabling remote patient monitoring. Significant concerns exist around data security and patient privacy, potential job displacement (especially for administrative staff) and the risk of dehumanising healthcare by reducing human interaction. Stakeholders did not believe that healthcare professionals currently have the skills necessary, such as basic digital literacy and critical thinking, to use AI in healthcare.
  • Within two years, AI is expected to be implemented primarily for streamlining administrative tasks, such as data entry and appointment scheduling, and potentially assisting with preliminary patient assessments. However, widespread implementation within two years is viewed with scepticism due to system complexities and funding constraints.
  • In five years, stakeholders expect AI to become more integrated in healthcare, focusing on diagnostics, personalised medicine, and remote patient monitoring and predicting faster, more accurate diagnoses and improved treatment outcomes.
  • Over 10 years, a significantly transformed healthcare landscape is anticipated, with AI playing a central role in disease prediction, prevention, and management through sophisticated smart wearables and remote monitoring tools.
  • Policy implications include the need for robust ethical guidelines, regulations, training programs, and a phased implementation approach to ensure responsible and equitable AI use.

10.2. Current perspectives on AI in healthcare

Some participants were very positive about the role of AI in healthcare and saw it as a necessary transition for the industry. They believed that AI would be able to revolutionise current systems, which would help to free up their time to allow them to provide better care for patients. Similarly to other topic areas, participants felt that AI could have a significant impact on healthcare by reducing staff time taken to carry out tasks.

“There’s a lot of stuff that I do that I don’t want to do that takes up a lot of time, so we can get AI to do that. It’s going to be great.” – Employee, Healthcare worker

Healthcare professionals felt that the skills needed to use AI in healthcare would be different to those they already have. As in other industries, they argued that older generations of professionals would require more guidance in using AI in their roles than younger generations, who would likely have had AI incorporated into their training qualifications.

“The doctors that work with it have to have different set of skills to the ones I have, and I have to learn” – Employer, Practice manager

However, alongside this optimism, there was a significant degree of caution and concern. Data security and patient privacy were paramount concerns, with stakeholders questioning the safety and ethical implications of entrusting sensitive medical information to AI systems. There was also anxiety about the potential for job displacement, particularly among administrative and support staff, as AI is expected to take over routine tasks. The fear of dehumanising healthcare and losing the crucial element of human interaction and empathy was also a recurring theme; this was also raised by the public in WPA, who found that AI cannot offer the sense of connection found between people. This coincided with significant concern amongst stakeholders that those who are not technologically competent might struggle to access healthcare, should AI be incorporated into booking systems and other aspects of healthcare.

“We work with a lot of residents who are digitally excluded, so they don’t even have the competence or the skills to interact with a computer or a laptop or whatever we’re talking about.” – Employer, Safeguarding manager

Many stakeholders emphasised their desire for a balanced approach, where AI is used to augment and support healthcare professionals, not replace them entirely. They called for robust ethical guidelines, regulations, and training programs to ensure responsible and equitable use of AI in healthcare. As such, the emphasis of this workshop was on leveraging AI’s strengths while preserving the human touch and ensuring that patient care remains at the forefront of all technological advancements.

10.3. Future expectations - 2 years

In the next two years, healthcare stakeholders anticipate that AI could be used primarily to assist with streamlining administrative tasks and improving efficiency. This includes automating data entry, optimising appointment scheduling, and potentially even assisting with preliminary patient assessments through chatbots or symptom checkers. The focus was on freeing up healthcare professionals’ time for more complex and patient-centred tasks, and it was felt that this shift could occur over the next two years.

Yet, there was scepticism about whether this process would actually be feasible within a 2-year timeframe due to the convoluted nature of administration within hospitals and how this differs across trusts and regions. Some stakeholders believed that these changes were already happening, whereas others could not foresee it in the near future, evidencing the existing regional divide in AI implementation in healthcare.

“I cannot imagine within 2 years that we could have streamlined our records because GPs use different records to what the hospital used, to what community use.” – Employee, Perinatal mental health practitioner

Stakeholders did not expect these changes to require many new skills on the part of healthcare professionals, but instead a basic level of digital proficiency. They discussed how even this skillset is not currently held by some healthcare professionals, so addressing this initial skills gap would be the first step to introducing AI into healthcare.

10.4. Future expectations - 5 years

Looking ahead five years, stakeholders envision the possibility of a more integrated role for AI in healthcare, particularly in diagnostics, personalised medicine, and remote patient monitoring. AI-powered tools are expected to assist in analysing medical images, identifying potential diagnoses, and tailoring treatment plans based on individual patient data. Their hope was that AI could lead to faster and more accurate diagnoses, improved treatment outcomes, and more efficient use of healthcare resources. Further to the initial level of skills discussed as essential in the 2-year timeline, they envisioned the need for staff to be able to give AI the right input to ensure they get an output they are happy with. This could involve staff developing prompt engineering skills, which were also raised by other industries.

Ethical considerations surrounding data privacy, algorithmic bias, and the potential for job displacement were significant concerns across this timeline. Stakeholders were more positive about the 2-year changes and less so about those which may occur in five years’ time. As they looked further ahead, they felt there was more chance of complexities and challenges arising.

“The 2 years are all positive and great… and then we start going down the line… This is where things might go wrong, and this is where we might have to ask questions.” – Employer, Practice manager

Generally, stakeholders valued the benefits of AI and felt that it would be within the 5-year timeline that they could start to reap the rewards of AI being implemented into the healthcare system. They discussed the necessity of maintaining a new skillset designed for using AI in day-to-day working life. As they expected AI to develop and adapt across the next five years, they also thought that the skills they would need to use it would change, although they struggled to identify exactly what these skills may be. Expecting that AI will continue to become more useful, they saw it as an opportunity which could be beneficial within this 5-year timeline.

“It will continue to evolve, will continue to be embedded into everyone’s industry, continue to become smarter, more responsive, and spread it even further as well. So, I think that that skill set is here to stay, and I do think we need to move with the times to make sure that we can utilise it in our day to day working and maximise the opportunity as well.” – Employer, Manufacture / sales of pharmaceuticals and biotechnology

Alongside this consideration, they felt that the level of bureaucracy in this industry would slow down progress in incorporating AI. They considered that the amount of ‘red tape’ would mean that changes that could be envisioned in two years, such as using AI to streamline processes, would take 5 years or longer.

10.5. Future expectations - 10 years

Across a longer timeframe, stakeholders anticipate a healthcare landscape significantly transformed by AI. AI-powered systems were expected to play a more central role in disease prediction, prevention, and management, with smart wearables and remote monitoring tools becoming increasingly sophisticated. The focus shifts towards a more proactive and preventative approach to healthcare, with AI assisting in identifying individuals at risk of certain conditions and providing personalised recommendations for lifestyle changes and early interventions. AI was also perceived as a way for NHS trusts to save money over this longer timeframe, with stakeholders expecting that it could save significant amounts of staff time currently spent on administrative and routine tasks.

Since stakeholders maintained that human oversight and connection should remain at the centre of healthcare provision, key skills they expected to be important were critical thinking and adaptability. Staff would be required to navigate seamlessly between working with AI systems and communicating with other staff and patients, and so would need to recognise the appropriate uses of AI and incorporate these into their roles. As this is more of a behavioural change, stakeholders saw how it could take a significant amount of time to materialise.

“A lot of the women I work with are older women and I do see how it would be quite difficult for them to adapt to this new way of working.” – Employee, Occupational therapist

However, concerns persist about the potential for over-reliance on AI, the dehumanisation of healthcare, and the need for robust ethical guidelines and regulations to ensure responsible and equitable use of AI in healthcare. Additionally, stakeholders felt that even a 10-year timeline might be too optimistic for developments to be made, due to the need to secure the level of funding necessary.

“I think we’re looking at a generation or a lifetime really, to get this money” – Employee, Perinatal mental health practitioner

As such, there remains scepticism amongst stakeholders about how likely it is that transformational change will occur in the healthcare industry within 10 years. They saw the lack of resources, particularly funding for implementing AI and staff time for training, as the biggest blockers to progress.

10.6. Feasibility of the public expectations from WPA

Stakeholders were unsure that the changes the public foresaw would happen as quickly as the public expected, because of external constraints. For instance, they felt that it would be impossible to have records streamlined in the NHS within two years. Instead, they felt that this process would take at least 10 years due to the complexity of the task and constraints on funding and staff time to implement new procedures. As such, the public’s optimism about the use of AI in healthcare was not fully shared by stakeholders within this industry.

“These time limits and the milestones that are set there are unrealistic because you actually don’t know what’s going to happen. And one change might be so significant that will move everything around.” – Employer, Safeguarding manager

Conversely, stakeholders reflected that some elements of the public scenario are already happening or in the process of being introduced, especially those in the 2-year scenario. While restructuring processes using AI might be a large undertaking requiring longer than two years, simpler systemic updates were considered more realistic. Stakeholders recognised that AI could help with streamlining records and ensuring patient notes and appointments are effectively recorded within this timescale, provided that funding is made available.

“I think 2 years is definitely a realistic timeline, but I think it just all comes down to funding as to whether that would be a realistic timeframe, because my trust doesn’t even want to fund Power BI licences.” – Employee, Analyst

Across the 5-year timescale, stakeholders agreed with the public that this is when issues arising from AI use in healthcare would start to become visible. Once AI has been rolled out in some respect in healthcare, such as for streamlining records, stakeholders believed that its potential impacts would be noticeable within five years. They reflected that this would be a big concern, as these kinds of risks may not be visible until AI systems are already set up, by which point the problem has already emerged.

“So, you’re talking about wider inequality, ethical risks of having more data out there. These are the things that you’re not going to be able to know until the system’s been in place for some time” – Employer, Practice manager

Within the ten-year timeframe, stakeholders took issue with public expectations over shifting perceptions of healthcare professionals. Whilst they recognised that changing people’s minds about AI would be a challenging undertaking, they expected that this long timeframe might risk the capability of healthcare institutions to effectively implement AI. As such, stakeholders raised this potential timeline as a key concern for healthcare going forward.

Perspectives on the public scenario overall were varied, with some aspects seen as likely to happen but over a longer time period, and others recognised to be already in progress.

10.7. Conditions for acceptability

Several key aspects were considered necessary for stakeholders to accept the role of AI in healthcare, aligning with other industries’ priorities around maintaining human impact and control, and preventing inequality from worsening:

  • Regardless of the level of AI usage, always prioritise human interaction and empathy: AI should primarily augment healthcare tasks, not replace human-centred care. The consensus is that emotional support, rapport-building, and nuanced understanding of patient needs remain crucial and cannot be replicated by AI.
  • Ensure that AI is implemented equitably, ensuring accessibility for all: Widespread AI implementation hinges on equitable access to technology and digital literacy. Concerns were raised about vulnerable populations, including the elderly and those in deprived communities, who may lack the necessary skills or resources to engage with AI-driven healthcare.
  • Patient data should be kept secure and private: Given the sensitive nature of healthcare data, participants emphasised the need for stringent data protection measures. Concerns about potential breaches, hacking, and misuse of personal information by private companies require robust safeguards and transparent data governance.
  • Staff to be provided with continuous and comprehensive training and upskilling: Healthcare professionals require extensive training to effectively utilise AI tools and adapt to evolving roles. This includes not only technical proficiency but also ethical considerations, data interpretation, and understanding AI limitations.
  • AI should be implemented in phases and evaluated continuously: Rushing AI implementation could lead to unforeseen consequences and exacerbate existing inequalities. A phased approach, starting with less critical tasks and gradually expanding, allows for continuous evaluation, feedback, and necessary adjustments as potential impacts can be recognised.

10.8. AI in healthcare – scenario

The visual scenario and table below summarise how the public and stakeholders expect AI in healthcare to develop as well as the skills necessary to support this. Within the next two years, the public anticipated AI integration primarily via preventative care apps and improved diagnostics. Healthcare professionals in the stakeholder workshop agreed with the public that they will need training to work effectively alongside AI, while patients would require digital literacy skills to use AI-driven apps and devices. They also believed that AI could be implemented to address backlogs and improve record keeping within this time frame.

Over the 5-year horizon, the public expect AI to enhance efficiency in healthcare administration, advance drug manufacturing, and improve patient outcomes. However, concerns emerge about the potential deskilling of healthcare professionals and job losses due to AI automation. Balancing the pursuit of cost savings and efficiency with the need to maintain human expertise and adaptability will be crucial. Stakeholders recognised that there could be a lot of positive progress made by AI in healthcare across this timeline but warned against generalisation of patient types and the risk of overreliance on AI.

Looking ahead to the ten-year horizon, the public foresees significant integration of AI into advanced areas of healthcare – including robotics and gene editing. However, this raises concerns about reduced human control, empathy, the ethical impact of eugenics, and the exacerbation of inequalities based on access to advanced technology. Stakeholders considered that elements of the 2-year scenario, such as streamlining records, would not take place within two years as the public identified, and instead would be a 10-year task. They did recognise, in line with the public’s view, that advancements within this timescale could be revolutionary for the healthcare industry, particularly with the impact of the diagnostic power of AI.

Figure 10.1: Visual scenario depicting AI in healthcare

Table 10.1: Summary of key themes for AI in healthcare

Summary of key themes
Capability for skills development Upskilling healthcare workers: Healthcare professionals will need new skills to effectively use AI tools and interpret their outputs, with older generations within the workforce potentially requiring more support.

Allowing time for complex tasks: AI could free up healthcare professionals’ time, allowing them to focus on more complex tasks and develop their skills here.

Empathy and human connection: Preserving compassion and personal interaction in patient care by providing healthcare professionals with more time to spend with patients.

Ethical considerations: Addressing privacy, equity, and social implications of AI in healthcare.
Skills Overview Current skills not sufficient: Stakeholders did not perceive their industry to be readily able to incorporate AI due to the current skills levels of staff.

Healthcare professional training pipeline: Healthcare professionals need retraining to work with AI more closely. Focus should be put on continuous training on working with AI systems. Additional emphasis might need to be placed on empathy and connection.

Digital skills necessary for all: Digital literacy skills will be important for patients to engage with AI-driven healthcare apps and devices, as well as for staff to interact with the systems.
Opportunity Potential to revolutionise healthcare: Stakeholders expected AI to have a large potential in healthcare; streamlining administrative tasks, improving diagnostic accuracy and speed, and personalising treatment.

Increased efficiency: Across multiple strands, including logistics, admin, booking and resource allocation.

Remote patient monitoring: AI can enable remote patient monitoring, leading to proactive and preventative healthcare approaches.
Challenge Maintain human connection and empathy: Risk of dehumanising healthcare by reducing human interaction and empathy was a significant concern for stakeholders.

Job displacement: Stakeholders considered the potential for job displacement, particularly for administrative and support staff.

Data privacy and security risks: Especially concerning patients’ sensitive health information.
Feasibility Public timeline too optimistic: Widespread AI implementation within two years was viewed with scepticism due to system complexities and funding constraints. Securing necessary funding and staff training time were seen as major obstacles to timely AI implementation.

Need to evaluate once AI system is in place: Stakeholders were unsure about the likelihood of some elements of the scenario, and felt that these situations would only become clear once AI was implemented and the results could be seen.
Policy Impact Human control and AI influence: Robust guidelines were regarded to be necessary for maintaining human oversight and control in AI-assisted healthcare, with clear expectations as to where AI is used and where it isn’t to ensure responsible AI use.

Equitable access: Measures to ensure equitable access to AI-powered healthcare solutions across regions and demographics.

A phased implementation approach: AI to be integrated into healthcare gradually, with continuous evaluation necessary to manage risks and maximise benefits.

Adaptation of medical education and training: To prepare healthcare professionals for AI integration. Ensure the preservation of critical human skills and judgment in healthcare delivery.

11. Conclusions and implications for policymakers

Participants co-produced priority policy areas which they felt should receive investment to ensure the public has the AI skills they require in the future. This section synthesises the key findings and outlines their implications for policymakers, offering recommendations for fostering a future-ready workforce equipped to navigate the evolving world of AI.

11.1. Key findings

Current AI skills landscape: A significant gap exists between the perceived need for AI skills and their current presence among the public and stakeholders alike. While technical skills like coding and AI development are recognised as valuable for working with AI, non-technical skills such as critical thinking, adaptability, effective communication, and ethical awareness were discussed as equally crucial. Participants frequently conflated AI skills with broader digital literacy, highlighting a need for clearer definitions and targeted training.

Transformation of skills needs: Stakeholders anticipate a rapid evolution of required AI skills as the technology develops. While some roles face potential elimination due to automation, new roles requiring AI-specific expertise will emerge. The ability to interact with AI effectively (e.g., prompt engineering), critically evaluate AI outputs, and understand the ethical implications of AI will be increasingly important across several sectors. Across different sectors, the importance of retraining workers in those sectors was a key priority that stakeholders raised.

Varied sectoral impacts: The impact of AI and the associated skills needs vary significantly across industries. While some sectors, like professional services, are embracing AI for increased efficiency and productivity, others, like leisure and entertainment, express concerns about job displacement and the potential erosion of human creativity. Policy interventions must therefore be tailored to address the specific challenges and opportunities within each sector.

Public vs. stakeholder perspectives: Stakeholder perspectives often diverged from those of the public, particularly regarding the pace of AI adoption and the feasibility of certain future scenarios. Stakeholders frequently viewed public expectations as overly optimistic, highlighting the need for realistic timelines for developing knowledge and experience, and for clear communication about the potential and limitations of AI.

Conditions for acceptability: Stakeholders emphasised several conditions upon which they would accept the integration of AI into their industry. These included robust regulation and ethical frameworks, equitable access to AI technology and training, a focus on human-AI collaboration rather than replacement, and ongoing evaluation and adaptation of AI systems.

11.2. Implications for policymakers

Across the seven workshops, several policy implications arose related to ensuring that both the public and workers have the appropriate skills necessary to encounter and use AI in life and work. These were largely recognised to be the responsibility of the government to support, through collaboration with industry and schools to ensure that these skills are universally developed across the UK population.

Essential AI skills for the public and for workers should be defined and integrated into life and work

Policymakers must take an active role in defining and promoting essential AI skills amongst both the public and stakeholders in these industries. Stakeholders believe that this should begin with developing a clear taxonomy of AI skills that distinguishes between technical proficiencies like coding and the equally crucial non-technical skills important for the general public such as critical thinking, adaptability, and ethical awareness.

A holistic approach to AI skills development is essential, recognising that human capabilities are not simply replaced by AI, but rather augmented by it. This necessitates integrating AI literacy into education at all levels, from primary school through higher education, ensuring equitable access to resources and training across all demographics.

Government could support workforce adaptation and reskilling through collaboration with schools and industry

Stakeholders felt that supporting workforce adaptation and reskilling is paramount for the future of AI in these industries. As AI transforms the job market, some roles, such as customer service and administrative positions, will inevitably be displaced, particularly in the travel and transport industries. This means that investment may be required in targeted reskilling and upskilling programs to equip workers for new opportunities.

A culture of lifelong learning and continuous professional development must be fostered, enabling individuals to adapt to the rapid pace of AI development and the evolving demands of an AI-driven economy. This could require strong collaboration between industry, academia, and government to develop training programs that align with real-world needs and leverage the expertise of all stakeholders. Stakeholders recognise that this would be a difficult endeavour, requiring large amounts of resources, but believe it to be a crucial step towards ensuring that professionals within these industries are capable of using AI both effectively and responsibly.

Robust regulatory frameworks and ethical guidelines could be essential within the next few years

Within several of the workshops, stakeholders referenced the need to establish robust regulatory frameworks and ethical guidelines on using AI both generally, and within their industry. Policymakers could look to address the ethical dilemmas posed by AI, developing clear guidelines for industry and the public on its development and deployment. These guidelines could address critical issues such as data privacy, algorithmic bias, and the responsible use of AI.

Regulatory frameworks could promote innovation while simultaneously mitigating risks, ensuring that AI systems are developed and used safely, ethically, and transparently. International collaboration on AI governance is also essential, working with global partners to establish universal standards and best practices for AI regulation.

The government could seek to ensure equitable access to AI technology and training

Particularly within education, stakeholders discussed the importance of prioritising equitable access to AI technology and training. Bridging the existing digital divide was a critical priority for them, requiring investment in infrastructure and programs that provide equitable access to AI technology and training for all individuals, regardless of their background or location.

Targeted support for vulnerable groups, such as older workers and individuals in low-income communities, was perceived to be necessary to address the specific challenges they face in adapting to the AI-driven economy. Promoting diversity and inclusion in the AI workforce was also seen as vital, encouraging participation from underrepresented groups to ensure that AI systems are developed and used fairly and equitably.

AI systems should be systematically evaluated and adapted based on their impact and success

Stakeholders called for government and industry to work in partnership to ensure that the incorporation of AI into their sectors would be a productive, iterative process. The range of issues they raised left them concerned that AI could have a negative impact on their industries. As such, they saw the need to continuously evaluate, and where necessary update, the role of AI in their sector. This would limit the potential harmful impacts of AI and keep the focus on ensuring that AI provides a helpful service to workers and the public. Within this, they called for a regular review of training provisions and upskilling initiatives, to ensure these remain in line with the level of ability currently required, especially since they found skills needs in 10 years' time difficult to predict.

Through all potential developments, the human-centric component of these industries remains the most important

Stakeholders were clear about the value of fostering human-AI collaboration as key to a successful AI future. Policymakers should champion a human-centred approach to AI adoption, emphasising the importance of human oversight and control in all AI systems.

As well as making sure that human control is in place, stakeholders felt that the focus should be on developing AI tools that augment human capabilities rather than replace them entirely. In particular, they valued maintaining the human element of these industries, recognising that AI cannot replace social interactions. In fields such as medicine, education, and travel, they advocated the value of interacting with a human over an AI system, and found this far more important than any potential increase in efficiency from using AI.

Supporting research on human-AI interaction is crucial to understanding the optimal ways for humans and AI to work together effectively and safely. Encouraging the development of explainable AI was also recognised as a way to build trust and transparency, enabling users to understand how AI systems arrive at their decisions.

By addressing these complex and interconnected issues, policymakers can help shape a future where AI benefits all members of society.