Research and analysis

AI Skills for Life and Work: Public dialogue

Published 28 January 2026

This report was authored by LaShanda Seaman, Jacob Bellamy, and Eva Radukic at Ipsos.

This research was supported by the Department for Science, Innovation and Technology (DSIT) and the R&D Science and Analysis Programme at the Department for Culture, Media and Sport (DCMS). It was developed and produced according to the research team’s hypotheses and methods between November 2023 and March 2025. Any primary research, subsequent findings or recommendations do not represent UK Government views or policy.

1. Project Overview

1.1. Summary of Work Package

This work package was a public dialogue that aimed to explore public attitudes and perspectives on potential AI skills for life and work. Participants were engaged for approximately 13.5 hours to examine key issues and develop suggested policy levers that could be used as a step change in developing AI skills.

1.2. Objectives/Purpose

Participants deliberated on the following key issues and areas:

  • How broadly or narrowly AI skills should be conceived, and which skills the public felt were likely to be required over a 2, 5 and 10-year timeframe.

  • Their attitudes to different AI scenarios in life.

  • The values and principles that informed these views.

  • The challenges and opportunities the public believed the different scenarios would present for developing AI skills for life and work over a 2, 5 and 10-year period (barriers and opportunities to develop skills).

  • Any suggestions participants had for policymaking and the core levers required to accelerate and develop AI skills for work and life.

1.3. Key research questions and findings

Table 1.1: Research questions and contributions from this work package


Research question

Contribution of this work package

What AI-relevant skills are needed for life and work?

This work package focused on AI-relevant skills in life, but participants spontaneously explored work-related skills where relevant. Key findings included:

Discernment: Being able to recognise where AI was being used and how.

Critical and evaluative thinking: AI risks reducing the incentive to think critically by displacing the process of creating. Participants wanted AI to improve or supplement that process, not replace it entirely.

Social and interactive skills: Participants worried that AI would displace human interaction, particularly in schools and healthcare. They linked this to a broader risk of ‘digitisation’ that was putting pressure on social skills, particularly for children.

How may these transform as the technology develops?

When considering the changing nature of skills as AI developed, participants generally expected:

Natural skills development: Participants felt the barrier to entry for using AI-powered products was low and would get lower as technology improved. This was particularly true for areas where AI changed how services work in the back end but did not affect the front end (such as apps, personal devices, or websites).


Changing nature of skills will affect work more than life: Participants generally expected skills to change more in work than in life, as AI puts pressure on a variety of job markets.


However, some skills that participants felt would become more prevalent as AI changed included:

Discernment: Determining what was AI and what was not was expected to become harder as AI improved. However, participants generally felt that flagging where AI had been used should be the responsibility of producers, rather than of consumers.

Creative skills: Participants identified creative thinking not just as a skill for work, but for life. Participants expected AI to be particularly disruptive in the creative industry, dulling the incentive to learn these skills.

To what extent does the UK have or lack these needed skills in the labour force?

As in other work packages, participants felt that skills related to using AI in the workplace were limited.

However, unlike in the general public survey (WPB), participants were more confident about using AI in their daily lives. They tended to define the “skills” needed to use AI in terms of getting their intended output, rather than the broader set of skills covered in other work packages (such as risks, ethics and privacy). As such, most felt it was not difficult to use AI tools.

To what extent is the UK supporting people to develop future relevant AI skills?

N/A

Based on potential future scenarios, what should government, employers and private education and skills providers focus on to address any gaps in provision?

Participants raised several key policy considerations, including:

Investing in skills across all ages: Participants felt strongly that primary and secondary education curricula should be refocused around future work. However, they also flagged that re-skilling needs would be more significant in older populations, requiring investment in ad-hoc education.

Equitable access: As AI-powered learning and development tools grow, ensuring equal access to these tools was felt to be essential, especially in primary and secondary schools.

Regulating AI use: The risks around misinformation are high with AI-generated content. Participants generally felt that the onus should be on businesses or individuals using AI to flag where it had been used, rather than relying on the public’s skills in discernment.

Human control and AI influence: Participants suggested the introduction of clear guidelines for maintaining human oversight and control where AI was being integrated. This would mitigate fears around a transition and ensure key skills (especially in healthcare and education) were not lost.

What can the UK learn from international counterparts with regards to AI skills?

N/A

1.4. Relationship to other Work Packages

This work package was informed by content from all the previous work packages, particularly the summary of this content presented in the drivers analysis (Work Package 5) workshop in May 2024 and the WP5 report that was produced as a result. Some key findings relating to other work packages include:

  • Participants felt it was currently easy to use AI-powered tools, and that this would get easier as technology improved. This contrasts with WPB, where only 21% of the public said they were confident in their ability to use AI in daily life. This likely relates to different framings of ‘skills’ – participants tended to think about skills in terms of getting desired outputs from an AI product, which is narrower than the collection of skills asked about in WPB.

  • Findings on the importance of skills associated with risks and threats, particularly discernment and disinformation, reflect findings in the DELPHI study.

  • Ensuring that AI literacy skills are embedded at the primary and secondary level reflects key findings in the rapid evidence review (WP3), which suggested that the focus in the UK to date has largely been on AI literacy and skills at the tertiary education level. Participants also picked up on the UK being further behind in terms of its AI literacy than other countries.

  • Unlike the rapid evidence review, participants were more confident in natural skills development for the use of AI in life. WP3 suggested that digital skills might need to be reviewed every three years, while participants tended to expect AI tools to become more, not less, user friendly.

The researchers presented the draft scenarios developed in the public dialogue to stakeholders in WP6 so that they could be refined, stress tested and finalised. This will enable the work packages to include the perspectives of employees and employers, and will inform the subsequent policy-facing and knowledge transfer work taking place in Work Package 7 to equip policymakers with insight into key skills challenges and barriers.

Figure 1.1 Diagram showing how each work package contributes to the research and the order in which they are carried out

2. Executive Summary

2.1. Background

Ipsos was contracted to conduct research for the Department for Science, Innovation and Technology (DSIT) to explore the requirements for AI skills for life and work across the next 2, 5 and 10 years. This work package (WP) was designed to qualitatively explore, through a public dialogue, the AI skills for life that the public would need across 6 thematic areas: AI in the home and personal devices, AI in leisure and entertainment, AI in travel and transport, AI in education, AI in healthcare, and AI in work and career. 45 members of the general public were recruited to take part in 13.5 hours of deliberative engagement, which included plenary presentations and breakout group discussions.

2.2. Research Aims

  • Complement the scenario development (WP6) by focusing on AI skills for life, providing qualitative depth on this area to sit alongside the findings we have gathered in the other WPs.

  • Inform subsequent policy-facing and knowledge transfer outputs in the summary report (WP7).

  • Build on research findings collected in the other WPs which have explored skills development with different stakeholder groups.

2.3. Research Objectives

  • Explore and understand the public’s attitudes to 6 different AI scenarios.

  • Examine the values and principles that inform these views.

  • Understand the public’s perceptions of the challenges and opportunities of the different scenarios for the development of AI skills, particularly over the 2, 5, and 10-year period.

  • Understand public perceptions of the policy levers required to accelerate and develop AI skills for life for the general public.

2.4. Key Findings

Most participants acknowledged that developments in technology (including AI) are now commonplace within their daily lives and are here to stay, meaning they will likely use these tools in the future. Most participants did not expect to need more skills to use AI within the next 10 years; instead, they wanted the understanding and knowledge to control AI so that it is used only when they want it to be. However, they would like guardrails in place to protect the public and industry, as well as to ensure there is adequate trust in AI.

Throughout the discussions, participants expressed uncertainty about whether the products and services they used involved AI. This was rooted in uncertainty about the bounds of AI-powered technology and a lack of confidence in identifying it. While some applications were perceived to explicitly highlight their use of AI, such as Spotify’s AI DJ, others, such as auto-tuned songs, blurred the line between AI and ‘other’ forms of technology. This ambiguity made it challenging for participants to clearly identify AI’s presence in their daily lives. As AI becomes more seamlessly integrated into products and services, the distinction between AI-driven features and general technological automation may become increasingly difficult for the public to discern.

Table 2.1 Key points of optimism and pessimism for each topic area


AI in home / personal devices

Points of optimism:

Saving people time: Participants highlighted how AI already does, and could continue to, save people time by carrying out day-to-day tasks for them, particularly tasks that are labour intensive.

Devices to become more user friendly: As AI devices develop further over the next 10 years, participants expected that they would become easier to use, and thus not require any upskilling.

AI assistants to become more human-like: Participants expected that as AI technology develops, devices will be able to replicate human conversation seamlessly, which could be a major benefit in helping with loneliness and in completing more tasks that humans traditionally do (e.g. ironing).

Points of pessimism:

Data privacy of customers: As the public start to use these devices more, participants were concerned that data privacy risks would become more likely.

Potential for overreliance on AI tech: Participants predicted that the public might become too reliant on AI technology because it is embedded in everyday tools, yet be unable to fix these devices when they stop working due to a lack of understanding of how they work.

Accessibility concerns: Concerns were raised around whether the public would be able to afford these devices, and whether AI could further divide society between those who can afford equipment and technology that includes AI and those who cannot.

Quotes:

“I think it’s [AI] just going to be a progressive thing, really, and as we get older, you know? As it advances and as it improves itself, we’ll just go along and learn with it.” - Female, 65+, North West

“Will be the company to be more transparent with how your data is being used. Like, is it going to be sold? Is it going to be private? Can you delete it after you finish the service with them?” – Male, 25-34, Greater London



AI in leisure / entertainment

Points of optimism:

Lowering barriers to entry: AI tools are expected to broaden the proportion of the public able to create content, “democratising” creativity.

Cost savings for businesses: AI integration into content creation is expected to enable significant cost savings, which could benefit businesses of all sizes.

Personalised content: AI is seen to have the potential to generate highly tailored media that suits individual preferences, which could benefit users and the general public.

Points of pessimism:

Devaluation of creative skills: Overreliance on AI tools may erode professional creative abilities such as imaginative thinking and idea generation. Use of AI in content creation could also reduce incentives to develop these skills among current professionals or young people.

Impact on creative careers: Participants raised concerns about current and future creative roles disappearing, based on the expectation that AI will continue to generate content.

Reduced content quality and diversity: Participants expected AI-generated media to be of lower quality and less diverse than human-created content or currently available content.

Difficulty regulating AI use: Challenges in updating intellectual property laws and protecting creative industries from AI disruption were priorities highlighted by participants.

Exacerbating inequalities: Potential divergence between cheaper AI-made media and more expensive human-created content, limiting choices for less affluent consumers.

Quotes:

“Once people become more educated about AI-generated content, they will turn against it, preferring human creativity instead.” - Female, 35-49, South East

“I think more people will make their living on platforms like YouTube going forward. So, more people will be like sole traders working for their own creating content. So, any AI tools that will help people get, kind of, started in an easiest possible way.” - Female, 35-49, Greater London

AI in work / career

Points of optimism:

Making the hiring process easier and more efficient: Use of AI in hiring was seen to drastically reduce the amount of time employers would typically spend sifting through applications, enabling them to make decisions faster.

AI as an application-writing tool: For candidates who struggle to write applications, AI could be a useful first step, but there was pessimism about overreliance among those who were more confident with AI or knew how to navigate AI checkers.

Potential to reduce bias in hiring: Despite some consideration that AI could worsen biases in the hiring process, some participants expected that AI could take a more neutral view of applicants. They felt it would be able to avoid the subconscious biases that a human hiring manager would have.

Points of pessimism:

Potential for hiring unsuitable candidates: Due to the use of AI selection software, participants foresaw that companies may hire people who are not well equipped for the role, due to misrepresentation or less focus on soft skills.

Exacerbating biases: Concerns about the potential for AI to heighten biases existing in the hiring process. Participants thought that AI would feed off current biases and replicate them as it selected candidates, which would make the process unfair for candidates.

Difficulty in regulating the hiring process: Participants were not confident that any regulations in this space would reassure the public about hiring processes, due to the challenges of quality assuring each organisation.

Quotes:

“The amount of people submitting applications with AI will grow, but in terms of their success rate I think it will actually start declining, because more employers will be using that software to detect AI.” – Male, 25-34, Yorkshire and the Humber

“Hopefully, an AI system wouldn’t have that bias [rejecting obviously foreign-sounding names], and everybody would be judged equally on the strength of their CV.” – Male, 65+, North East and Cumbria

AI in education

Points of optimism:

Personalised learning experiences: AI is expected to offer more tailored learning, adapting content and pacing to individual needs and preferences.

Enhanced self-directed learning: AI can provide immediate feedback and support for learners pursuing independent study, making self-directed learning more effective.

Efficiency for educators: AI can assist teachers with tasks like lesson planning, grading, and providing targeted student support, allowing them to focus on high-value activities.

Inclusion and accessibility: AI-powered tools can better accommodate diverse learning styles and needs, including those of students with disabilities.

Lifelong learning opportunities: AI can facilitate continuous learning and upskilling as workforce needs evolve, making it easier for people to adapt to changing job markets.

Points of pessimism:

Over-digitisation of learning: AI learning is likely to increase screen time and may reduce important social skills development if used as a substitute for human interaction, particularly in early education.

Erosion of critical thinking and academic integrity: AI potentially enables cheating and may reduce incentives for students to engage deeply with subjects, eroding critical thinking skills if used as a crutch.

Exacerbating educational inequalities: There are concerns that advanced AI tools might be limited to premium or paid options, increasing disparities in access to high-quality educational resources. Participants highlighted the importance of the government providing equipment (e.g. laptops).

Threat to teaching roles: As AI becomes more capable, there may be incentives to replace or reduce teaching positions, which participants strongly opposed in favour of using AI to assist teachers.

Disruption to job markets and curricula: AI may significantly shift workforce needs, requiring substantial upskilling, curriculum changes, and lifelong learning support to help people adapt.

Quotes:

“The challenges at the moment in tertiary education are numerous, but I think you want to boil it down to academic integrity. What constitutes a helping hand in terms of producing academic work, and what constitutes over-reliance?” - Male, 50-64, South East

“Even if they had their own laptop, their own iPad that had an individualised programme, the kids I’ve got that I’m thinking of, they’ll need me to help them to access it or me to sit there and keep them in the classroom. AI is not going to help with those. It’s all about personal relationships, teaching. AI hasn’t got that. That’s the very first cornerstone of teaching.” - Female, 50-64, West Midlands

AI in travel / transport

Points of optimism:

Improvements for local transport systems: Potential to increase efficiencies and reduce costs in systems by automating tasks.

Increasing accessibility: For those who are disabled or live in remote areas with limited public transport, AI was expected to have a significant positive impact, enabling them to travel independently.

Optimisation of transport apps: AI-powered transport apps were expected to be developed further to offer an efficient service for customers. As such, usage of these apps is likely to increase.

Self-driving cars could make the roads safer: If properly tested, participants foresaw that self-driving cars would be a safer option than human drivers. They felt this would only be within a longer timeframe, perhaps further away than 10 years.

Points of pessimism:

Safety concerns paramount: Participants worried about the safety of self-driving vehicles, valuing human control in potentially life-threatening situations over AI being in charge.

Significant job losses expected: Concern was felt for those in customer service and vehicle operation-based roles, who might be replaced by AI systems in the future. There was an expectation that these employees would be retrained or upskilled to ensure they are able to adapt to or find new roles.

Potential loss of essential skills: Due to the introduction of self-driving cars, participants worried that essential life skills like driving and hazard perception would be at risk.

Quotes:

“I’m hoping that we’re going to have better or more efficient security checks.” – Female, 25-34, West Midlands

“I’m totally against the whole self-driving thing because I just think driving’s a life skill… I think taking away all those aspects changes humans in a way… I don’t think everything needs to be transitioned; I think it needs to be carefully selected what’s necessary.” – Female, 35-49, Yorkshire and the Humber



AI in healthcare

Points of optimism:

Improved end-to-end service for patients: Potential for improvements in overall speed and developments in research and diagnosis, which would benefit the general public.

Increased efficiency in healthcare administration: AI might be able to streamline booking systems, treatment pathways, and resource allocation, allowing healthcare professionals to focus more on patient care.

Personalised preventative care and healthcare management: AI-powered apps and devices could enable tailored health recommendations and interventions, promoting proactive health management.

Accelerated drug discovery and development: AI is expected to speed up the identification of promising drugs, particularly for complex diseases like cancer and dementia.

Enhanced accessibility: Improved healthcare access for patients with disabilities or mobility issues through the use of AI-assisted technologies.

Points of pessimism:

Potential loss of human connection: Over-reliance on AI in healthcare by medical professionals may diminish the essential human touch and emotional support that the public expect, particularly in sensitive situations. Instead, participants believed clear limitations should be in place for when AI would or would not be used.

Digital literacy should not be assumed: Recognition that some members of the public will require additional support to use new tools, or will need to be provided with an additional channel to access services (e.g. telephone).

Deskilling of healthcare professionals: Dependence on AI for tasks like diagnostics is believed to negatively impact the clinical skills and judgement of healthcare workers.

Exacerbation of healthcare inequalities: Perceived unequal access to advanced AI technologies across regions, income levels, and public/private healthcare settings may widen existing disparities in health outcomes.

Data privacy and security risks: The potential use of sensitive health information by AI systems raises concerns about breaches and misuse, particularly if data is collected by external parties.

Quotes:

“I just think those sorts of AI improvements, where it will save time for the NHS and it definitely needs it, is always going to be a good thing.” - Male, 18-24, South West

“Having a meal served by a robot is fine, but it takes away one element of human interaction in a hospital that we need.” - Female, 65+, London

3. Methodology

3.1. Dialogue aims

This public dialogue aimed to bring together a representative group of the general public from a cross-section of the UK to explore public attitudes and perspectives on potential AI skills for life. The aim of this method was to provide the opportunity for participants to deliberate on the different scenarios that were presented to them in order to consider the future development of AI and the skills they may need to live in a society where this technology exists and continues to evolve.

We recruited 45 participants to take part in the research using one of our recruitment partners, Roots. They represented a wide cross-section of ages, ethnicities, genders, socio-economic backgrounds, education and skills levels, and employment statuses. All participants completed a recruitment screener to confirm their eligibility. They were also sent a recruitment pack ahead of the sessions to provide them with the essential information to take part, along with a copy of the key stimulus. The research team ensured accessibility was embedded throughout the process, with the participant pack an example of this. Participants were able to print materials, read along with the moderators or review between sessions, depending on their needs or general interest in the project.

Participants took part in 5 workshops over the course of 4 weeks, totalling 13.5 hours of engagement (Figure 3.1). The majority of the discussion took place in small breakout groups on Zoom, complemented by plenary presentations given by the chair and by experts who provided supplementary information.

3.2. Dialogue approach

The dialogue was structured so that we could explore key issues around AI skills, feeding into the other WPs, and help understand policy levers that policy makers could utilise in this area. Please see an overview below:

Figure 3.1: Diagram outlining the research approach

The opening workshop provided an opportunity for participants to reflect on an overview of the current state of AI, including expert perspectives and their own experiences with AI. As participants all had different starting points with AI, it was important to provide a shared foundation and common understanding, given the complexity of the area.

Workshops 1 – 3 focused on the 6 themes: AI in the home and personal devices, AI in leisure and entertainment, AI in travel and transport, AI in education, AI in healthcare, AI in work and career. For each theme, participants were provided with stimulus to support discussion including an overview of AI in that space as well as a persona to demonstrate different audience groups using the technology. The aim of the discussion was to determine what, if any, skills they believed the general public would need in the next 2, 5 and 10 years as AI develops in each of these areas.

Throughout the research process participants were able to critically question the inclusion of the information in the ‘learning’ stage as well as the inclusion/exclusion of the thematic areas.

To record the discussions, alongside the audio recording we enlisted the support of a Livescribe facilitator to capture the discussion; this is reflected throughout the report in the public scenarios.

During the final session, participants co-produced the policy levers. They were tasked with putting themselves in a budget holder’s shoes and selecting the items they would choose to invest in to help ensure the public had the skills required for the future of AI skills development.

3.3. How to read this report

This report includes feedback from participants on 6 thematic areas. Each chapter presents current perspectives on AI within that theme (demonstrating participants’ starting points), their views on how AI would develop alongside the skills the public would need in the next 2, 5 and 10 years, their conditions for acceptability, and the draft scenario they devised.

Throughout the report we have included wording which is standard in qualitative reporting to express the strength of feeling among participants or the number of people who shared a point of view. The guidance below will help in navigating this report:

  • Sections typically present the most common views first, followed by those expressed by a subset of participants.

  • Language such as ‘a few’ or ‘a limited number’ reflects views that were mentioned infrequently, and ‘many’ or ‘most’ is used when views were expressed more frequently. ‘Some’ is used when a viewpoint was mentioned occasionally.

  • We aim to include language that will demonstrate the strength of feeling.

  • Please note that this report presents the public’s perceptions of AI and AI skills rather than verified facts.

3.4. Interpretation

While participants were guided by the moderation team to explore skills for life, some of the scenarios leaned more towards skills for work and participants shared relevant perspectives.

Additionally, the research team have indicated where they have interpreted areas of importance to participants as the ‘skills’ they may need for the future, as participants often found these difficult to articulate or did not believe them to be necessary.

4. Introduction to Scenarios

This WP was designed to inform the production of scenarios across 6 thematic areas. These scenarios are presented at the end of each chapter in the form of a visual graphic and an accompanying narrative table. An example of this can be seen here: Figures 6.1 and 6.2.

The visual graphic demonstrates the technological advances participants expect there to be in the next 10 years, clearly demonstrating the 2, 5 and 10 year differences as well as the skills that will be needed throughout that time period.

To accompany this is a narrative table which is developed to provide further context to the types of skills, enablers and barriers for development as well as policy implications in each thematic area. This narrative is based on the Go-Science Future Risks of Frontier AI scenarios[footnote 1]. As these projects both explore future scenarios for the use of AI and will be reviewed by similar stakeholder groups, it was important that relevant structures of the reporting could be utilised for comparability of conclusions, credibility of the findings and ease of internal collaboration.

The final output of this WP is a set of draft scenarios which reflect the public’s perspectives on developments and skills related to AI across the next 10 years. The next step is to finalise these scenarios in WP6 based on feedback from stakeholders who work across industries related to the various themes. WP6 will produce finalised visual graphics and accompanying narratives which reflect both public and stakeholder perspectives.

These discussions informed the development of a future skills scenario over the next 10 years, demonstrated visually and with an accompanying narrative highlighting alignment with participants’ values and potential policy implications. Each chapter concludes with this draft scenario (for example, Figures 6.1 and 6.2).

Following the development of the draft scenarios in this WP, we will be conducting 7 stakeholder workshops in September reflecting the industries discussed. These will enable us to understand the perspectives of employers and employees on the skills required in this space. A final report outlining the finalised scenarios will be published following those workshops.

5. Exploring current understanding and expectations of AI / AI products

Most participants were surprised by the daily items that had AI embedded within them and some challenged whether it was true that some examples (e.g. predictive text on their mobile phones) could be classified as AI. Instead, this was attributed to more general technological advances and automation of devices they regularly used.

Irrespective of this, most participants acknowledged using a variety of devices which were AI-enabled to complete routine tasks around their home (e.g. smart assistants) or to complete their journeys (e.g. Google Maps). However, the role of AI within these tools was often overlooked by participants, highlighting the low levels of friction when using AI in everyday tools.

Understanding: Participants highlighted that there is uncertainty over how AI tools are developed, particularly the data that is input into them. As such, it was important to participants to have a better understanding of this data, so that they know how decisions are made and can assess how fair these decisions are and whether they can be trusted. Some participants would even like the ability to interrogate the input data to help validate its accuracy before they use a tool. Additionally, a minority highlighted that this puts more onus on the public to think critically and conduct appropriate research on the data that AI provides them or the tools that they choose to use.

“… And actually, questioning how your model got to where it is today… I think it matters where that data is from. Is it representative of the populous as a whole? The question of who controls the data? It, kind of, has to everyone, all of us. We all have to be involved.” – Male, 25-34, South West

Human Control: While participants recognised that AI could play a role in efficiency, they would like to ensure that it is used alongside humans and with human oversight. The inclusion of a human ‘switch off’, to be used if necessary, was imperative for participants to trust these tools. Some participants mentioned that they would like to make sure that robots do not take control over humankind. Additionally, they wanted to ensure that appropriate individuals and organisations, whose intentions they trust, are at the helm. Alongside this, participants felt it was important for a regulatory body to oversee the development and deployment of AI tools.

“It’s the company that’s writing the software on the AI that should be accountable for, you know, keeping control of it. But they should be under the guidance of a government body, I’d say.” – Male, 35-49, West Midlands

Privacy and security: Privacy and security were both important areas for participants. Some were keen to highlight the difference between the two, noting that while an organisation could hold their information in a private location, it may not be secure. This was a priority for participants given the type and volume of information that AI could be privy to. Participants agreed on red lines, such as bank details, which they would like to be kept private. Additionally, a minority highlighted the need, in an age when more information is available online, for individuals to be able to keep some information private.

“I just think it’s really, really important that people have their privacy when online or, you know, you don’t really want your private life being splattered everywhere.” – Female, 50-64, North East and Cumbria

Choice: Participants recognised that AI is being integrated into more aspects of life, but they would still like the option to use it or not. Alongside this, they wanted clear communication of when AI is being used, so that they could make an informed decision about whether to use a certain product. This is particularly important given that participants were not always clear on which tools and devices use AI.

“I would personally rather choose what I did, I don’t mind being recommended things, but I want to choose it.” - Male, 65+, West Midlands

Inclusion: Inclusivity of AI tools was important for some participants, given their desire for these tools to have a wider societal benefit. A minority were sceptical about whether this would be a priority for large organisations that may be driven by profit, and about who would be responsible for ensuring it.

“It’s a dream, it’d be great, but I don’t know if it’s ever going to really happen, even with AI.” – Male, 18-24, South West

Sustainability: Similar to inclusion, some participants raised questions about whether sustainability would be considered in the development and deployment of AI technology. However, most participants prioritised other values above sustainability.

6. AI in personal devices / the home

6.1. Overview

Key Findings

  • Participants tended to support the use of AI in the home and on their personal devices, highlighting how it could be used to save time and increase independence for those with mobility issues.

  • Concerns were raised around data privacy and data storage. A small number of participants suggested that AI in the home might fuel greater economic divides.

  • Within 2-5 years, participants did not foresee much change in terms of the devices and the types of AI they would use in the home. They expected existing devices to become even more user-friendly, and that the frequency of use would increase.

  • Over 10 years, participants expected smart assistants to be able to replicate human-like attributes and conversational skills. They raised concerns about over-reliance on AI technology for daily tasks.

  • Policy implications include affordability issues, where certain demographics may be priced out of new devices, ethics and morals associated with how society may be impacted, and the need for increased vigilance over data privacy and security.

6.2. Current perspectives on AI in personal devices / the home

Most participants used AI frequently at home and on their personal devices. They were positive about the role of AI in these spaces and their ability to use them, seeing limited need for the public to upskill. The types of AI they used in the home included smart speakers, apps (like Hive to control heating), smart fridges, Ring doorbells, robot vacuum cleaners, and automatic cat litter boxes. Participants found integrated AI (e.g. mobile phones) easy to use.

“I was quite surprised how much the phone featured in it [my list], and a lot of the things were to do with the phone. Mostly, they were very positive things. Things like face recognition, all of these things are good to have, I think.” - Female, 50-64, South East

Some participants felt their home lives had significantly changed, mostly for the better, since the introduction of AI technologies. Positive impacts of AI in the home centred around AI making people’s lives easier and saving time on routine household tasks. Participants valued the convenience of AI tools such as remotely switching on lights, playing music, and controlling heating. There was an expectation that these would become more helpful as AI improved, and be useful for people who were isolated in the home or required additional support (e.g. those with mobility needs).

“I’m disabled, and so I’m hoping it will make my life easier by doing more tasks for me.” - Male, 65+, Yorkshire and the Humber

Transparency was a key value raised by participants in this area, as they felt there is currently not enough awareness of how companies use data collected through AI systems, such as smart speakers. Some participants were hesitant about being listened to, with targeted advertising making them feel uneasy. This also raised the issue of consent, with participants being aware that their mobile phones are listening to them, but not knowing how to turn the microphone off or how to turn off location settings on home devices. Some participants were worried that decisions about what to do with this data would be motivated by money, rather than by the safety and privacy of users. Participants wanted more control over how their data is collected and stored by companies with AI systems.

“Will be the company to be more transparent with how your data is being used. Like, is it going to be sold? Is it going to be private? Can you delete it after you finish the service with them?” – Male, 25-34, Greater London

“When you go online or you’re on your phone, you see you’re being bombarded with ads about the subjects you’ve been talking about. It’s like someone is always listening to you.” – Male, 25-34, Greater London

Some participants highlighted the benefits of AI-powered listening, finding targeted advertisements useful. These participants liked advertising that reflected their interests and needs, such as from companies that produced sustainable or environmentally friendly products and services. However, most participants reported not regularly receiving content that aligned with their values.

“I love sustainable things, so I love when Instagram shows me a company that produces something that looks environmentally friendly. So, I would like to research it and check if I can change my washing liquid or stuff like that.” – Female, 35-49, South East

Some participants were surprised by how much they used AI in their daily lives, prompted by a definition which was used across the project. Services such as spam email filters and predictive text were not naturally associated with AI. As a result, participants suggested that the public should be made more aware of when AI is used in a product, so that they can choose whether or not they wish to use the product or specific AI features. However, this requirement may be difficult to execute in reality due to the presence of several different definitions of AI.

Participants generally felt they already had the skills necessary to use AI in the home and on their devices, and that these skills will naturally develop with practice and experience. For some, this was likened to the general adoption of technology we have witnessed in the new millennium, and how these skills have been embedded into public use.

“I suppose it’s like everything else, once you learn the ropes of it. It’s something new and you’ve just got to learn how to use it, haven’t you? The more you use it, the more you’re going to get more confident with it.” – Female, 65+, North West

6.3. Future expectations - 2 years

In the next 2 years, participants did not expect there to be much change in AI relating to home and personal devices. They felt that they would continue to use AI technology in similar forms, and would probably use it more often as the products become more user friendly. Some discussed how they would expect early adopters of technology to continue to take up new AI tools in the home at a higher-than-average rate (e.g. buying products first). They tended to expect to adopt the technology on offer after others had tested it.

“Because as you start replacing items and certain things get phased out, so, you, kind of, have to replace it with something that’s more advanced, which is like a natural progression.” – Female, 25-34, North East and Cumbria

Participants thought that while people would have sufficient knowledge of how to use AI products for their intended purposes, they would lack knowledge of how to protect themselves from the dangers of using AI. Key risks included data privacy and becoming too reliant on the technology rather than interrogating its outputs. Having power over how their data would be used was seen as a fundamental right, one participants expected to be established during this time to make the technology acceptable to them. As such, it was raised as an important policy implication and an area needing regulation. Others also raised the issue that the technology is intuitive to use when it is working, but confusing and time-consuming to fix when it is not, which some participants were not comfortable with.

“If you let me delete my data, or I request to know what data you have off me, probably would make me a little bit more confident” – Male, 25-34, Greater London

Generally, participants felt that AI devices would become more user-friendly, and would arrive ready to use, making the process for the consumer more accessible. Participants foresaw improvements in AI over the two-year horizon and expected to see AI-powered devices improve concurrently.

6.4. Future expectations - 5 years

Participants’ expectations for AI in the home and personal devices in 5 years’ time were similar to the 2-year timeframe; many predicted that AI products would continue to progress and that, as the technology advances, people would learn with experience. Participants felt that smart assistants would progress in their ability to help with household tasks. They envisioned that existing technologies such as Alexa and Siri would be better able to respond to prompts and efficiently carry out user requests. Participants also predicted that the range of features AI offers would grow, with suggestions including being able to monitor the temperature around the house, close blinds and draw curtains, or answer emails without supervision. They were supportive of these changes, as they felt they would allow people more free time to spend outdoors, with family, or taking part in hobbies.

“I think we’ll just learn it as we go along for what we want to use it for, like how to programme it to do the ironing.” – Male, 65+, West Midlands

As with the 2-year timeline, participants were confident that AI products would continue to be user-friendly, feeling that specialist skills would be required only by developers, not by users. Participants also suggested that the public’s ability to discern when they were using AI-powered tools would improve as they became more familiar with their use and function.

Some participants highlighted that over the next 5 years, access to AI in the home may not be universal. Different levels of exposure to AI across the UK could mean that some areas see higher rates of adoption than others. For example, they expected the development of AI to be more rapid in urban than in rural areas, which may be owed to perceived differences in economic investment and income levels.

Concerns were raised over safety with using AI in 5 years’ time, as participants felt that AI would reach a level of maturity where it would be able to mimic secure processes such as banking. They expected AI scams to become so sophisticated that everyone, not only vulnerable people, would be susceptible to AI scammers.

“They will trip up even the best of us. Those that are potentially more vulnerable and not as IT literate, it makes them almost very easy targets” – Male, 25-34, South West

Due to these concerns over safety and privacy when using AI in the home, values also centred around responsible AI, with participants calling for regulators to ensure that AI is implemented and used ethically in society. A key policy implication recognised here was that these priorities were something which the government should be responsible for ensuring, rather than the individual consumer.

“Somebody has to be thinking about the ethics of it, and do we actually need to adapt society rather than provide a simple easy fix answer?… I think, a warning bell. I think people like regulators and government have to look at this as well, properly, so that it’s implemented in the best possible way.” – Female, 65+, Greater London

In summary, participants expected AI to become even more user friendly, and so not to require the adoption of new skills. They expected, and wanted, provisions to be put in place to better protect how their data is used. Responsible use of data collected by AI looks to be a key value for policymakers to uphold, ensuring that people are protected from the risks of data privacy breaches.

6.5. Future expectations - 10 years

Participants expected that in 10 years’ time, AI smart assistants would develop to the point of holding improved conversations which replicate human ones, which could be beneficial for those who live alone.

“I know Alexa is due to get more AI functions, so you’ll be able to have more human-like conversations, I think for the elderly and the lonely that could be very beneficial.” – Male, 35-49, North East and Cumbria

As in other topic areas, participants tended to conflate AI with other areas of technology. Some raised the concept of having robots do household chores for them in the next 10 years, which might be powered by AI tools but generally fell out of scope for the research. Others raised the idea that, building on the 2- and 5-year scenarios, by this time we will have become very dependent on AI. This might threaten skills associated with the tasks AI was running, as well as human capacity to be flexible in the face of new obstacles or challenges. Participants felt that, whilst this technology is beneficial when it works, as we get more dependent on it we may no longer have a ‘plan B’ to fall back on. As such, one participant raised the idea of homes having ‘redundancy plans’ and ‘disaster recovery procedures’ to counteract over-reliance on AI and enable people to step in and take control.

“Like what happened now with the Microsoft update and complete chaos. So, I think, in 10 years’ time, it will be even worse if just one thing doesn’t work and then suddenly the rest doesn’t work as well.” – Female, 35-49, South East

Participants also expected the economic and regional divide to expand over the next 10 years, meaning that those in more deprived areas or on lower incomes may lose out from not having AI technology in the home. They considered such issues to call for policy interventions to ensure that certain groups do not get left behind by this divide. One respondent expected this to be fuelled further by the higher prices of new technology which these groups would be unlikely to afford, at least initially.

“I think areas that have more deprivation, I think they’ll suffer quite a lot from it. I think it will create an even bigger economic divide than there already is.” – Male, 18-24, East Midlands

6.6. Conditions for acceptability

Participants tended to accept the use of AI products in these areas. However, they discussed how they could only accept the presence of AI in this area if measures were taken to protect against the risks of increased AI usage at home and in personal devices.

  • Better transparency from companies using AI: to allow users to be aware of how companies are storing and using their data. People do not currently feel sufficiently protected, or feel that they understand how their data is being used, and so are very concerned about more AI being used when the foundations are not secure. Respondents linked this to regulation, requiring it to be made easier and clearer for people to see what data is being collected and stored. As well as transparency over what companies are doing with their data, participants also called for better control over how companies use their data, expecting that awareness of their data would be a foundation for this. The existing GDPR legislation was seen as a foundation on which other regulation of data privacy related to AI could build.

  • Need for trust in companies: Some participants recognised that they would have to accept that companies will obtain their data when using AI, but that they would have to trust that companies ‘actually’ delete or manage their data where relevant. They determined that companies should attempt to earn the trust of consumers, rather than just expect that users offer trust. Some held the higher expectation that they should not have to be in charge of managing their own data, and that instead this should happen automatically, whereby user data is deleted periodically by companies based on legal requirements.

  • Regulation on what companies can do with user data: Introducing a tiered system which determines how much data can be used and what companies can do with it was seen as a good compromise by one of the discussion groups to counteract this issue.

  • Introduction of a separate body to regulate AI: Alongside this, some participants felt there would need to be a non-governmental body responsible for regulating AI and data privacy, as many felt the Government’s remit was limited to legislation or raised questions of trust.

6.7. AI in the home / personal devices – public scenario

The visual and narrative below summarise how the public expect AI in the home and on personal devices to develop, as well as the skills necessary to support this. By the end of the discussion, participants expected that across the next 2 years AI devices would improve in their capabilities, enabling people to live easier lives by freeing up their time. They also expected these tools to become more user friendly, and so not to require any more skills than they already have.

Within 5 years, participants thought devices could help with energy saving and potentially provide companionship for those suffering from loneliness. However, they were also concerned that the maturity of these devices at this point could lead to more scams which are harder to identify.

By 10 years’ time, participants felt that we would become so reliant on AI technologies that we would struggle to recover if an AI product were to fail. Participants expected that AI devices would become more advanced, able to integrate with each other and provide a holistic home system. They were unsure whether, if regulation were introduced, it would have an impact on companies producing these devices. These themes, and scenarios, will be explored further with a professional audience working in smart devices, mobile devices and telecommunications in WP6.

Figure 6.1: Visual Scenario depicting AI in the home / personal devices for the general public

Table 6.1: Narrative for AI in the home / personal devices



Narrative

Capability for skills development

Natural skills development: participants believed that the necessary skills would develop naturally amongst the public, as they already have with technology more broadly.

User friendly products: AI products in this scenario will continue to become more user-friendly and so not require the development of new skills.

Skills Overview


Personalised needs: skills mainly revolved around using the technology in a personalised way and knowing the correct prompts to use for each device.

Understanding of data usage: Along with this, a significant set of skills mentioned involved understanding how data is used and processed by companies, and the ability to switch off settings to maintain privacy and security.

Value alignment


Privacy and human control: expected skill development aligned well to values around privacy and human control. Despite participants not expecting to need to develop many skills in relation to using AI in the home, they recognised that useful skills would be related to understanding how data is used and processed by companies and knowing how to switch off settings to maintain privacy and security. This would allow people to feel more comfortable that using AI will not breach their privacy, and with their expectation of regulation, that AI would remain under human control.

Opportunity




Efficiency: AI in the home can continue to make people’s lives easier and reduce the time it takes to do tasks around the house.

Accessibility: Integrating AI and robotics could produce significant benefits for people who have mobility issues or other disabilities.

Personalisation: The ability to personalise these devices, for example to recommend diets, was seen to be an advantage, and could have knock-on implications for the healthcare services.

Challenge


Constantly listening: One disadvantage of these systems is that they are either always on or always off and so there is no middle ground.

Imperfect tools: Participants also referenced feeling annoyed by their AI products at home at times, due to interruptions during conversations or them misunderstanding the prompt.

Policy Impact


Morals and ethics: The potential societal implications over technology making us more isolated were considered by participants.

Affordability issues: Concern that people who need support, for example those with health conditions or the elderly, could be priced out, leading to access inequality.

Data privacy measures: Overall, there was cynicism about whether data privacy measures would be implemented effectively and how this could be regulated.

7. AI in Leisure / Entertainment

7.1. Overview

Key Findings

  • Within leisure and entertainment, most participants focused on the impact on the media and entertainment industry given the perceived increase in content creation. However, there were concerns about quality, creativity, and impact on human professionals.

  • Over 2-5 years, participants expect that AI will be further integrated into content creation, enabling cost savings and wider participation, but potentially risking intellectual property issues and job losses.

  • In a 10-year horizon, participants expect AI-generated media to become highly sophisticated and universally available, which raised concerns about media literacy, content diversity, and the need for AI regulation and transparency.

  • Participants saw a gap in skills in AI discernment, critical thinking, media literacy and creativity. They foresaw opportunities in personalisation and idea generation, but challenges around career impacts, content quality and regulation.

  • Policy impacts may include misinformation risks, conflict between creative industries and workers, and updating copyright and intellectual property laws.

7.2. Current perspectives on AI in leisure / entertainment

Participants interacted with AI in leisure and entertainment primarily as consumers of AI content. As with other topic areas, participants expressed some uncertainty as to whether the products and services they used involved AI. Some products and services explicitly highlighted their use of AI, such as Spotify’s AI DJ or AI powered opponents in games. Others blurred the line between AI and other forms of technology, such as holographic concerts or music production.

Participants noticed the increasing use of AI for recommendations on streaming platforms and social media. They felt that these recommendations were becoming more specific to their viewing and consuming habits. They expected recommendations would continue to become more specific, based on smarter, better-trained algorithms. Participants felt that this was useful in helping them get content they enjoyed, but that it could lead to echo chambers where it was hard to find new media or think in new ways. Some participants raised concerns about being tracked across all their online activity, as this was felt to be an invasion of privacy.

“When you put a like on something, often YouTube says, ‘Oh okay, got you, I will give you more videos like that,’ so obviously they are tracking everything you are looking at to feed more into you.” – Female, 35-49, Greater London

Participants also observed AI within content itself, especially in music and gaming. Auto-tuned songs were provided as examples of AI augmentation, though some uncertainty existed about whether this was driven by non-AI music technology. Some participants also noted the use of AI to produce music made to sound like past artists, such as Elvis. Video games were noted to have AI-powered characters that display more advanced behaviour, like long-term strategy instead of easily predictable movement. Participants generally felt that this current use of AI in content creation was ‘gimmicky’, rather than being valuable or high quality. Video games were an area that generated more excitement among some participants, who felt that AI could offer interesting customisation and richer experiences. Examples included customised story paths and non-player characters (NPCs) that respond like an LLM would.

“AI is being used in chess now. So, like, you can make an AI analyse a position and then all the top Grand Masters are learning from the AI, and now beginners are also starting to learn from it as well.” – Male, 18-24, Greater London

While discussing the use of AI to produce media content, participants debated the quality and amount of “soul” in computer-generated media. It was generally felt that AI-generated content was currently poor or not interesting to consume. There was also a sense that AI-generated content felt “generic”, and that participants would be less likely to engage with it. Participants held mixed views on whether this would improve in the future.

“‘Those That Must Die,’ about gladiators in Rome, and it was so obvious that the script was written, at least partially, by AI. When a person writes a script, or a book, or something, it has something personal in it. This was just so bland it was boring.” – Male, 65+, West Midlands

Participants generally had limited experience of using AI to produce or help create media. Among those that had, participants discussed existing applications like AI-assisted music composition tools to help quickly build songs, though the outputs were felt to be short on quality. Others had tried AI image generation, although they also felt that the results were not professional. Participants tended to feel that these tools would become increasingly integrated into the creation of media in the future. As such, there was felt to be a need to have some proficiency in using them, albeit with mixed feelings about the impact on quality.

Participants highlighted a lack of confidence in distinguishing AI-led misinformation online. Developing skills in identifying artificial versus human-generated content was deemed crucial to mitigate the associated risks. Participants expected this to get more challenging in the future as AI improved. As an example, participants highlighted realistic fake media shared in relation to the war in Ukraine.

“I saw a Russian propaganda deep fake from the Ukrainian war. It was something that they put on Russian state television, and then after it was on this YouTube channel. And they basically showed the real image and then they showed the deep fake, and the deep fake was what was on Russian state media. There was no way you could tell that was a fake. It was perfect.” - Male, 35-49, West Midlands

With the increasing integration of AI into the media landscape, participants raised an understanding of ethics as a key concern and important skill. As lines blur between human and AI outputs across entertainment domains like scripts, songs, and visual art, participants felt that society must determine acceptable thresholds for the automation of outputs and their use. Participants also raised the issue of fair use and compensation, which is particularly important in the creative industries. Considering policy, they highlighted the need for regulators and representative bodies (e.g. trade unions and IP experts) to stay ahead of the curve to ensure the protection of those who work in the creative industries.

“I don’t know if there actually is any legislation in place to stop a person from putting out a song that they’ve written, but it sounds like Rihanna who’s singing. … I think the consumer’s always pushing for the next thing, whereas the producer, and you could say the government, is always behind on that.” - Male, 25-34, Yorkshire and the Humber

“As a consumer, as long as the end product is as good or better, I haven’t got a problem with them using it.” - Male, 35-49, West Midlands

7.3. Future expectations - 2 years

Over the 2-year horizon, participants expected increasing integration of AI into creating and delivering media, arts, gaming, and other entertainment content. This automation was predicted largely for cost savings rather than quality improvements. Participants felt that music would be a particular area of growth, with tools like AI music composition apps that can draft songs by genre. When considering the industry perspective, there was an expectation that a broader proportion of the population would be able to create content. However, some participants raised concerns around the impact this might have on human creatives’ ability to compete on price.

“I make pretty much all my living through music, and a lot of that is through song commissions. Therefore, it does worry me that AI can produce a song. You just need somebody to just input a button. That concerns me, in terms of job prospects.” - Non-binary, 35-49, Scotland

Participants generally expected the quality of AI-generated entertainment, particularly music and art, to be low. However, perspectives differed on whether this would put consumers off. Some participants believed that the leisure and entertainment market was one in which consumers’ purchasing habits had more influence over how content was made than in other sectors. Others suggested that the cost savings associated with using AI were too great to pass up, and that people would accept some decrease in quality. This would reduce the choice of content available. Some believed that explicitly “human-created” content may become a unique selling proposition, with innovators incorporating AI only as an augmentation tool for willing artists rather than as a wholesale replacement. Participants generally felt that there was an inherent value that individuals put into a craft, which may be increasingly appreciated.

“I do think how we consume media and entertainment is going to change with AI… you can go and create your own programming with things like YouTube and Twitch. You can go and choose what you want to consume, when you want to consume it.” - Male, 25-34, South West

“Now, we have a streaming platform with an AI person that goes, ‘Because you maybe hinted on looking at that, we’re going to send you a whole bunch of things your way.’ … They make us think that we have the choice, but we don’t.” - Non-binary, 35-49, Scotland

Participants raised concerns about intellectual property over the 2-year horizon. With AI using existing content to create its own, participants flagged risks and legal challenges as AI is increasingly used to create content and integrated into familiar systems. A minority felt they were losing control of any content they produced, as organisational policies (e.g. Adobe’s) were evolving outside of their control. Some pushback was expected as artists and creators looked to keep control of their content, and conversations about policy and legislation would follow. Participants related this to burgeoning questions about ethics and how celebrity likenesses could appear in different settings – “is it okay to have David Attenborough commentate a football game?”

“I would be more worried now that AI is trying to steal actresses’ or actors’ faces and voices, and you can just suddenly put anyone anywhere, and the real person is not even aware of it …” - Female, 35-49, South East

7.4. Future expectations - 5 years

Over the 5-year horizon, participants expected AI to have an increasingly significant impact across the leisure and entertainment sector. They foresaw AI being used to generate a wide range of content, from music and scripts to video game environments and storylines. Some participants predicted a divergence, with cheaper AI-made media competing against more expensive “artisanal” human-created content, building on the 2-year scenario. Participants generally felt that consumers would prefer human-created content, particularly for entertainment with a physical or “real-world” aspect, such as concerts or art exhibits. Some participants felt this might exacerbate existing inequalities, with the wealthy able to purchase human-created content while others were likely to be priced out and restricted to AI-generated media.

“Once people become more educated about AI-generated content, they will turn against it, preferring human creativity instead.” - Female, 35-49, South East

Participants also expected the barriers to content creation to continue to lower. However, some participants worried that this ‘democratisation’ of creativity may devalue the skills of professional creatives and further saturate the market with AI-generated content. In terms of skills, participants noted that AI may enable smaller teams and ‘jack-of-all-trades’ creatives, as most of the work could be performed by an AI. However, they cautioned that an overreliance on AI could lead to the erosion of creative abilities, and a loss of ‘human touch’ in creative work.

“I think more people will make their living on platforms like YouTube going forward. So, more people will be like sole traders working for their own creating content. So, any AI tools that will help people get, kind of, started in an easiest possible way, I think that will definitely develop.” - Female, 35-49, Greater London

Participants expected further legal and ethical issues to arise around intellectual property rights as AI learns from and mimics existing works. Regulation was seen as inevitable but potentially slow-moving. Participants tended to focus on two potential types of regulation: those concerning or protecting human labour, and limits around what AI can and can’t be used for. Unions were anticipated to push back on AI use to protect creative industry jobs. Participants anticipated policy requirements for transparency in disclosing AI’s use and felt strongly about flags or identifiers highlighting when something had been made with AI (e.g. credits). Some participants suggested potential bans on AI-generated news media to ensure public trust remained high. As AI’s capabilities grow, authenticating human-created content could become a selling point for some content types.

“They have to be regulated, I think, so that doesn’t happen. There has to be a duty of care for the actual workers, to ensure that they keep their jobs because we’re not interested in watching AI at the moment. We want to see the actual actors.” - Female, 50-64, Scotland

7.5. Future expectations - 10 years

Looking ahead to the 10-year horizon, participants anticipated AI’s impact on the leisure and entertainment sector to intensify, with regulation struggling to keep pace. They foresaw AI being used extensively to generate content across music, film, gaming, and more. However, participants raised concerns about the implications for creative professionals, media literacy, and the overall quality and diversity of entertainment.

Participants expected AI-generated media to become increasingly sophisticated and convincing, blurring the lines between human and machine-made content. Some worried this could lead to a proliferation of derivative, lower-quality art that dulls people’s skills in critically evaluating media. This was felt to have a potentially broader impact on media literacy, with “generic” AI content making people unable to think critically about what they consume or watch. A few participants noted that the increasing use of algorithms to show people the same type of content would increase this risk. Some participants suggested that creatives might take fewer risks in order to fit into the market, at the expense of more ambitious, “epic” creative works.

“But you listen to one thing on YouTube, or watch one thing on YouTube, and then, it just focuses on that area of you. I’m a very complex being and I don’t just watch that particular type of thing, I watch a lot of different things. So that’s a bit irritating, but I can see that that will develop.” - Female, 65+, London

As in the 5-year scenario, participants anticipated a divergence between cheaper, AI-generated content consumed by the masses and premium, human-created works for wealthier audiences. However, they also recognised that many consumers may not care about the origins of their entertainment as long as it appeals to them. Some participants highlighted that, in the long run, AI may enable more choice of content through personalisation, such as “choose your own” stories in movies and video games.

“I love the idea of being able to control and play through… if you’ve got a film, you can actually decide, ‘Which way is this going to go?’ That’s quite cool. You get to be a bit more creative, in that sense.” - Non-binary, 35-49, Scotland

The need for regulation and transparency was seen as more pressing in the 10-year timeframe, particularly in areas like news media where AI-generated content could be used to mislead or manipulate. Participants called for clear labelling of AI’s use and potential restrictions on certain applications, most importantly in news and media. However, they expressed fear that regulation would lag behind the rapid advancements in AI. Participants anticipated the need for a global policy on AI to regulate tools that cross borders and to reflect that technology cannot be tackled in a silo. A minority felt that the UK could take the lead in bringing this group together or in developing legislation that would have a local impact, such as on copyright.

“I think there needs to be a coming together of the global governments across the globe, because is our own legislation going to impact what is a global phenomenon?” - Female, 50-64, South East

“There is huge potential for the UK to get ahead of the curve, in legislating things like copyright and artists’ rights.” - Female, 35-49, North West

7.6. Conditions for acceptability

As outlined in the 5- and 10-year expectations, participants had a strong focus on regulating the increasing use of AI in media and entertainment as a condition of its acceptability. Participants generally considered three types of regulation:

  • Regulating the use of AI in generating disinformation: Participants noted serious risks of misinformation with the use of AI in news and media. This misinformation could be either malicious or unintentional. Participants suggested introducing flags or ‘labels’ on content to highlight when it was AI generated. These would ensure that consumers could evaluate and determine the differences between AI and non-AI generated content. Some also suggested banning the use of AI in news media entirely.

  • Legislating to protect creative industry jobs: Participants felt that jobs in the creative industry would be at risk as AI was increasingly used to produce content. Examples of early moves in this space, such as the writers’ strike in the US, were given. This would risk dulling creative skills, such as imaginative thinking or idea generation, as the market for these skills tightened. Participants generally felt it would be important to protect these skills more than protect these jobs.

  • Copyright and intellectual property laws: Participants agreed that property rights risked being violated as the use of AI continued to grow. Since AI learns from existing data, it has already ‘used’ media without the express permission of creators. This also tied into ethics, with participants questioning whether the use of the likeness of existing and former celebrities without their permission was acceptable. With AI moving quickly, some participants felt that these steps should be taken sooner rather than later.

7.7. AI in leisure and entertainment – public scenario

The visual and narrative below summarise how the public expect AI in leisure and entertainment to develop as well as the skills necessary to support this. Over the next two years, the public expects AI to be increasingly integrated into the creation and delivery of media and entertainment. This shift raises concerns about the impact on human creatives’ ability to compete, thereby threatening creative skills.

Over the next five years, the public anticipates AI will significantly impact the leisure and entertainment sector, generating a wide range of content and potentially leading to a divergence between cheaper AI-made media and more expensive human-created content. While this ‘democratisation’ of creativity may lower barriers to content creation, it could also devalue professional creative skills, saturate the market with AI-generated content, and erode creative abilities due to overreliance on AI tools.

Finally, over the ten-year horizon, participants anticipated a scenario in which increasingly sophisticated AI-generated content blurs the lines between human and machine-made media. This could heighten the need for robust regulation and transparency measures to keep pace with AI’s rapid advancements. These themes, and scenarios, will be explored further with a professional audience who work in the creative industries in WP6.

Figure 7.1: Visual Scenario depicting AI in leisure and entertainment for the general public

Table 7.1: Narrative for AI in leisure / entertainment


Narrative
Capability for skills development
Discernment: Being able to recognise where AI was being used and how.

Critical Thinking: Ensuring that people do not take news or media at face value.

Media literacy: In a world of increasingly similar AI-generated content, being able to evaluate and understand media will become ever more important.

Creative skills: With the creative job market under threat, skills such as imaginative thinking or idea generation may be dulled.
Skills Overview
Discerning disinformation: Already an issue in a world not yet saturated with AI, the potential for disinformation to spread becomes much higher with AI. Critical thinking and evaluation of media online are currently low, with significant capacity for improvement.

AI media production apps: Tools that can create art, music or other media based on text inputs. Low barrier to entry and easy to use.

Creative skills pipeline: Currently strong and a competitive advantage the UK has. Participants expressed concern that AI-generated media might threaten these skills.
Value alignment
Privacy and Security of their information: The increasing use of AI for personalised recommendations on streaming platforms and social media made participants feel that their viewing and consuming habits were being monitored, which some saw as a privacy issue.

Choice of whether to use AI: Participants discussed the potential divergence between cheaper AI-generated content and more expensive “artisanal” human-created content. This raises concerns about inequality, where wealthier individuals would have the choice to consume human-created content, while others may be priced out and restricted to AI-generated media.
Opportunity
Saving costs: There are significant cost savings for businesses looking to use AI to generate media.

Lowering the barrier to entry for creative outputs: Individuals will be able to produce more content directly and have better access than ever to channels of communication.

More tailored outputs: AI has the potential to create personalised media that best suits people’s preferences and skills.

Idea generation: AI can be utilised early in the creative process across a range of industries.
Challenge
Impact on careers: Both now and in the future, with fewer people entering creative spaces or roles disappearing without the public realising.

Difficulty around regulation: Increasing use of AI in the content creation sphere will result in pushback from industry professionals and those affected by potential copyright infringements.

Reduced content quality: Participants expect the quality of media to go down with AI use.
Policy Impact
Increased risks of content exposure: The risks around misinformation are high with AI generated content. Children and young people will also have access to tools to be able to generate this content themselves, which may pose risks to safety.

Protecting industries: Participants foresaw conflict between workers / unions and businesses.

Copyright and intellectual property laws: These are unlikely to keep up with the pace of AI adoption, or with the specifics of how to regulate the use of people’s likenesses.

8. AI in Work / Career

8.1. Overview

Key Findings

  • Participants tended to focus on the use of AI in applying for, and screening, jobs. AI was generally seen as a benefit for hiring managers, making the hiring process easier and more efficient. It was also recognised to potentially aid applicants, who could use AI to support the writing of applications.

  • Participants foresaw challenges around unsuitable candidates being hired due to the use of AI selection software, as well as the potential for AI systems to exacerbate biases during the recruitment process.

  • Within 2-5 years, participants expected that most companies would be using AI in some capacity to hire new staff.

  • Over the 10-year horizon, participants were unsure about how AI might develop in the hiring process. They expected applicants to have encountered AI more when applying for jobs and expected them to have had training to manage these changes.

  • Policy implications included ensuring that the hiring process is fair and does not regurgitate human biases, and that there is transparency with candidates over when AI is being used during hiring.

8.2. Current perspectives on AI in work / career

Participants shared reflections on the use of AI in the recruitment process, typically in completing applications and hiring. However, they also shared more general reflections of their interactions with AI in the workplace.

Many participants reported that they currently use AI products, such as ChatGPT, at work. Uses included writing up meeting notes and drafting reports, which participants generally felt LLMs were good at. Several participants referenced using AI for recruitment, for activities such as approaching doctors on LinkedIn, or using an app which matches a business owner with suitable jobs and automatically emails potential clients.

Participants spontaneously raised the subject of training employees on AI skills in the workplace; they felt this was something companies were not doing enough of, and which should be carried out more in the future.

“There’s some catching up to be done, and I guess there’s a big industry gap for retraining people, you know, if everybody needs to be upskilled then there’s a lot of opportunity for businesses to upskill people.” – Male, 35-49, West Midlands

The majority of participants had not had direct experience of using AI during the hiring process, either as a hiring manager or as an applicant. Among those that had, the main concern was the risk of bias reversing improvements that have been made in recent years. Some worried that AI could regurgitate biases that humans hold, and so worsen the issue of subjectivity rather than resolve it, while others felt it could make the process more equitable. Many participants felt it would be unfair not to have a real human looking through applications and making the final decisions on candidates; human involvement would help alleviate their concerns. They were happy for AI to be used in the sorting process, with the assurance that a manager would complete the process. Younger generations expressed more caution about AI-enabled hiring tools, which could be due to their more recent experience of going through the hiring process and potentially encountering AI.

“I worked with this girl at my last company, and she said she’d applied for hundreds of jobs under her actual name, which was quite hard to pronounce, and then she applied for the same jobs using her middle name, which was a generic name, and got the job. I feel like it could be used in that way quite negatively as well.” – Female, 25-34, East Midlands

“If you’ve got a big stack of applications come in, is quite difficult, you end up binning people just for silly things because you’re trying to whittle it down… if there had been a tool that would have scanned the whole thing, it would perhaps have made a more rational reason for shortlisting people, it would have been useful.” – Male, 65+, North East and Cumbria

Participants raised concerns about how the applicant journey may be impacted by using AI. Participants discussed how employers are already asking for video CVs and feeding these into an AI selection system, which will determine a shortlist without human involvement. Others shared how, because companies may use an AI system which searches for key words or phrases, they felt pressure to identify and include those words to get past the system and avoid rejection.

“How am I thinking about how to frame things for a computer system instead of framing it for a person to read. Because, you know, there’s that lack of intuition then, that you’re dealing with in the job hunt, and the selection process.” – Female, 35-49, North West

Participants who had experienced AI in the hiring process generally reported a negative experience. They felt the process was impersonal and unnatural, meaning that it could be unfair for applicants who are unaware of exactly how the AI system works. They felt there was a need for the ‘human skills’ involved in hiring, including getting a sense of character or other harder-to-quantify skills for organisations. Additionally, they felt the public would be required to use a different set of skills to complete these applications successfully and provide the appropriate information.

“To feel comfortable speaking to a screen is very different to how you could feel just speaking to a person… only certain people, with certain specific things that they’re looking for that you’re not going to be able to know about, are going to get through, and does that necessarily mean that they are better in any way? I don’t think it necessarily does mean that.” – Male, 18-24, South West

“Having a great CV written, whether it’s AI or a person who specialises in writing CVs, doesn’t mean that you are a great employee. Then you realise you’ve employed someone who looks great on paper and then he or she can’t do a thing.” – Female, 35-49, South East

Some participants discussed the trade-off between the perceived problems of AI in the hiring process and businesses needing to hire the most skilled workforce and reduce their costs. They generally concluded that businesses would take the option that best suited their bottom line. As such, participants raised the concern that using AI in hiring may make the process less robust, with the possibility that unqualified candidates may make it through if they work out how to ‘beat the system’, leaving businesses with an inadequately skilled workforce.

8.3. Future expectations - 2 years

Within 2 years, participants expected to see the use of AI in the hiring process increase for small businesses. Some participants predicted that AI would be used by most, if not all, companies over the two-year horizon. One participant, who had previously worked as a hiring manager, felt that this shift would reduce the time spent on recruiting by hiring managers and generally improve efficiency. This led participants to consider the policy implications of using AI for hiring, calling for the process to be fair and equal for all applicants, and not to perpetuate biases.

“I think the next 2-3 years this will start to manifest in every part of the country.” – Male, 25-34, East Midlands

In terms of applying for jobs, participants felt AI could be useful for completing applications for individuals with lower literacy levels, or for those who feel less confident articulating their thoughts but may still be good candidates with the other skills a business requires. However, participants also discussed risks that AI might lead to uniform or generic CVs, with similar patterns of words and writing styles. This would make distinguishing eligible candidates harder, rather than easier.

8.4. Future expectations - 5 years

Across the 5-year time frame, participants expected businesses to begin to discourage candidates from using AI to write applications. This could happen in two ways: technology would develop further to enable businesses to better identify where AI is being used in applications (similar to university submissions), or new ways of screening candidates would emerge. This could include video applications and in-person interviews or aptitude tests. Participants discussed how they felt that video interviewing would start to take greater priority over written applications, to avoid applicants using AI to write their applications and to enable recruiters to maintain their importance in the hiring process.

“You’re going to see more countermeasures I feel from the other side in order to counteract the use of AI, and I think it’s going to be things, you know, like video cover letters, and things like.” – Male, 25-34, South West

Over the five-year horizon, a small number of participants suggested new uses for AI in the hiring market, including AI to headhunt for a specific position. Instead of filtering down a pool of existing applicants, this type of AI would search across online job markets for specific candidates with the appropriate skill requirements and invite them to apply.

“If you could give AI access to peoples’ data and you wanted to effectively headhunt someone for a specific position, then you could make an AI programme go and find the perfect applicant, and then you could contact-, they might not be looking for a job but that would be the perfect person, or people, for a position.” – Male, 35-49, West Midlands

Participants felt overall that, whilst the use of AI for hiring would grow for most companies, AI would not replace the entire hiring process, and there would still have to remain a human element within the decision-making process. As with other topic areas, maintaining a sense of human control was felt to be an important value in accepting and managing high levels of AI use in the future. They also discussed the need for transparency, calling for policy to reflect this value by ensuring that companies are honest about when they are using AI during a hiring process.

8.5. Future expectations - 10 years

Participants struggled to anticipate how AI might develop in work and careers in 10 years. As with other topic areas, participants aligned their expectations for the development of AI with other technologies. For instance, one participant shared suggestions that robots might be conducting interviews in 10 years’ time.

As an applicant, there was the expectation that, in the long term, the public would have developed the skills to determine which key words AI systems might be searching for, and so would become better at incorporating these into their applications. Changes to the application process and job market might also have filtered down into the education system, better preparing people to apply.

“Maybe in 10 years’ time, the artificial intelligence will be more intelligent, but I don’t really see it. I think there is a limit of how much this intelligence can learn.” – Female, 35-49, South East

Another point made was that the traditional employment market may evolve over the next 10 years as a result of AI, but separately to its implementation in the hiring process. One participant imagined that new workspaces might make people less inclined to follow the traditional corporate culture of the labour market, and instead may work as contractors. This increase in contractors may be powered by a lowering of the skills barrier due to AI, with it able to automate services like accounting or marketing. This reflects suggestions made in the 10-year horizon for leisure and entertainment. One participant predicted that with more contractors, hiring processes would change – shifting more towards freelancer-style short contracts, rather than full time employment. However, participants generally maintained that even after 10 years, AI would not replace the whole hiring process.

8.6. Conditions for acceptability

Many participants felt that, as with other scenarios, people will come to accept the use of AI in the job market. Some conditions to mitigate the perceived risks associated with this expected transformation included:

  • Maintaining a human aspect to the hiring process: Participants did not want AI to replace the process entirely; they emphasised the value of human input and required that AI remain an assistant rather than replace human roles. They recognised a trade-off: they could not stop companies from using AI in the initial round of hiring, given the efficiency it provides.

  • Regulation of the use of AI in the hiring process: Participants generally felt that the use of AI would be acceptable if biases were shown to be minimal or limited with human involvement. They also noted a condition whereby companies should inform applicants when AI is being used to screen or assess their applications, which would allow them to place more trust in the company’s hiring process and consider it to be fair. Participants considered having an AI programme written to check that the programmes used for hiring are fair, which was recognised to be faster than a human checking it. However, participants tended to be sceptical about the feasibility of enforcing these regulations on businesses.

8.7. AI in work / career – public scenario

The visual and narrative below summarise how the public expect AI in work and careers to develop, as well as the skills necessary to support this. Over the next 2 years, participants expected more companies to begin to use AI in hiring processes, especially for initial sifts and at first round stages. They were concerned about the level of bias that might be perpetuated by the AI systems involved, and also how this could increase the gap between those who already use AI and those who do not.

Across the next 5 years, some participants expected there to be a major shift in the job market, and the potential decline of the recruitment industry. They felt that a human should still be in charge to have the final say over hiring, and envisaged the potential use of AI to check fairness in the AI systems used in hiring. The main skill they expected to be linked to this development was critical thinking, both on the part of the hiring manager to recognise if an applicant has used AI, and on the part of the candidate to navigate a potential AI test involved in a hiring process.

Within 10 years, participants felt that regulation over data processing may occur. They were hopeful that AI would be used as a tool which works alongside not instead of people. However, they saw that customer service jobs would be in significant decline by this point, with AI likely to replace them. These themes, and scenarios, will be explored further with a professional audience who worked in professional services in WP6.

Figure 8.1: Visual Scenario depicting AI in work / career for the general public

Table 8.1: Narrative for AI in work and career


Narrative
Capability for skills development
Recognition of AI use: The main skills development identified was for hiring managers to be able to recognise when AI is being used in applications, supported by software or by critical thinking skills.
Skills Overview
Passing the ‘AI test’: Participants felt that people would need to develop the skill of being able to write applications which can bypass AI systems over the next 10 years, which could include incorporating key words or phrases. This could be extended to completion of AI video interviews depending on technology developments.

AI use in applications: Potential for candidates to use AI to support with writing applications.
Value alignment
Transparency: Both on the part of the applicant and the hiring manager in terms of when AI is being used.

Human control: Ensuring that AI remains a tool employed during the process and that decisions over recruitment remain with hiring managers.

Fairness: Ensuring all applicants have an equal chance during the process and that biases are not present in any AI software used.
Opportunity
Efficiency: Participants mainly recognised the benefits for hiring managers, as AI could reduce the time taken to hire new staff.

Idea generation and application writing: Applicants who may struggle to write a strong application, but still be effective candidates, could use large language models such as ChatGPT to help.
Challenge
Potential recruitment of unqualified people: AI could lead to the recruitment of unqualified people as it may undervalue skills such as communication, charisma or empathy.

Loss of jobs: Concern over jobs being lost due to AI replacing the recruitment process and so reducing the need to hire recruiters.
Policy Impact
Regulation of the hiring process: There was an expectation that the government would need to regulate the hiring process, such as informing applicants when AI was being used.

Ensuring AI is used fairly in hiring: Participants wanted to ensure that use of AI would not unfairly disadvantage certain groups by perpetuating biases.

9. AI in Education

9.1. Overview

Key Findings

  • Participants had positive outlooks for the use of AI in self-directed learning. Concerns centred on AI’s impact on critical thinking, social skills, and equity in formal education settings.

  • Over the next 2 years, AI is expected to offer more personalised learning experiences, but there are practical barriers to implementation in schools and risks of exacerbating inequalities in access.

  • Over the 5-year horizon, concerns arise about AI increasing the “digitisation” of learning at the expense of socialisation and critical thinking skills, while also potentially disrupting job markets and education pathways.

  • Over 10 years, AI-powered learning is expected to be more integrated into classrooms, potentially replacing teachers and posing risks to social skill development, but also creating opportunities for lifelong learning.

  • Policy implications included considerations for equitable access to education equipment, guidelines for AI use in educational settings, protecting educational roles and incorporating AI into the curriculum.

9.2. Current perspectives on AI in education

Participants used or interacted with AI in education in one of two ways: either through self-directed learning (such as learning a language or coding), or through formal education (schools and universities). Participants tended to talk about the potential for AI in self-directed learning in a positive light, while they were less positive about its use in formal learning.

In terms of personal learning, participants shared examples of using AI for a range of learning tasks, including getting AI to write test questions or projects, retrieving and summarising information, taking courses run by AI avatars, and using tools like Duolingo or ChatGPT. Participants tended to feel that AI made their self-directed learning more tailored and specific to their circumstances, better embedding information.

“Education-wise it’s coding for me. So, when I’m programming, if I want to debug some certain thing, then I’ll look that up with ChatGPT.” - Male, 25-34, Scotland

“Mainly the chat box like ChatGPT to explain some content that I don’t understand. The popular prompt that I give it is, ‘Explain it like I’m five.’” - Male, 25-34, South East

However, some participants expressed concerns over purely AI-driven learning, noting that they would like a human to talk through problems or solutions in more detail, particularly for more complex topics. This would also mitigate the risk of AI sharing incorrect or misleading information.

“I was looking at, sort of, data analyst courses, so it is something that I think AI could definitely help with, but I’m more thinking of my style of learning. I would prefer to have someone to actually chat and talk a problem through with.” - Female, 35-49, North East and Cumbria

Participants also highlighted the potential for AI to enable better learning for groups with special requirements. These could be children with different learning styles, or those with disabilities that limited the ways in which they could learn. This could have significant, positive outcomes on the education of these groups by using methods that are currently challenging or impossible to implement in traditional education systems.

“My son, who attends a special school… within special education it’s a massive help, this AI. Because he has less meltdowns, he’s causing other people and himself less damage, because he can use this programme.” - Female, 25-34, West Midlands

However, participants raised a range of concerns about the increasing use of AI in formal education settings, particularly primary education. It was suggested that using AI to do homework could encourage laziness and result in lower levels of critical thinking. Concerns around integrity and cheating were also raised in the context of formal education, with some suggesting they would have been tempted to cheat if AI had been available for them. Many felt that AI-enabled technology provided a means for avoiding work, rather than a way to learn and improve. They also suggested that methods to distinguish AI-written outputs from human-written ones would become less effective as AI improved. Participants discussed the overlap between genuine use and abuse of AI in education, where there were felt to be some grey areas. Even if AI is used in a legitimate capacity, the ability for it to perform tasks for students with minimal input was said to be reducing the incentive to understand topics in depth. Some participants also expressed concerns about AI embedding wrong or bad information (e.g. historical facts) and emphasised the importance of continuing to rely on multiple sources of information.

“The challenges at the moment in tertiary education are numerous, but I think you want to boil it down to academic integrity. What constitutes a helping hand in terms of producing academic work, and what constitutes over-reliance?” - Male, 50-64, South East

9.3. Future expectations - 2 years

Over the 2-year time frame, participants expected AI to become more integrated into learning solutions, particularly for self-directed learning. Participants generally felt this was positive, and that AI has great potential to deliver personalised learning experiences, such as providing materials to suit specific learning styles. Some participants suggested we might start to see AI platforms used for tutoring; these might be able to walk learners through examples and focus on areas they are struggling with. Another perceived advantage of these developments would be flexibility, as people could fit their learning in and around family commitments. However, some participants suggested that there would be limitations to the value of this learning, and that it would not be a substitute for in-person teaching.

“We’ll see a lot more companies advertising, sort of, one-to-one tutoring services using AI that will promise great results. But I personally feel that there will be limitations to what it can teach.” - Male, 25-34, South East

Participants raised concerns about the potential for AI to increase inequalities in access to education, even if AI itself made learning more inclusive. For example, educational software like Duolingo could lock AI-powered features behind a paywall rather than offering them free to all users, potentially exacerbating economic divides in access to educational resources. This concern extends to mainstream education, with better-resourced schools, and thus their students, potentially having an advantage.

“Education’s already stacked against poor people, and poorer people in society, because they don’t have study space. … AI could help that, or it could be prohibitive, based on access to tools.” - Female, 50-64, South East

“During Covid, that [inequality] was very evident, the people who have and the people who haven’t. So, they haven’t got access, so there are just massive issues around that and that’s going to be ongoing with AI.”- Female, 50-64, West Midlands

Considering formal learning, some participants felt that, though AI offered opportunities for a more tailored experience, there were practical barriers to implementing it in schools in the short to medium term. For example, the financial costs and technological considerations of ensuring all students have access to computers in the classroom. Instead, participants felt AI would be better suited to helping teachers plan lessons and for students to use for their homework. One participant, a teacher, said the current problems in teaching were not ones that AI could help with; the major issues centred on classroom management, behaviour, and relationships.

“Even if they had their own laptop, their own iPad that had an individualised programme, the kids I’ve got that I’m thinking of, they’ll need me to help them to access it or me to sit there and keep them in the classroom. AI is not going to help with those. It’s all about personal relationships, teaching. AI hasn’t got that. That’s the very first cornerstone of teaching.” - Female, 50-64, West Midlands

9.4. Future expectations - 5 years

Over the 5-year horizon, many participants raised concerns about AI increasing the “digitisation” of learning, with more emphasis on screen time and online learning environments. This was felt to be the opposite of what school should be, where the emphasis should be on socialisation and interaction, particularly for younger children. Some participants suggested that attempts to lower or reduce costs at schools could lead to teachers being replaced with AI-powered learning tools, which would be detrimental to student learning and welfare. With fewer teachers and less interaction with classmates, communication and interaction skills would be blunted, alongside other more ‘tacit’ forms of learning. This was an area in which participants felt strongly that relevant policy measures to embed AI literacy in schools should be weighed against risks to other, “softer” skills.

“Kids need to really learn how to deal with each other, how to be part of society, how to make friends, basically. So, here, I share the reluctance about AI in this particular stance.” - Female, 35-49, Greater London

Participants suggested that, in the medium term, there would be a greater threat to critical thinking skills as the use of AI for learning became more embedded. As in their current experiences, participants suggested that incentives to engage in depth and detail were reduced when the work was being done for them. However, some participants suggested that new sets of skills would become apparent over the medium term. One participant raised the example of the calculator: at its introduction, the basic mental maths skills it replaced were lost or lessened. However, participants thought AI differed in scale from the calculator, given the breadth of information it could provide and the range of tasks it could perform. Some participants emphasised that learning would have to shift to understanding how things worked, rather than how to do them, citing changes to learning in maths and statistics as examples.

Participants also suggested that, over the five-year horizon, we might see the effects of disruption in certain job markets. This would mean that adults might need to be upskilled and reskilled for new work. Some participants highlighted the irony that AI would better equip people to pursue self-directed upskilling and learning while displacing their existing work. At the formal education level, this might also mean refocusing the curriculum around new and upcoming work streams. Some participants felt that we are currently educating children for jobs in today’s workforce, not tomorrow’s, which was a concern. At the tertiary level, this might see reduced numbers of students going to university as the post-graduate job market became more competitive and there was less certainty that career paths would exist or that people would stay within them.

“Because I think that there is no longer a strong enough argument for higher education, as a guarantee of a job for life. There are no jobs for life, essentially, any more. So, if it could be reimagined in, kind of, more of an apprenticeship model, using AI as a way to bridge things.”- Male, 50-64, South East

9.5. Future expectations - 10 years

Over the 10-year horizon, participants expected AI-powered learning to be more integrated into the classroom environment. Some participants expected this to be extreme: each student with their own screen, being instructed by AI teachers, and with entirely personalised learning plans. Participants felt that this might pose a risk to teachers’ jobs, as cost savings were prioritised.

“Everyone is sitting in the class in little pods. Every individual has their own headset and depending on their learning preferences, they just do an essay of 5-grader or an essay of third grader and they do everything in the little pod and then the AI teacher pops onto the screen, like, ‘Oh, you haven’t done it right, my dear. So, have 15 minutes, let’s just redo it and focus on your objectives.’” - Female, 35-49, South East

“I think in secondary a lot of the lessons are going to be AI generated so it’ll be for each subject. Secondary teachers could become null and void.” - Female, 50-64, Scotland

As in the five-year scenario, participants raised concerns about the effects of these developments on personal interaction. Because AI-powered learning tools could only be delivered via screens, participants felt there was limited scope to integrate them into the classroom without increasing screen time. Over the 10-year period, participants worried that basic early social skills could come under significant threat with the increased use of technology in the classroom.

“I think the biggest problem for me with AI relies on technology… we say children spend way too much time on devices already.” - Female, 50-64, West Midlands

Some participants felt that AI would create improved opportunities for lifelong learning, which would increase in importance over the 10-year timeline as the population ages. This might be learning a new foreign language, or new hobbies and skills. Accessing resources for this learning would be easy and cheap with the proliferation of AI tools. Some participants suggested that this might have health benefits.

“Because kids at a young age is one thing, but then throughout their life, I think people do need, and will need even more, continuous learning in different fields…. learning another foreign language, as an example, really improves the cognitive skills of the person and the brain function.” - Female, 35-49, Greater London

9.6. Conditions for acceptability

Participants generally found the use of AI for self-directed learning acceptable, as they expected this to improve outcomes by providing more tailored resources. However, some conditions for acceptability included:

  • Ensuring fair access to these resources: Participants expected that the best AI tools might be reserved for paid or premium plans, and thus not available for the less well-off. Ensuring that access to these tools was fair and equitable in schools was seen as particularly important to get the best educational outcomes for the many. Some participants set this within the context of poor educational performance at the international level. They believed that AI, used in the right way, might be able to close these gaps.

  • Maintaining human interaction: Participants tended to think about this in two ways. First, some noted that the guidance and support of a teacher was key to learning and could not be replaced by AI solutions. Participants agreed that AI should be used to augment or improve human-led learning in schools, not entirely replace it. The second was the worry that increasing AI in schools would lead to more ‘digitisation’ of learning, and thereby have a detrimental impact on the development of social skills in formative years. Participants felt strongly that the use of AI in schools should not take away from talking, interacting or debating with other students. Participants thus recognised a trade-off between preparing children for an increasingly tech-oriented world and ensuring they continue to learn basic social skills. Participants also felt that AI should be integrated only where it adds value, such as in certain subjects like maths and history, but not in subjects where more creativity is required (such as English).

9.7. AI in education – public scenario

The visual and narrative below summarise how the public expect AI in education to develop as well as the skills necessary to support this. Over the next two years, the public expects AI to offer more personalised learning experiences, particularly in self-directed learning. While AI could help teachers plan lessons and students with homework, practical barriers such as financial costs and technological considerations may hinder its implementation in schools.

In the five-year horizon, the public anticipates that AI could increase the “digitisation” of learning, potentially at the expense of socialisation and critical thinking skills. There are concerns that attempts to reduce costs in schools could lead to teachers being replaced by AI-powered learning tools, which could be detrimental to student learning and welfare. Additionally, AI may disrupt job markets, necessitating the upskilling and reskilling of adults, and potentially leading to a refocusing of the curriculum around new and upcoming work streams.

Looking ahead to the ten-year horizon, the public expects AI-powered learning to be more integrated into the classroom environment, potentially replacing teachers and posing risks to social skill development. Some envision extreme scenarios where each student has their own screen and AI-generated lessons, with personalised learning plans. However, AI could also create opportunities for lifelong learning, particularly as the population ages. These themes, and scenarios, will be explored further with a professional audience who work in the education sector in WP6.

Figure 9.1: Visual Scenario depicting AI in education for the general public

Table 9.1: Narrative for AI in education


Narrative
Capability for skills development
Critical and creative thinking: AI risks reducing the incentive to think critically, evaluate, or think in creative ways. AI should be used to improve and personalise learning, not replace it.

Integrity: Avoiding over-reliance on AI and maintaining academic honesty.

Social skills: Balancing AI use with in-person interaction and discussion, particularly in early formal education.

Adaptability: Learning new skills as job markets shift due to AI disruption, and ensuring these are part of curricula.
Skills Overview
High capability in self-directed learning: Self-directed learning platforms are already integrating AI for personalised experiences, and the flexibility of AI tools enables more flexible learning.

Lower capability in formal education: Schools and universities are beginning to utilise AI to help plan lessons and provide student support, but the ability to use AI in the classroom is limited by practicality and cost. Questions remain around teachers’ willingness and schools’ technological resources to implement AI effectively.
Value alignment
Inclusive for all people to interact with AI: Participants raised concerns about the potential for AI to increase inequalities in access to education. They worried that better AI-powered learning features might be locked behind paywalls or only available in better-resourced schools, exacerbating economic divides in access to educational resources.

Human control behind the tools: Participants felt strongly that AI should augment human-led learning, not replace it entirely. Teachers should maintain control over how AI is integrated into the curriculum, ensuring it adds value to learning outcomes while mitigating risks like overreliance on AI for tasks that could erode critical thinking skills.
Opportunity
Personalised learning: AI can tailor educational content and pacing to individual learners’ needs and preferences.

Enhanced self-directed learning: AI can provide immediate feedback and support for learners pursuing independent study.

Efficiency for educators: AI can assist teachers with tasks like lesson planning, grading, and providing targeted student support.

Lifelong learning: AI can facilitate continuous learning and upskilling as workforce needs evolve.

Inclusion: AI-powered tools can better accommodate diverse learning styles and needs, including for those with disabilities.
Challenge
Inequality: Potential for increased educational inequality if advanced AI tools are limited to premium/paid options.

Threats to key skills: AI potentially enables cheating and may erode critical thinking if used as a crutch.

Over-digitisation of learning: AI learning is likely to increase screentime and may reduce important social skills development if used as a substitute for interaction.

Disruption to job markets: Requiring significant upskilling and curriculum changes.

Industry perception: Negative perceptions and risk-averse behaviour holding back adoption / use.
Policy Impact
Equitable access: Ensuring equal access to AI-powered educational tools between income groups, especially in primary and secondary schools.

Maintaining integrity: Guidelines needed on acceptable use of AI in education and examination to maintain academic integrity.

Protecting teaching / educational roles: There will be incentives to replace / reduce teaching roles as AI becomes more capable. Participants were strongly in favour of using AI to assist teachers, not replace them.

Curriculum amendments: Adapting curricula and enabling lifelong learning as AI shifts workforce needs.

10. AI in Travel / Transport

10.1. Overview

Key Findings

  • Participants were generally positive about the potential use of AI in travel and transport, particularly for improvements to local transport systems, lower costs, reduced emissions, and greater access for those who are disabled or live in areas with limited public transport.

  • However, there was significant concern over the safety of self-driving vehicles, with participants valuing human control in potentially life-threatening situations. They also raised concerns about job losses in these industries and the subsequent retraining of those individuals.

  • Within 2-5 years, participants did not expect significant developments in AI technology relating to transport. Some suggestions included AI-powered transport apps becoming more optimised and enhanced, and that usage of these apps by the public would increase.

  • Within a 10-year timescale, participants foresaw the growth of self-driving car use and predicted that these would make the roads safer than human drivers.

  • Policy implications may include addressing the loss of essential life skills, such as driving and hazard perception, and managing potential job losses in these industries.

10.2. Current perspectives on AI in travel / transport

Participants mentioned a range of current uses of AI in travel and transport, including drones, autonomous vehicles such as Teslas, smart motorways, route planning systems such as Google Maps or Waze, and holiday recommendations (such as Lastminute.com). Because the scenarios involved travel, there was often overlap between AI technology and other technological advances. One participant used an AI-powered app which could plan a trip itinerary based on interests, budget and the hours of the day they wanted to explore, saving them time, while another referenced their recent purchase of an electric car which they thought used AI to adjust speed, alert them to oncoming traffic, and steer itself. A third referenced an AI-powered hotel app with a chatbot for requesting staff assistance throughout their holiday.

“They had an app for the hotel, you logged in, it had your room information, and then, there was, like, an iRobot, it was, kind of, a chat-box. So, if you wanted something, like, an extra towel or a pillow or something like that, you’d type it in, and then, it would, sort of, send a message to the staff and they’d bring it to your room.” – Female, 35-49, Yorkshire and the Humber

Participants’ initial concerns in this space centred on safety. They raised concerns about self-driving cars, having heard of incidents where people died or were injured in accidents. There was uncertainty about the hazard perception of these vehicles, and whether this was safer than human driving. However, some participants speculated that in the future this issue may be less prevalent if more people are using self-driving cars, as the cars may be better able to communicate with each other.

“I think if it becomes more widespread and everyone does have a self-driving vehicle, however many decades that will be, I think it will possibly be safer.” – Male, 25-34, Yorkshire and the Humber

When they imagined the future, participants’ core values centred around increasing accessibility and sustainability whilst maintaining human control over AI-powered systems. They identified specific features they would like to see in the development of transportation and travel, including AI being rolled out across trams, buses and trains to improve the network by making journeys more efficient and less stressful.

In terms of skills, participants thought that use of AI to optimise routes and smart roads could be implemented without any new skills being required. In a similar way to AI in the home and with personal devices, they felt that these systems would be designed to be user friendly and intuitive, so that the public do not need to upskill to use them.

“If it’s [AI in travel] designed in a way where it’s implemented well… I think it’ll just be a case of push this button, follow this process map and the thinking is done for you, in a way… I think, it’ll be simpler for consumers.” – Female, 35-49, North West

Conversely, participants were also concerned about the loss of once-essential skills that AI use in this area could cause. One group noted that the increased popularity and use of self-driving cars would mean fewer people with the ability to drive a manual car, having already seen this occur with automatic cars. Alongside this, they discussed the need for policy interventions here, such as adapting driving tests to ensure they cover hazard perception when interacting with self-driving cars.

Additionally, participants raised the issue of reskilling drivers and workers in these fields whose jobs could be displaced by the introduction of AI-powered transport and travel. They felt that, because many industries are now highly skilled and require academic qualifications, any retraining would be expensive and difficult. They also felt that any reskilling provided would be unlikely to lead to jobs paying at a similar level. This could lead to unrest, with one participant drawing a comparison between transport workers and miners in the 1980s.

Overall, participants expected that as AI developed, transportation and travel would become more efficient, leading to knock-on benefits for the public such as reduced costs and increased safety.

10.3. Future expectations - 2 years

Participants expected developments in AI to increase the use of public transport in local areas, at the expense of personal vehicles, and highlighted the benefits of this shift for health and wellbeing. They also expected AI to improve airport procedures over the short-term horizon, with safer security checks and more efficient flight systems.

Efficiency was also expected to improve in terms of traffic, with participants expecting AI to improve navigation systems by offering alternative routes where there is a delay or accident. These improvements were also expected to benefit AI applications which offer route planning, such as Citymapper.

Some participants were sceptical about the pace of AI integration in this area. They expected the development of AI, and the logistics of implementing it, to be more demanding in travel and transport than in other sectors. They suggested that it would take time to conduct further research to determine which steps could be carried out.

“In terms of the measurement side, so I think it will be they’ll be collecting more information in order to determine what the next steps over the 5 to 10 years plus will be.” – Male, 25-34, Yorkshire and the Humber

As such, aside from enhanced performance of AI-powered transport applications, participants did not expect to see large changes in AI use in travel and transport within the next 2 years.

10.4. Future expectations - 5 years

Participants expected navigation applications, like Citymapper, to improve significantly over the next 5 years. They predicted that these apps would be able to capture more data about movement and trends, offering better, more efficient routing.

“I think the tools that will be open to you are going to be far better… things like that [Citymapper] are just going to become more and more prevalent and present as the data that’s being captured improves… It’s not something that’s very easy to extrapolate… you have to capture that data on the ground, so I think things like that are just going to improve” – Male, 25-34, South West

Concerns were raised that, over the 5-year timeline, the introduction of AI into many transport routes would lead to a loss of human involvement. This might negatively impact those with mobility or support needs (e.g. when boarding), particularly on public transport where staff numbers were expected to fall. As with other topic areas, maintaining direct human involvement was felt to be key to mitigating the risks associated with increased AI adoption and automation.

Participants recognised that there may be regional differences in how AI develops, citing the example of smart roads. It was felt that cities like London and Manchester would see a larger impact than more rural areas, such as parts of Scotland.

“I just can’t see how it would be able to gather enough data from the roads up here [Scotland] anyway.” – Non-binary, 35-49, Scotland

The need for training pathways to be implemented within this 5-year timeline to reskill employees in the transport sector was also raised, to prevent unemployment as roles within these industries change. This was recognised as a key policy implication, with participants expecting both the government and companies to take a leading role in delivering initiatives to ensure that employees are reskilled.

Overall, participants expected improvements to be useful and efficient, making a positive impact across their lives in the next 5 years, but were also cautious about the loss of the human element of transport and the risk to jobs in this sector.

10.5. Future expectations - 10 years

Some participants did not expect large-scale changes to the transport industry due to AI between 5 and 10 years. They expected transport applications to become better optimised as AI technology developed but did not foresee any larger changes. For example, some participants were more sceptical about the speed of the rollout of driverless vehicles, doubting mass uptake within 10 years. Instead, they presumed the priority would be the move to electric cars and shifts in public transport. More generally, some of these participants agreed that larger AI-related changes to transport would take 20 or 30 years to materialise, given the infrastructure, financial and societal changes required to implement them.

“So, from a public transport point of view, I feel like even tech, I think probably 20, 30 years. That’s when they might start being mass changes happening, but even 5, 10, I don’t think anything from a public transport view will change drastically.” - Male, 25-34, Yorkshire and the Humber

A subset of participants expected AI to have a larger impact over this timeframe and, as with other areas, blended broader technologies with changes in AI. One example was the idea of VR holiday experiences becoming an option. Participants were generally concerned about growth in immersive AI experiences, highlighting risks of increased human isolation and worsening mental health.

“Virtual travel through AI…So, you don’t actually need to go. You can immerse yourself in a virtual experience of wherever you might want to be. And, of course, that can be marketed without the inconvenience of leaving your home, having to travel. So, I wonder if that might be something that the travel industry considers.” – Male, 50-64, South East

Despite recognising that developments in the transport industry would involve more AI-powered products and infrastructure, potentially requiring the upskilling of software engineers, participants felt that the general public would not require new skills. Instead, they saw the public as passive consumers, and as such the technology they would begin to use over the next 10 years should be user friendly.

10.6. Conditions for acceptability

One of the main trade-offs participants discussed was whether they would want this level of investment to be handled by the government or the private sector. They believed that the government would assure them a level of security and possibly reduced pricing, for example if it ran a driverless car hire system rather than a company like Uber. However, the private sector was seen as more likely to continually invest in the development of the technology, leading to a more effective service.

  • Extensive testing of self-driving cars: Participants referred to the need to be able to trust AI-powered transport such as self-driving cars. To enable this trust, they expected these vehicles to be comprehensively tested in all possible scenarios so that people could rely on them. Since the format of these new vehicles is unfamiliar, participants felt efforts would be needed to ensure people were at ease in them. Alternatively, others felt that trust in AI-powered transport would grow more gradually with experience and exposure, with the increased prevalence of these cars encouraging people to become more accepting of using them.

  • Maintaining human control: Additionally, as in other areas, maintaining human control was important in this topic area. Participants were more accepting of accidents resulting from human error than those made by a machine, and highlighted uncertainty around who would take responsibility in the instance of driverless car accidents.

10.7. AI in travel / transport – public scenario

The visual and narrative below summarise how the public expect AI in travel and transport to develop as well as the skills necessary to support this. Within a 2-year period, participants did not expect to see many drastic changes to the travel and transport industry. They recognised that AI-powered apps would become more advanced in their capabilities and that airport security processes would become more streamlined.

Across the next 5 years, some participants discussed the possibility of driverless buses, trams and tubes coming into existence, with the need for reskilling to reduce job losses here. In terms of skills, they felt that the general public would not need to develop any new skills, and that instead it would be the developers that would advance in their capabilities.

By 10 years’ time, participants envisioned that AI could revolutionise the holiday market by introducing the possibility of virtual travel, allowing for cheaper experiences. However, they were clear about the need to ensure that AI would be introduced to complement the transport industry, rather than replace existing jobs and systems. These themes, and scenarios, will be explored further with a professional audience who work in the travel and transport industry in WP6.

Figure 10.1: Visual Scenario depicting AI in travel and transport for the general public

Table 10.1: Narrative for AI in travel / transport


Narrative
Capability for skills development
Limited skills need: Participants did not think that they currently required any extra skills to use AI-powered transport.

Stakeholders responsible for AI development: Participants believed the skills required in this space sit with stakeholders rather than the public, with the focus for the public being on behaviour change.

User friendly software: They felt that developments would be relatively slow, and so the public would have time to adapt to using new software and AI-powered transport, not requiring any upskilling.
Skills Overview
Limiting the loss of skills: Participants instead wanted to prioritise ensuring skills are not lost in the process of transitioning to AI-powered transport, including manual driving skills.

Reskilling employees: They also recognised the need to reskill those employed in these industries as they foresaw the potential loss of jobs due to AI displacement.
Value alignment
Safety of passengers: Participants were very focused on the importance of ensuring that self-driving vehicles are entirely safe and can interact with human drivers.

Human control: Ensuring that an element of control is maintained in the future, whereby humans have the power to override AI systems; participants were more trusting of human judgement than that of a computer.
Opportunity
Identifying non-AI roles to maintain: These roles can focus on the maintenance of passenger safety and security, which participants highlighted as an area of importance.

Increasing accessibility: These developments may enable those with accessibility requirements to get around more easily and independently.

Positive climate impact: They also believed that AI could enable the transport industry to become more sustainable, through route optimisation and improvements to the efficiency of public transport.
Challenge
Industry acceptance: Preventing strikes and unrest among those who work in this space through meaningful engagement and retraining opportunities.

Safety and security of passengers: Ensuring this is maintained across the transport infrastructure and that human control remains the core value of the industry.
Policy Impact
Potential job losses in these industries: Employees will need reskilling, which will ideally occur through government initiatives.

Driving tests: Adapting tests to account for autonomous vehicles, with a particular focus on hazard perception where human intervention may be required.

11. AI in Healthcare

11.1. Overview

Key Findings

  • Participants’ main interactions with AI in healthcare were through chatbots for symptom checking and early AI-assisted diagnostics. While AI is expected to improve accuracy and speed, concerns arose about the loss of human connection and empathy in vulnerable health settings.

  • Within 2 years, participants anticipate AI integration in robotic surgery, preventative care apps, and improved diagnostics. Healthcare professionals will need training to work alongside AI effectively, while patients may require digital literacy skills to use AI-driven apps and devices.

  • Over 5 years, AI is expected to enhance efficiency in healthcare administration, advance drug manufacturing, and improve patient outcomes. However, concerns emerged about the deskilling of healthcare professionals and potential job losses due to AI automation.

  • In 10 years, participants foresaw significant integration of AI robotics in healthcare and expected improved health outcomes. However, they raised concerns about reduced human control, empathy, and inequalities. Implications for policy include prioritising guidelines for human oversight in AI-assisted healthcare, ensuring equitable access to AI tools across regions and demographics, implementing robust data protection and cybersecurity measures, and adapting medical education to prepare professionals for AI integration.

11.2. Current perspectives on AI in healthcare

Participants currently interacted with AI in healthcare primarily through AI-powered chatbots for symptom checking and fledgling AI-assisted diagnostics. Participants generally found these developments encouraging, expecting them to lead to more accurate diagnoses and faster symptom checking. However, some participants expressed concerns about the loss of human connection and empathy. This was felt to be particularly important in moments of vulnerability, which healthcare typically involves.

“I really objected to it – didn’t want it to be putting [my info] in online… there was an element of lacking human empathy and connection.” - Female, 65+, London

Participants also shared experiences of AI being used in diagnostics and operations, with the potential to lead to more holistic treatment and joined-up care in a disjointed system. Some expected diagnoses by AI, even at its current stage of development, to be more accurate than a human doctor. However, participants generally agreed that the combination of human expertise and AI reduced the chance of missing important details.

“I know in medicine, some AI tools have been shown to be more accurate than doctors at detecting cancer or detecting some other stuff, so I’m hoping that in the future we can replace doctors or radiologists looking at scans and just be able to give it to an AI.” - Male, 25-34, North West

Some participants were able to give specific examples of the current use and early adoption of AI in the NHS. One participant who worked in the health sector highlighted a pilot programme to help reduce outsourcing costs and prioritise client information so that cases were handled based on urgency. They also expressed a general interest in using AI to speed up processes and reduce repetitive tasks more broadly, including in their own work. Another participant was taking part in a medical trial to test AI-powered diagnostics for predicting cancer. They felt encouraged by the research and saw it as proactivity on the part of the healthcare system.

“We’re currently trying to pilot implementing an AI system to analyse CT scans in order to prioritise them. So, rather than a CT scan being done by a radiographer and then waiting to be reported on, the AI system will actually review the scan and prioritise it… it reduces our cost of outsourcing reporting.” - Male, 25-34, Yorkshire and the Humber

Some participants raised the integration of AI into personal health or tracking devices, such as watches that connected to health apps. Participants tended to feel confident in using these devices, as the technical skills required were relatively low. However, a few participants were concerned about their personal data and privacy. One participant noted that this tracking might affect health insurance prices, as providers would have more information on individual risk factors, and observed this already occurring amongst some providers. Some participants raised ethical concerns about this process disadvantaging those who were already sick or vulnerable. There was also some concern that AI might increase health outcome inequalities amongst those without access to technology over the short term.

“There is already a company, or some private healthcare companies will incentivise you, they will give you one of these Apple Watch things and that if you do a certain amount of exercise per week, they will drop your insurance premium. So, already using tracking devices to incentivise cheaper, private healthcare coverage” - Female, 50-64, South East

11.3. Future expectations - 2 years

Participants expected significant developments in AI-powered healthcare solutions even over the two-year horizon. Some suggested that AI might begin to power existing robotic technologies, such as keyhole surgery. Participants had varying levels of confidence in the ability for AI to deliver this type of work, but generally felt that some human oversight would continue to be necessary. Some participants raised concerns about an over-reliance on technology powering healthcare, as this could increase the risks associated with IT blackouts or malicious cyber-attacks.

“I’ve been lucky enough to be in the operating theatre when these robots have been used and there’s always a surgeon watching the robot.” - Female, 25-34, West Midlands

Participants also foresaw a growing trend in personal preventative care through health apps on smartphones and tracking devices. Some were already using these apps, like Zoe, and suggested that this seemed an area ripe for AI integration. They cited the current example of diabetes management and expected this to become more advanced and expand into other areas, such as broader dietary health, over the next two years. Similar to other topic areas, participants did not expect the skills needed to use these apps to be prohibitive or challenging. Participants were more concerned about the potential cost of these AI-powered options, which might make them unaffordable for most consumers.

“I think AI is also coming into the prevention side of things as well as cure… these apps that will monitor what you eat, they make recommendations. I’ve seen some things that do blood sugar work and whatnot, and they make recommendations for your diet on the basis of that.” - Male, 50-64, South East

In terms of diagnostics, participants anticipated improvements, particularly in detecting serious health conditions that might only be showing early signs of development. Participants tended to be more confident in the use of AI for diagnostics than they were for surgery, noting that AI-powered systems could learn and identify issues faster than human doctors, preventing deadly issues from being stuck in backlogs.

“They discovered a pulmonary blockage in the lungs… You get an X-ray, you wait 1 week, 2 weeks for someone to get back to you. When realistically, a computer can just recognise that and deal with it. I just think those sorts of AI improvements, where it will save time for the NHS and it definitely needs it, is always going to be a good thing.” - Male, 18-24, South West

In terms of skills, participants’ expectations for the development of AI-powered healthcare solutions over the next two years suggested a growing need for healthcare professionals to be trained in working alongside AI systems. They will need to understand the capabilities and limitations of these technologies and be able to interpret their outputs effectively.

11.4. Future expectations - 5 years

Participants tended to be positive about the integration of AI into healthcare over the five-year horizon. They expected significant improvements in healthcare outcomes, particularly advancements in drug manufacturing, and the efficiency of booking and management systems. However, concerns were raised about the deskilling of healthcare professionals and the threat to their jobs.

Participants saw significant opportunities for increasing efficiency in organisation and administration within the healthcare system. They believed that AI could free up time for doctors and nurses to focus on patient care, via automated, flexible systems for booking appointments and coordinating treatment pathways. AI might also help with hospital logistics – one participant described a system they were working to integrate into a hospital, which looked for efficient ways to manage hospital beds and which AI could further improve. Some participants noted that these cost savings would be needed to ease increasing pressures on healthcare funding.

“I think any government that’s able to implement AI in a way that eliminates the backlog, and that introduces that kind of processing efficiency, I think, would be hugely impactful and, obviously, very motivated to do that.” - Female, 35-49, North West

However, some participants raised concerns that the pursuit of administrative efficiency and cost savings through AI might put jobs under threat. Some mentioned the possibility of AI replacing receptionists in local GP practices within the next 2-5 years, by enabling more efficient booking systems and prioritisation of tasks. Some participants felt this could create pressure to reskill and retrain a large group of workers.

Participants also raised concerns about the quality of skills for general practitioners (GPs) being under threat over the five-year horizon. They observed that GPs were already becoming overly reliant on technology, including looking up information in front of patients, which made it harder to trust their knowledge. Some participants feared that the increasing use of AI would reduce incentives for healthcare professionals to develop and maintain diagnostic and analytical skills, and highlighted routine GP appointments as potentially deliverable through AI, threatening GP jobs and the wider industry. However, some participants suggested that GPs have a level of flexibility and adaptability that AI might lack, especially when faced with unfamiliar situations, which is important for accurate diagnoses.

“As medical professionals you are taught to not be afraid to question somebody else’s judgement. So, if you can’t even question a computerised judgement because you don’t have the experience, then it’s a downward slope from there.” - Female, 35-49, Yorkshire and the Humber

“A webchat using AI would help with getting GP appointments and prescriptions and not having to wait to speak to somebody, would take 5-10 years before AI is intelligent enough for this to work. It would improve efficiency, and you would be able to speak to a GP at more flexible hours.” - Female, 25-34, East Midlands

Participants also identified drug manufacturing as a key growth area that could be enabled by AI, with particular hope for advancements in cancer treatment. Participants generally felt that the focus of AI would be on drug development for serious illnesses with currently limited treatment plans, such as cancer and dementia. These developments could also save the health service significant money in the treatment of patients over long periods.

11.5. Future expectations - 10 years

Participants had a range of expectations about the impact of AI on healthcare over the next ten years. While they saw the potential for significant advancements in AI-powered robotics and diagnostic tools, they also expressed concerns about the risks associated with reduced human control, empathy, and recognition of people in vulnerable states.

Participants generally expected a significant integration of AI-powered robotics and solutions in the long term, with potential benefits including cost savings and efficiency. This included surgery and automated GP appointments, but also additional services in hospitals, such as food provision, progress updates, and human interaction. A small number of participants suggested almost human-free hospitals, with roles limited to managing robotics and AI-powered tools rather than AI complementing human staff.

Participants raised concerns about the risks around human control and the loss of empathy owing to these developments. They suggested that fully AI-powered solutions might have detrimental effects on mental wellbeing, as they lacked the sense of connection between two people. Participants generally suggested that a focus solely on cutting costs and improving patient outcomes was not thinking holistically enough about what care should look like.

“Having a meal served by a robot is fine, but it takes away one element of human interaction in a hospital that we need.” - Female, 65+, London

As in other areas, participants tended to mix or blend their expectations for the development of AI with other technological advancements. Some predicted significant improvements in healthcare, including AI-powered robotics for attending to patients or delivering complex surgery, such as bionic hearts. They expected this to increase accessibility for those with disabilities or debilitating conditions – particularly with technology such as brain implants, like Neuralink.[footnote 2] Relatedly, some participants noted that AI might be able to power more specific care for those in need, such as universal British Sign Language (BSL) translation to help health staff communicate with the deaf community.

However, other participants were sceptical about the pace of these changes, even if the technology became available elsewhere. They were uncertain about the NHS and government’s efficiency in delivery and expected AI to move faster than the healthcare system’s ability to adapt.

“I think there’s a potential for the AI capability to outgrow the NHS’s ability to actually put it into practice. Just because of the mess it’s in and the fragmentation, how there’s all this potential for everything we’ve talked about, but I’m not sure that the NHS as an organisation is equipped to make it all happen as quickly as it can happen.” - Female, 50-64, South East

Participants also raised concerns about the potential for increased inequalities, especially based on regional health provision. They worried that big city hospitals would get all the advanced technology, while smaller regional ones would be left behind. These inequalities might also be realised at the personal and diagnostic level, with those able to pay for AI-powered healthcare accessing better care, widening existing inequalities in healthcare outcomes. As such, even if AI could help mitigate debilitating conditions, these benefits would not be widely accessible based on cost.

“The inequality is going to be more evident, and I don’t think the NHS can survive as it is, I think there are other healthcare providers, private healthcare provision will be able to create financial partnerships that perhaps the NHS can’t.” - Male, 50-64, South East

Participants also expected significant developments in the analytical and diagnostic power of AI tools, such as scanners that could identify a wide range of illnesses or ailments. Some participants envisioned this as a non-invasive machine that could perform a full body scan. A small number of participants expressed concerns that this might increase pressure to treat, as people would be more worried, thereby increasing healthcare demand. A small proportion of participants expressed concerns about the potential for AI to power developments around gene editing and eugenics, raising significant ethical questions.

“In 10 years’ time I think we’ll begin to hear the mainstreaming of conversations about eugenics, because if we are able to tweak people’s DNA, if we’re able to manipulate things, then there are always going to be people interested in doing that. I think the conversation about eugenics will be mainstreamed one way or another.” - Male, 50-64, South East

Concerning skills, participants tended to focus on the skills of those working in and around healthcare, as they believed the skills needed as a consumer of healthcare would be similar to now. The expected developments in AI-powered healthcare solutions over the next ten years suggested a need for healthcare professionals to adapt to working alongside increasingly sophisticated AI systems while navigating the ethical and social implications of these technologies. They will need to balance the potential benefits of AI with the importance of maintaining human connection, empathy, and control in patient care.

11.6. Conditions for acceptability

Participants were generally excited about the potential uses for AI in healthcare, expecting it to deliver better patient outcomes and save money. However, across each of the 2-, 5- and 10-year scenarios, they suggested four key conditions that would make each more acceptable:

  • Maintaining human control: Participants said ensuring that AI works alongside human healthcare professionals, rather than replacing them entirely, was important as AI became increasingly integrated into healthcare. This was to ensure that each was able to supplement and support the other in the event of mistakes. It would also ensure that medical skills were not diluted or lost as doctors increasingly relied on technology.

  • Maintaining empathy and human connection: Participants worried that an over-reliance on AI and robotics could lead to a loss of the human touch, which is particularly important when people are vulnerable.

  • Equitable access to AI-powered tools and treatments: Participants raised concerns about the potential for AI to exacerbate existing inequalities in healthcare, with more advanced technology being concentrated in wealthy regions or among wealthy individuals. To mitigate these risks, participants suggested that policies and investments would need to prioritise equitable access to AI-powered healthcare solutions on a regional, income, and public/private basis.

  • Maintaining privacy and security: Cyber-attacks and data breaches could be more damaging given the sensitive nature of personal health information. Some participants also expressed unease about the potential for AI to be used to monitor personal health behaviours. To make the use of AI in healthcare acceptable, participants emphasised the need for robust data protection measures and regulations around the use of personal health data.

11.7. AI in healthcare – public scenario

The visual and narrative below summarise how the public expect AI in healthcare to develop as well as the skills necessary to support this. Within the next two years, the public anticipated AI integration primarily via preventative care apps and improved diagnostics. Healthcare professionals will need training to work effectively alongside AI, while patients may require digital literacy skills to use AI-driven apps and devices.

Over the five-year horizon, AI is expected to enhance efficiency in healthcare administration, advance drug manufacturing, and improve patient outcomes. However, concerns emerge about the potential deskilling of healthcare professionals and job losses due to AI automation. Balancing the pursuit of cost savings and efficiency with the need to maintain human expertise and adaptability will be crucial.

Looking ahead to the ten-year horizon, the public foresees significant integration of AI into advanced areas of healthcare – including robotics and gene editing. However, this raises concerns about reduced human control, empathy, and the exacerbation of inequalities based on access to advanced technology. These themes, and scenarios, will be explored further with a professional audience who work in the healthcare sector in WP6.

Figure 11.1: Visual Scenario depicting AI in healthcare for the general public

Table 11.1: Narrative for AI in healthcare


Narrative
Capability for skills development
Understanding use of AI: Public understanding of when their data is being used and how it is being used e.g. decision making or R&D.

Understanding AI outputs: Interpreting and critically evaluating AI diagnostic and treatment recommendations.

Working alongside AI: Integrating AI tools into healthcare practice while maintaining core clinical skills.

Empathy and human connection: Preserving compassion and personal interaction in patient care.

Ethical navigation: Addressing privacy, equity, and social implications of AI in healthcare.
Skills Overview
Healthcare professional training pipeline: Healthcare professionals might need retraining, with a focus on working with AI systems. Additional emphasis might need to be placed on empathy and connection.

Personal health tracking: Participants suggested that these were easy to use and based on existing digital competencies.

Ethics guidelines: Participants felt uncertain about developing ethics for use of AI, particularly in the long-term. Broader AI and ethics skills to be developed in education.
Value alignment
Human control behind the tools: Participants stressed that AI should work alongside human healthcare professionals, rather than replacing them entirely, to ensure that each can support the other in case of mistakes and to prevent the dilution or loss of medical skills as doctors increasingly rely on technology.

Inclusive for all people to interact with AI: Participants raised concerns about the potential for AI to exacerbate existing inequalities in healthcare, with AI-powered solutions available only in private hospitals or wealthy areas.

Privacy and Security of their information: Participants expressed concerns about the potential for cyber-attacks and data breaches, which could be particularly damaging given the sensitive nature of personal health information. Some participants also felt uneasy about the potential for AI to be used to monitor personal health behaviours.
Opportunity
Improve patient outcomes: Earlier diagnosis and decision making; R&D to cure diseases with greater impact for the public.

Increased efficiency: Across multiple strands, including logistics, admin, booking and resource allocation.

Personalised preventative care: Through AI-powered apps and devices.

Enhanced accessibility: For patients with disabilities or mobility issues via AI assistive technologies.

Accelerated drug discovery: Particularly for complex diseases.
Challenge
Maintain human connection and empathy: Using the freed-up time to spend with patients, important for sensitive tasks like delivering bad news.

Deskilling of healthcare professionals: Due to over-reliance on AI, particularly for diagnoses.

Exacerbation of healthcare inequalities: Based on access to advanced AI technologies, across region, income and public/private healthcare delivery.

Data privacy and security risks: Especially concerning sensitive health information, which may be collected externally to inform things like insurance.

Ethical concerns: Particularly concerning advanced AI-enabled developments like gene editing and eugenics.
Policy Impact
Human control and AI influence: Guidelines for maintaining human oversight and control in AI-assisted healthcare, with clear expectations as to where AI is used and where it isn’t.

Equitable access: Measures to ensure equitable access to AI-powered healthcare solutions across regions and demographics.

Robust data protection: Regulations and cybersecurity standards for health data used by AI systems.

Adaptation of medical education and training: To prepare professionals for AI integration. Ensure the preservation of critical human skills and judgment in healthcare delivery.

12. Conclusions and Implications for Policymakers

Participants co-produced priority policy areas which they felt should receive investment to ensure the public has the AI skills they require in the future.

12.1. Key findings

  • Equitable policy: Most participants supported AI solutions that benefited the majority of society, particularly in education and healthcare.

  • Life-long learning: Participants emphasised the importance of preparing individuals of all ages for an AI-driven society through exposure to AI tools via multiple channels, including formal education, community and professional environments.

  • Healthcare investment: Participants were more supportive of investment in AI solutions for healthcare to improve efficiency and conduct research, but stressed the need for AI to work alongside healthcare workers for effective delivery of services and to be properly regulated.

  • Non-partisan regulation: Most participants highlighted the need for a non-governmental body to regulate AI technologies, ensuring they operated for the public good.

  • Industry future uncertainty: Many participants were uncertain about the long-term effects of AI on specific industries and expected a combination of corporate and government interventions to support reskilling and upskilling.

12.2. AI can be used, but the public has to trust that they will benefit from its implementation

Most participants supported solutions where there was a benefit for the majority of society, including through education and healthcare. From an education perspective, participants discussed the importance of individuals of all ages being prepared to live in a society that could drastically change due to AI by establishing ‘life-long learning’. As such, it was important that young people in schools were exposed to AI tools and equipped with the skills to use AI later in life, both inside and outside of work. However, participants also expressed the importance of adults having opportunities for training on AI so that they do not fall behind. This was compared to training on general literacy that could be offered in community spaces alongside formal training for those in the workplace. From their perspective, this multi-pronged solution helped to ensure that individuals, regardless of their age, profession, or location, would have the opportunity to gain skills useful for living with AI.

Healthcare was an area that participants supported investing in due to the wider impact on society. This included the use of AI to improve efficiency and deliver healthcare, as well as to conduct research and development. This view drew on ongoing government policy and media communications highlighting funding challenges and staffing gaps that some participants believed could be alleviated with the use of AI in this area. However, they reinforced their main condition of acceptability in this area (Chapter 11): that AI is used alongside healthcare workers rather than solely relied on, and that it is regulated to ensure appropriate governance.

Across both of these solutions, a minority of participants were concerned about the impact on people who worked in industries which would use less AI, or who lived in more rural parts of the country.

“Education and training from when you’re young to using it, so you know what you’re doing when you get into the workplace, up through colleges, universities, community hubs.” – Male, 50-64, East Midlands

“Giving opportunities to people, because you’re going to get a lot of people who are a bit disenfranchised, aren’t you? With this AI business, so for it to be affected by the population you’ve got to integrate it into education, so it becomes the norm, otherwise you don’t stand a chance.” – Female, 50-64, South East

12.3. Government is expected to regulate and protect AI, and ensure it is deployed effectively

Participants expressed that they would like to see the Government have oversight of AI technologies, to ensure that technology is deployed correctly and that the livelihoods of citizens and the existence of entire industries are considered.

Regulation was a priority for most participants, with them highlighting the need for a non-governmental body to oversee the deployment of AI technologies. While some preferred for this to sit outside of government, others were more concerned with the remit of the organisation.

Additionally, some felt that Government could play a role in overseeing the protection of jobs as roles or industries change with the use of AI. This could include providing AI training, re-training for those whose roles were affected, or working with unions to prevent strikes (e.g. on the railways) and to discuss salaries, in an effort to ensure individuals who are capable of working can remain economically active.

“If you invest in the labour market, you’re going to get the returns back, because people are going to be working, paying tax, money to spend on groceries and everything. So, I feel like that would pay itself off.” – Male, 18-24, East Midlands

“I think they should audit individual AI research and things like that to make sure that they have human interests at heart when they’re programming them.” - Male, 18-24, South West

12.4. The public expects AI development to be trustworthy and to deliver community benefit

Across all six themes, participants were keen that the development of AI had a positive benefit for the public and that they could trust the developers of AI.

Participants expressed the importance of knowing who was behind the development of AI technology. There was concern that if AI was in the wrong hands, then it could be used negatively, for example in war.

Additionally, it was important for participants that AI created positive changes in the community – for example, better health outcomes derived from the potential use of AI in healthcare. While they understood the potential for AI to reach people across society, they also highlighted concerns about achieving this.

“It decreases, like, the barriers and socioeconomic statuses, so everyone gets treated fairly, really.” – Female, 25-34, North East and Cumbria

“There’s a big problem in the country with waiting lists, I realise it’s not going to do surgery for you yet, but a lot of time is spent waiting for consultants to review test results and whatever. So, I think that that’s got a huge potential to improve people’s lives, not hanging about waiting, reduced costs, people aren’t laid-off work, improve efficiency” – Male, 50-64, South East

12.5. Further trade-offs

While some participants recognised the potential benefits of providing incentives to individuals who developed AI technology, others felt the potential revenue generated within these industries meant tax incentives should not be prioritised over aspects (e.g. training) that would have societal benefits. This was particularly important to those who foresaw individuals being left behind without access to the relevant AI training while technology continues to advance and a minority benefit financially.

Similarly, when considering employees and the impact across various industries, some questioned whether government, industry or individuals were responsible for reskilling and upskilling individuals whose jobs may have been impacted by the use of AI in the workplace. Many were happy to accept the use of AI within their industry if they were confident there were safety mechanisms in place to support them.

The potential for an inequality gap among the public was raised by participants, particularly for those on low incomes or who are disabled, who may not be able to afford the costs of new technologies even if they would benefit from them. Some participants questioned whether it was fair for these groups to be prevented from accessing products which could make their lives easier because of their incomes, but accepted that equipping them with the appropriate skills to use any AI-related technology they did have access to could alleviate these concerns.

Additionally, some queried whether investment in a public communication scheme to educate individuals on the benefits of AI and publicise training opportunities was worthwhile. Some felt this would be a way to reach underrepresented groups and engage them in any training offered by the Government, while a few questioned the effectiveness and the cost but were willing to accept that it provided a benefit for the public good.

“So, it’s making sure that it’s, again, accessible… So, are we investing in, it’s not advertising, but it’s almost a way of storytelling that people will engage with?” – Non-binary, 35-49, Scotland

12.6. Conclusions

This research highlighted that participants recognised a role for stakeholders to play in supporting the general public to be equipped with the necessary skills to use AI across the next 10 years.

Table 12.1: What participants said stakeholders could do


Participants said….

Stakeholders, such as DSIT, can…

They accept AI is here to stay and are willing to embrace this new technology

Support them through the provision of education and resources across all ages and backgrounds to ensure equitability

Play a role on the international stage to represent the UK and its interests

They are uncertain about the parameters of AI

Ensure new technology is user-friendly for people of all abilities, includes an AI switch, and clearly identifies where AI is used

Safeguard the public and industries through the regulation of AI.

Educate the public on identifying what AI is, particularly in devices and tools they commonly use

They would like greater confidence about the future based on their understanding of how AI could continue to develop, including its impact on their jobs and future generations

Include industry and the public in key decisions and publicise outcomes, ensuring there is consideration of

Provide support for future generations who are making decisions on their education and career pathways

13. Appendix

13.1. Participant pack content

Participants were provided with a participant pack which included information and stimuli prior to and during their engagement with the workshops. The glossary of terms was provided to ensure they could understand the content of the workshops and discussions. Two sets of stimuli were also provided. For each topic, a summary card was created which detailed how AI is used in that area and offered some examples. Additionally, a persona was created for each topic, which aimed to help participants visualise the impacts of using AI on other people.

13.2. Glossary of terms

This project may use language that you are unfamiliar with so we have compiled a list of words that may come up in the research. This is for reference only so that we are all using the same definitions.

Algorithm - An algorithm is a step-by-step procedure for solving a problem or performing a task, often used in computer programming to specify the sequence of operations for solving a particular problem.

Artificial intelligence - Artificial intelligence (AI) is a branch of computer science that enables computers and machines to simulate human intelligence and problem-solving capabilities. It generally refers to machines or software that can imitate human behaviour, such as problem-solving, learning, playing and communicating.

Big Data - Big Data refers to the large and complex datasets that cannot be easily managed or processed using traditional data processing techniques, requiring the use of advanced data analytics tools and techniques.

Chatbot - A chatbot is a software application designed to simulate human conversation, often used in messaging platforms, customer service, and support systems. You may notice these when shopping online, or as an alternative option to speaking to a customer service representative.

Large language models - Large language models (LLMs) are a type of artificial intelligence model designed to process, understand, and generate human-like text. They are trained on massive amounts of text data using self-supervised learning techniques, allowing them to capture intricate patterns and relationships within the language. Examples of LLMs include ChatGPT and Copilot.

Machine Learning – Machine Learning (ML) is a subset of AI that focuses on the development of algorithms and statistical models that enable machines to learn and improve from experience without being explicitly programmed.

Natural Language Processing – Natural Language Processing (NLP) is a subfield of AI that focuses on the interaction between computers and human language, enabling machines to understand, interpret, and generate human language in a meaningful way.

Predictive text - Predictive text is a feature used in text messaging and word processing that suggests words or phrases as the user types, based on the context of the message and the user’s typing history. You may notice this when you type a message to someone, and your phone suggests the word or words you may want to write.

Skills – A skill is a specific ability or expertise that is learned or acquired through training or experience. It can be developed and improved over time. In this context, skills relate to the ability to use AI in different areas of life and could include technical skills, soft skills, or other categories.

Virtual assistant - AI-powered software that understands and responds to voice commands and completes tasks for the user. This could include Siri or Alexa, either as a tool you can use on your phone, or as a speaker in your home.
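To make the ‘algorithm’ definition above concrete, the following is a minimal, hypothetical sketch (not part of the original participant pack): a short Python procedure, in the spirit of the predictive text entry, that suggests a next word by counting which word most often followed the current one in a sample of text. All names and the sample sentence are illustrative assumptions.

```python
# Illustrative only: a minimal step-by-step procedure (an "algorithm")
# in the spirit of the predictive text glossary entry. Hypothetical
# example, not drawn from the participant pack.
from collections import Counter, defaultdict

def build_model(text):
    """Step 1: count, for each word, which word follows it and how often."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def suggest(model, word):
    """Step 2: suggest the word that most often followed the given word."""
    options = model.get(word.lower())
    if not options:
        return None  # no suggestion if the word was never seen
    return options.most_common(1)[0][0]

sample = "the cat sat on the mat and the cat slept"
model = build_model(sample)
print(suggest(model, "the"))  # -> "cat" (it follows "the" twice in the sample)
```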

13.3. Persona for AI in the home and personal devices

RAJIV, 32, Bachelor

Rajiv is somewhat hesitant about using AI technology. He uses AI primarily at home because it is integrated into his devices, but he does not feel very confident about what it does. His cautious approach to AI stems from his concerns about data privacy and a general discomfort with rapidly changing technology.

Summary

AI has increasingly become a part of everyday life, including in our homes. These devices leverage AI to learn from our habits and make our lives more convenient. About half of the global population acknowledges that AI-based products and services have significantly changed their daily lives over the past few years, indicating its widespread adoption.

Examples

  • Smart speakers, e.g. Alexa or Google Home, which can perform tasks like playing music and setting reminders

  • Predictive text on phones

  • Smart appliances, e.g. fridges able to track grocery usage and recommend shopping lists

13.4. Persona for AI in leisure and entertainment

PAUL, 40, Parent

Paul often has Friday movie nights with his partner and young children. He can find it difficult to find something to watch that will please everyone. He has found that the more shows he watches, the more the recommendations appeal to them.

Summary

AI is changing the way we experience leisure and entertainment in many ways, offering advantages such as personalisation and efficiency. One of the biggest ways AI is used is for personalised recommendations: analysing what you have watched in the past and suggesting new content. AI is also used to make the production of entertainment more efficient by automating tasks like video editing and post-production to save time and money.

Examples

  • Generative AI for images, script generation and other media outputs, e.g. Squibler

  • Virtual reality such as headsets

  • Media apps like Netflix or Spotify - using recommendation algorithms to suggest new shows to watch or songs to listen to

  • Video games such as The Last of Us - to control enemy behaviour and make each playthrough unique.

13.5. Persona for AI in work/career

PRIYA, 21, Graduate

Priya has recently graduated with an accounting degree. She is looking for a new role and has been asked to fill out new online forms and to complete psychometric tests as part of the process. She is finding this a struggle, as it differs from the traditional CVs and personal statements she was asked to prepare at school.

Summary

AI is like having a super-efficient assistant helping you find the perfect job, or the perfect candidate, in a consistent and data-driven way. For employers, AI can screen CVs quickly, schedule interviews automatically and even help make hiring decisions. For job seekers, AI can help you find the perfect job, make your CV stand out and help you practise for interviews.

Examples

  • HireVue: Using AI to analyse video interviews, considering factors such as word choice, speech patterns and facial expressions

  • Pymetrics: This company uses neuroscience games and AI to match candidates with jobs

  • ChatGPT: Using generative AI to write applications

13.6. Persona for AI in education

JOE, 16, Student

Joe has received a special lesson on using generative AI to learn new information. He has used generative AI before, as he is curious about technology, and thinks he understands how to use it much better than his teacher.

Summary

AI is making a big impact on education, making it more personalised and engaging. It’s like having a personal tutor for every student, helping them learn at their own pace and in a way that works best for them. AI has the potential to revolutionise education for teachers and students by enabling personalised learning, automating administrative tasks and increasing accessibility.

Examples

  • Thinkster Math: This app uses AI to provide personalised maths tutoring to students

  • Duolingo: This language-learning app uses AI to adapt to users’ learning styles and provide personalised lessons

  • ChatGPT: using generative AI to summarise information and research topics

13.7. Persona for AI in travel and transport

JESS, 50, Employee

Jess has just started to use an AI travel assistant recommended by a friend. She travels a lot for work and has found that it helps her find routes that avoid traffic, accidents and tolls, as well as recommending locations and routes based on her travel habits.

Summary

AI has great potential to improve efficiency, personalisation and safety in travel and transport. AI-powered traffic management systems optimise traffic flow by analysing real-time data and adjusting traffic signals, reducing congestion. AI systems can also monitor driver behaviour, detecting signs of fatigue or distraction and alerting drivers to potential hazards.

Examples

  • Waze: This community-driven navigation app uses AI to provide real-time traffic updates and optimal routes

  • Smart roads: These use technology such as sensors and cameras to improve the safety and efficiency of road traffic

  • Waymo: Google’s self-driving technology division is a leader in autonomous driving technologies

13.8. Persona for AI in healthcare

BRIAN, 75, Retired

Brian has been trying to get an appointment with his doctor for a while. He has been directed to an online system where he has to answer a series of questions that determine what type of care he will receive. He is unable to speak to any medical professionals and has to wait for them to reach out to him.

Summary

AI has the potential to revolutionise healthcare by enabling early diagnosis, personalised treatment and increased efficiency. AI is becoming more ingrained in our healthcare systems and offers significant benefits to patient care and provider efficiency. As technology continues to evolve, we can expect AI’s role in healthcare to grow.

Examples

  • Zebra Medical Vision: Uses AI to read medical imaging scans and detect diseases early

  • Tempus: Tempus uses AI to personalise cancer treatment based on genetic data

  • Buoy Health: This AI-powered tool asks patients a series of questions to help diagnose their symptoms.

13.9. Analysis

Due to the volume of data produced by this project, analysis was ongoing, enabling us to identify emerging themes and update materials throughout the research process. The fieldwork team used Mural, an online whiteboard, to capture key findings after each session. Additionally, all sessions were audio recorded for transcription, enabling us to accurately capture the views of participants. Moderators also read out any participant feedback posted in the chat so that it could be included in the transcripts. A coding framework was set up in Microsoft Excel to capture the key quotes across each session. The framework was designed to capture findings across the 6 scenarios as well as the timepoints of interest. The coding was completed by our in-house coding team and iteratively reviewed by the project team to ensure accuracy. Additionally, once fieldwork was completed, two analysis sessions were conducted – one with the core team and one with the wider fieldwork team – focusing on the 2, 5 and 10-year time points and what these mean for DSIT.

13.10. The £100 Test

Participants were presented with a list of 10 policy areas which could receive investment to ensure the public had the AI skills required for the future. These policy areas were drafted by the research team based on analysis of the outputs from Workshops 2 and 3. The chair provided an opportunity for participants to amend the suggestions or add to the list. This allowed participants to reflect on themes raised in other breakout groups, the research team to sense-check findings, and priority areas to be translated into tangible policy suggestions. The final list is below, alongside an illustrative value for the investment required for each policy area. Participants were tasked in their breakout groups with identifying which policy areas they would invest in without spending more than £100.

Table 13.1: Policy options for the £100 Test

Each item is listed with its illustrative investment and the rationale for investing in it (why invest in this? what will the impact be?):

  • Investing in AI-driven learning solutions in schools – £20 – Future generations will be equipped to use AI technology in life and work

  • AI boot camps in community hubs for all ages – £25 – Provides support across a wide range of ages and locations, and provides training for people who are out of work

  • Introduce a fund to provide grants to businesses to develop and invest in AI training – £32 – To support employees in developing their technical skills and/or understanding of AI, helping them develop, deploy or use AI in their role

  • Expand the remit of the current regulatory body, the AI Safety Institute – £18 – To monitor and evaluate the impact of AI on labour markets and people, as well as AI models themselves

  • Tax breaks for AI developments – £25 – To encourage developments for the greater good, including devices for the home and driverless cars, to increase accessibility

  • Research grants for AI-driven traffic management systems – £25 – Improves traffic efficiency, reduces congestion and optimises public transportation

  • Roll out AI-powered diagnostic tools to ensure accuracy and fairness – £28 – Leads to better patient outcomes and reduced healthcare costs, reducing the time taken for diagnosis

  • Create a public fund for AI-generated art installations – £20 – Promotes public engagement with AI creativity, fostering cultural appreciation and understanding

  • R&D for AI in medicine – £30 – To aid in the diagnosis and cure of diseases; deemed to have wider societal benefits

  • New regulatory body to cover AI – £18 – Would include representatives from across the country, a cross-section of the population
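To make the budgeting exercise concrete, below is a hypothetical worked example (a minimal sketch, not a selection any group actually made): it takes the illustrative values from Table 13.1, sums one possible basket of four options, and checks that the basket stays within the £100 cap. The shortened item labels are our own paraphrases of the table entries.

```python
# Hypothetical worked example of the £100 Test. The prices come from
# Table 13.1; the shortened labels and the chosen basket are illustrative.
options = {
    "AI-driven learning in schools": 20,
    "AI boot camps in community hubs": 25,
    "Business AI training grant fund": 32,
    "Expand AI Safety Institute remit": 18,
    "Tax breaks for AI developments": 25,
    "AI traffic management research grants": 25,
    "AI-powered diagnostic tools": 28,
    "AI-generated art installations fund": 20,
    "R&D for AI in medicine": 30,
    "New AI regulatory body": 18,
}

# One possible basket, mixing education, community training,
# business support and regulation.
basket = [
    "AI-driven learning in schools",
    "AI boot camps in community hubs",
    "Business AI training grant fund",
    "Expand AI Safety Institute remit",
]

total = sum(options[item] for item in basket)
assert total <= 100, "a basket must not exceed the £100 budget"
print(f"Total spend: £{total} (£{100 - total} unallocated)")  # £95 (£5 unallocated)
```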