Research and analysis

Public attitudes to data and AI: Tracker survey (Wave 3)

Updated 12 February 2024

1. Foreword

Advances in artificial intelligence (AI) and other data-driven technologies have huge potential for public good and have already begun to tackle some of the biggest challenges facing our society. From speeding up the diagnosis of diseases to making transport more efficient and predicting extreme weather events, AI-powered breakthroughs will improve the daily lives of people across the UK. The UK Government’s commitment to establishing the country as a leader in AI will help ensure that UK citizens continue to benefit from the transformational power of AI in the years to come.

To ensure that the benefits of AI are felt across the UK, it is essential to build justified trust in these systems. Understanding public attitudes towards AI and data-driven technologies will ensure that innovation works for everyone and does not propagate or exacerbate inequalities. The recent explosion of generative AI into public view means that it is more important than ever that we understand the public’s hopes, expectations and concerns as these technologies develop.

The Centre for Data Ethics and Innovation (CDEI) leads the Government’s work to enable trustworthy innovation using data and AI; its Public Attitudes to Data and AI (PADAI) Tracker Survey has monitored public attitudes to data-driven technology and AI since 2021. Building on Wave 2, this third iteration of the survey has a greater focus on AI to help us understand the challenges that come with the rapid advancements in the sector.

We understand that public opinions relating to data and AI are greatly nuanced and context dependent. To ensure inclusive public engagement that captures a range of voices, CDEI’s Tracker Survey interviews 4,000 members of the UK public, with an additional 200 interviews with digitally excluded adults to ensure their views are adequately reflected in the work.

The findings from the Tracker Survey, the first of its kind in the world, will continue to underpin the government’s approach to AI and data. We hope that, as with the two previous iterations of the survey, it will also inform work happening across civil society, academia, and industry, emphasising the importance of centring public voices in the wider AI discussion.

Viscount Camrose, Minister for Artificial Intelligence and Intellectual Property

2. Executive summary

1. The public increasingly recognises the potential societal benefits of data use but is sceptical about their equitable distribution.

The public identified the cost of living, health and the economy as the most promising domains in which data can be leveraged for societal good. These mirror what the public perceive as the greatest issues currently facing the UK, highlighting the public’s confidence in the transformative power of data. The public also recognises the opportunity for data to have individual-level benefits, with most people agreeing that data is useful for creating products and services that benefit them as individuals. However, concerns about equity persist. A third of the public disagree that all groups in society will benefit equally from data use, with digitally excluded and older people among the most concerned.

2. The public’s key concern regarding data use remains its security, though there is growing confidence that organisations will face consequences for any breaches.

The primary public concerns about data use are the potential for insecure data storage to lead to hacking or data theft, and the fear that data will be sold to other organisations for profit. These concerns may be reinforced by real-life stories reported in the media. This year’s survey found an increase in respondents recalling media narratives portraying data use in a negative light, with data breaches and leaks standing out as the most memorable incidents. Although these concerns are underscored by a feeling of limited control over personal data, there are signs that this is changing. A growing proportion of the public agrees that, when data misuse occurs, organisations are held accountable.

3. As understanding and experience of AI grows, the public is becoming more pessimistic about its impact on society.

Following the emergence of large language models into public view in late 2022, their use has become relatively widespread; a third of the public report using chatbots at least once a month in their day-to-day lives. In parallel, self-reported awareness and understanding of AI has increased across society, including among older people, people belonging to lower socio-economic grades and people with lower digital familiarity. However, alongside this increased understanding, there are ongoing anxieties linked to AI. A growing proportion of the public think that AI will have a net negative impact on society, with words such as ‘scary’, ‘worry’ and ‘unsure’ commonly used to express feelings associated with it.

4. While AI is expected to produce increased day-to-day convenience and improved public services, apprehensions remain about job displacement and human de-skilling.

The public is optimistic about the potential for AI to streamline everyday tasks, and to improve key public services including healthcare, policing, and education. Nonetheless, a spectrum of risks is recognised by the public. Most notably, there is widespread concern that AI will displace jobs, particularly among non-graduates, and that AI will erode human creativity and problem-solving skills. Recognition of several existential risks relating to AI was also widespread, likely reflecting the increased visibility of this narrative in the months leading up to survey fieldwork.

5. While preferences about whether AI is used are largely situation dependent, risk mitigation strategies can help to alleviate public concern.

When asked to choose between two potential applications of AI, the public were generally more influenced by the specific application than by any associated risks or benefits. The public is particularly positive towards the use of AI to detect cancer and to identify people in need of financial support. In contrast, the public is averse to AI being used to mark students’ homework or assess a person’s risk of failing to repay a loan. The context dependent nature of AI preferences may underlie the mixture of unease and positivity seen in other areas of the survey. Encouragingly, the survey also shows that clear and effective risk mitigation strategies can reduce the impact of risks on the public’s appetite for AI use.

6. Those with very low digital familiarity feel less in control of their data, but increasingly recognise the benefits of data use and trust that organisations are held accountable for data misuse.

Members of the public with very low digital familiarity tend to feel they have less control over their data compared with the broader UK population, though this gap has been closing over time. Within this group, there is increasing confidence that organisations are being held accountable for data misuse, and a growing acknowledgement of the constructive impacts that data can have at both a societal and individual level. However, despite these shifts, those with very low digital familiarity remain relatively pessimistic, with the majority anticipating a negative or neutral impact of AI on society.

3. Introduction

The Centre for Data Ethics and Innovation (CDEI)’s Public Attitudes to Data and AI (PADAI) Tracker Survey monitors public attitudes towards data-driven technologies, including artificial intelligence (AI), over time. This report summarises findings from the third wave (Wave 3) of research and identifies how public attitudes have changed since the previous waves (Wave 1 and Wave 2). The research was conducted by Savanta on behalf of the CDEI.

The research uses a mixed-mode data collection approach comprising online interviews (Computer Assisted Web Interviews - CAWI) and a smaller telephone survey (Computer Assisted Telephone Interviews - CATI) to ensure that those with very low digital familiarity are represented in the data. 

Key information on each survey wave, including survey mode, respondent profile, sample size and fieldwork dates can be found in Table 1. Full details of the methodology, including notes on interpreting the data in this report, are provided in the Methodology section.

Table 1: Overview of survey Waves 1, 2, and 3 

Wave 1 (Dec 2021/Jan 2022)

  • CAWI (online sample): demographically representative sample of UK adults (18+); 4,250 interviews; fieldwork 29 November to 20 December 2021
  • CATI (telephone sample): adults with ‘very low digital familiarity’; 200 interviews; fieldwork 15 December 2021 to 14 January 2022

Wave 2 (Jun/Jul 2022)

  • CAWI (online sample): demographically representative sample of UK adults (18+); 4,320 interviews; fieldwork 27 June to 18 July 2022
  • CATI (telephone sample): adults with ‘very low digital familiarity’; 200 interviews; fieldwork 1 to 20 July 2022

Wave 3 (Aug/Sept 2023)

  • CAWI (online sample): demographically representative sample of UK adults (18+); 4,225 interviews; fieldwork 11 to 23 August 2023
  • CATI (telephone sample): adults with ‘very low digital familiarity’; 209 interviews; fieldwork 15 August to 7 September 2023

4. The value of data to society

4.1 Summary

The value the public places on data collection and analysis has increased in the last year, with 57% of the public recognising the personal benefits of data and 44% acknowledging its societal value. There is alignment between the public’s perception of the key areas where data can contribute to societal good - health, cost of living, and the economy - and public perceptions of the greatest issues facing the UK, demonstrating the public’s belief in the power of data. However, there is still scepticism around the equal distribution of these benefits across society, with only 33% in agreement that all groups reap the benefits equally.

4.2 The value of data use to society

Respondents were presented with a series of statements relating to the value of data use and asked to indicate the extent to which they agreed or disagreed with each. As shown in Figure 1, the public are more likely to recognise the individual-level benefits of data than the societal-level benefits. However, over the last year, there has been a positive shift in the public’s view of the value of data at both a societal and individual level.

The majority of the public (57%) agrees that data is useful for creating products and services that benefit them as individuals (increased from 53% at Wave 2 and 51% at Wave 1)[footnote 1], with only 13% disagreeing. A smaller proportion of the public (44%) agrees that data collection and analysis is good for society, with 33% remaining neutral on this matter. However, this still represents a positive shift.

Figure 1: Attitudes towards the statements “Data is useful for creating products and services that benefit me” and “Collecting and analysing data is good for society” over time

Q12. Summary Table Net Agree: Please indicate how much you agree or disagree with each of the following statements? BASE: All online respondents: November/December 2021 (Wave 1), n=4,250, June/July 2022 (Wave 2), n=4,320, August 2023 (Wave 3), n=4,225

Those with high digital familiarity[footnote 2] are more likely than those with low digital familiarity to agree that data is useful for creating products and services that benefit them (66% and 37% respectively) and that data collection and analysis is good for society (51% and 32% respectively).

Moreover, regression analysis revealed that younger people and those from higher socio-economic grades (ABC1) were more likely to recognise both the individual-level and societal-level benefits of data use than older people and those in lower socio-economic grades (C2DE) respectively (Annex, Models 1 and 2).
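To illustrate the kind of model referenced here and in the Annex (a minimal sketch, not the survey’s actual specification), a logistic regression can relate net agreement to demographic predictors. All data, variable names, and effect sizes below are simulated for illustration.

```python
# Illustrative sketch only: a logistic regression of the kind referenced in
# the Annex, predicting agreement that data use is good for society from age
# group and socio-economic grade. All data and effect sizes are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 4000
df = pd.DataFrame({
    "age_group": rng.choice(["18-34", "35-54", "55+"], size=n),
    "seg": rng.choice(["ABC1", "C2DE"], size=n),
})
# Simulate higher odds of agreement for younger and ABC1 respondents,
# mimicking the direction (not the magnitude) of the reported effects.
logit_p = (-0.2
           + 0.5 * (df["age_group"] == "18-34")
           + 0.25 * (df["age_group"] == "35-54")
           + 0.3 * (df["seg"] == "ABC1"))
df["agrees"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Reference categories (55+, C2DE) chosen so positive coefficients read as
# "younger / higher-grade respondents are more likely to agree".
model = smf.logit(
    "agrees ~ C(age_group, Treatment('55+')) + C(seg, Treatment('C2DE'))",
    data=df,
).fit(disp=False)
print(model.summary())
```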

To provide further insight into how data use can provide societal-level value, respondents were asked where they saw the greatest opportunity for data use to benefit the public. Health (21%), the cost of living (18%), and the economy (8%) emerged as areas where the public sees the most promise for data-driven improvements. As demonstrated in Figure 2, these perceived opportunities coincide with what the public perceives as the most important issues facing the country. This finding highlights the public’s confidence in the transformative power of data. Despite this, the public sees limited potential for data to improve other tested areas like crime, housing, education, and inequality.

Figure 2: The most important issues facing the country and opportunities for data use, August 2023 (Showing any issues for the UK selected by at least 5% of respondents; axes split along median value)

Q15b. Which of the following do you think are the most important issues facing the country for you personally at this time? and Q16. In which of these issues, if any, do you think the use of data presents the greatest opportunity for making improvements that benefit the public in this country? BASE: All online respondents: August 2023 (Wave 3), n=4,225

4.3 The equity of benefits of data use

Despite the increased public recognition of the benefits of data at both individual and societal levels, the public expresses doubt that these advantages are felt equally across society. As shown in Figure 3, only a third (33%) agree that all groups in society benefit equally from data use, with a similar proportion (32%) disagreeing with this statement. There has been an increase in the proportion of participants that agree with this statement since Wave 2 (26%).

Figure 3: Attitudes towards the statement “All groups in society benefit equally from data use” over time

Q12. Summary Table Net Agree: Please indicate how much you agree or disagree with each of the following statements? BASE: All online respondents: November/December 2021 (Wave 1), n=4,250, June/July 2022 (Wave 2), n=4,320, August 2023 (Wave 3), n=4,225

There is a clear positive relationship between digital familiarity and perceptions of equity. People with high digital familiarity are the most optimistic about the equal distribution of data benefits across society; 38% agree that all groups will benefit equally. In comparison, those with medium (28%) or low (25%) digital familiarity are substantially less likely to agree. Concerns are especially pronounced among those with very low digital familiarity (who were interviewed via the telephone survey), of whom 46% disagree that all groups in society benefit equally from data use, compared with 32% of the public overall.

4.4 Trust in data actors to benefit society

While the public increasingly recognises the benefits of data use for society, trust in data actors to achieve this societal benefit varies and remains strongly related to trust in those actors to use data effectively, safely, transparently and with accountability (see Table 2). The NHS continues to inspire the highest public trust to use data to benefit society (75%), followed by university researchers (63%) and pharmaceutical companies (62%). More modest levels of trust are seen with respect to the government and large technology companies (41% each), whereas social media companies inspire the lowest levels of trust (28%).

No clear pattern emerges to suggest a difference between levels of trust in the public and private sectors to use data to benefit society. Researchers in universities and pharmaceutical companies are trusted to a similar extent (63% and 62% respectively), as are the government and big technology companies (41% each). Trust in actors to use data to benefit society has remained relatively unchanged since Wave 2, except for regulators (51% in Wave 2 compared with 56% in Wave 3), in which trust has increased, and university researchers (66% in Wave 2 compared with 63% in Wave 3), in which trust has decreased. It is important to note that, to aid respondent understanding, the Wave 3 question wording was changed to include examples of specific regulators. This may have contributed to the observed increase in trust levels.

Table 2: Trust in organisations, and in their actions with data (Showing % Sum: Trust)

Q14. To what extent, if at all, do you trust the [organisation] to…? BASE: Approximately half of all online respondents per organisation shown, August 2023 (Wave 3), n = 2,059 – 2,171

5. The risks of data use to society

5.1 Summary

The greatest public concern about data use remains its security, with a particular focus on the potential for insecure storage to lead to hacking or theft, and apprehensions about data being sold for profit. These concerns are possibly influenced by news stories in the media, with an observed increase in respondents recalling stories portraying data use negatively, and with data breaches and leaks being particularly memorable themes. Despite these concerns and a prevailing sentiment of limited control over personal data, the public increasingly believe that organisations are held to account when they misuse data, indicating a growing confidence in the accountability mechanisms in place.

5.2 Concerns regarding the use of data in society

Respondents were asked what they perceived as the greatest risks of data use in society; results are provided in Figure 4. The most frequently selected risks are insecure data storage leading to possible hacking or theft (57%), and data being sold to other organisations for profit (55%).

Concerns about data risks vary across the population. Older individuals, particularly those aged 55 and above, are more likely to express worry about data not being held securely (69%) and being sold to other organisations for profit (64%), compared with 35-54-year-olds (56% and 54% respectively), and 18-34-year-olds (42% and 44%). A larger proportion of those with high (60%) or medium (57%) digital familiarity are concerned about data security than those with low digital familiarity (46%). In addition, more women (59%) are worried about data security and hacking than men (56%), but a higher proportion of men (37%) consider data being used for surveillance purposes a risk than women (28%). 

Figure 4: Greatest risks for data use in society (Showing % selected each option)

Q17b. Which of the following do you think represent the greatest risks for data use in society? BASE: All online respondents: August 2023 (Wave 3), n=4,225

The high level of concern regarding data security is underpinned by a feeling among two in five members of the public (40%) that they lack control over who uses their data and how. Although public confidence that individuals have control over their data is divided (35% agree while 40% disagree), agreement that individuals have control has increased since previous waves (from 29% in Wave 2 and 33% in Wave 1). 

Figure 5: Attitudes towards the statement “I have control over who uses my data and how” over time

Q12. Summary Table Net Agree: Please indicate how much you agree or disagree with each of the following statements? BASE: All online respondents: November/December 2021 (Wave 1), n=4,250, June/July 2022 (Wave 2), n=4,320, August 2023 (Wave 3), n=4,225

5.3 Public recall of data usage media stories

Respondents were asked about their exposure to news stories regarding data use in the last six months, through articles, TV, or radio. Around half of the public (48%) reported having seen a news story about data, an increase from 40% in Wave 2 and 37% in Wave 1. However, as the salience of stories about data has increased, there has also been a shift towards a more negative sentiment in the narratives recalled by respondents. As shown in Figure 6, the majority (65%) of those who recalled news stories reported that data use was portrayed negatively, an increase since Wave 2 (53%) and Wave 1 (37%). This suggests that the risks of data use are becoming increasingly prominent in the public’s consciousness, and that the media may contribute to these negative associations.

Figure 6: The recalled presentation of data in news stories over time (Showing % selected each option)

Q11. Overall, do you think this news story presented the way data was being used positively or negatively? Base: All respondents who have read, seen or heard a story about data being used recently, and could remember the story: November/December 2021 (Wave 1), n = 1,499, June/July 2022 (Wave 2), n = 1,678, August 2023 (Wave 3), n = 1,303

Among respondents who could recall the subject of a relevant media story, data breaches or leaks (36%) and misuse of data by the government, companies, or individuals (10%) were the subjects mentioned most frequently (see Figure 7). These themes closely resemble the most frequently selected fears relating to data use (see Figure 4). Frequently mentioned negative media stories included the Police Service of Northern Ireland data breach, and the Electoral Commission data breach. Both stories were widely reported in August 2023 while Wave 3 fieldwork was being conducted. Where positive news stories were recalled by respondents, these often emphasised the role of data in medicine and research. For example, respondents recalled stories about the ‘NHS using patient data to help medical research’ and ‘Collecting data within the NHS about a person to get a better picture of what’s wrong with them’. These responses often mention words such as “protection” and “analysis”, and organisations such as Google, the NHS, and legal institutions.

Figure 7: Public recall of data-related news stories (Showing all themes mentioned by those who recall seeing a news story about data in the past six months)

Q10. In a couple of sentences, please could you briefly tell us what the story you saw about data was about? Base: All CAWI respondents who say they have seen a news story about data in the last 6 months: August 2023 (Wave 3), n=2014 (coded responses to an open text question).

5.4 Perception of accountability of data actors

Despite growing concerns about data misuse and a perceived lack of control over the use of personal data, a growing proportion of the public agree that data actors are being held to account. Just under half of the public (45%) agrees that organisations that misuse data are held accountable, an increase from 41% in Wave 2, but just over a third (35%) still disagrees.

Younger respondents (50% of 18-34s and 48% of 35-54s) and those from lower socio-economic backgrounds (C2DE, 46%) are more likely to agree that organisations are held to account in the instance of data misuse, compared with those aged 55+ (38%) and respondents from higher socio-economic grades (ABC1, 43%).

Figure 8: Attitudes towards the statement “When organisations misuse data, they are held accountable” over time

Q12. Summary Table Net Agree: Please indicate how much you agree or disagree with each of the following statements? BASE: All online respondents: June/July 2022 (Wave 2) n=4,320, August 2023 (Wave 3), n=4,225

6. Attitudes towards AI

6.1 Summary

The use of large language models is relatively widespread, with a third (34%) of the UK public using chatbots on at least a monthly basis in their personal life and a quarter (24%) doing so for work. Reflecting this, self-reported awareness and understanding of AI among the UK public has increased since last year across most groups in society, including older individuals, those belonging to lower socio-economic grades, and those with lower digital familiarity. The vast majority of the public (95%) report having heard of AI and a considerable proportion (66%) report being able to give at least a partial explanation of what AI is. Alongside the growing understanding and use of AI, there are ongoing anxieties linked to the technology, with ‘scary’, ‘worry’, and ‘unsure’ being the most common feelings expressed about it.

6.2 Use of large language models

To understand the behavioural uptake of AI by the public, respondents were provided with a short explanation of large language models (referred to in the survey as ‘chatbots’) before being asked how frequently they had used them in the previous three months. A third of the public (34%) report having used chatbots at least once a month in their day-to-day lives and a quarter (24%) report doing so for work purposes. However, there remains a large proportion of the population that has not used chatbots for either personal (44%) or professional (64%) purposes in the previous quarter.

Regression analysis was used to identify the demographic dimensions that predict the behavioural uptake of chatbots. Female respondents, older respondents, and non-graduates were less likely to report frequent usage of chatbots in their personal life compared with male respondents, younger respondents, and graduates respectively. Less frequent usage was also identified among respondents who did not trust large technology companies (Annex, Models 8-9).

6.3 Awareness and understanding of AI

Self-reported awareness and understanding of AI[footnote 3] is very high, having increased substantially over the last year. As shown in Figure 9, the vast majority of the public have now heard of AI (95% compared with 89% in Wave 2). Furthermore, two in three respondents (66%) now report that they could provide at least a partial explanation of what AI is, an increase from 56% in Wave 2.

The trend of increased awareness and understanding of AI is apparent across nearly all demographic groups, including those with lower baseline levels of awareness. For example, although those from lower socio-economic grades are less likely to be aware of AI (C2DE, 93%) than those from higher socio-economic grades (ABC1, 96%), both those from lower and higher socio-economic grades have seen increases in awareness over the last year (by four percentage points and eight percentage points respectively).

While there appears to be an overall increase in awareness and understanding of AI, disparities still remain. For example, those with low digital familiarity are less likely to report being aware of AI than respondents with high or medium digital familiarity (89% compared with 96% and 95% respectively). Similarly, older respondents are less likely than younger respondents to report that they could explain, at least partially, what AI is (74% of those aged 18–34, compared with 66% of those aged 35-54 and 62% of those aged 55+).

Figure 9: Awareness of AI over time (Showing % selected each option)

Q21. Have you ever heard of the term Artificial Intelligence (AI)? BASE: All online respondents: June/July 2022 (Wave 2) n=4320, August 2023 (Wave 3), n=4225

6.4 Sentiment towards AI

Despite the notable increase in understanding of AI, the public’s anxieties associated with AI were brought to the surface when respondents were asked to enter a single word that best captured their feelings about AI. Negative and neutral associations with AI are far more prevalent than positive associations, illustrating the level of concern felt towards AI. ‘Scary’ was by far the most common term provided (n=423), followed by ‘worry’ (n=240). The third most common association expressed a lack of knowledge or feeling of being ‘unsure’ (n=209). The most common unambiguously positive terms provided were ‘excited’ and ‘good’ (both n=53). Responses to this question have been visualised in Figure 10.

Figure 10: Word cloud of public sentiment towards AI by UK adults, Wave 3 (visualising the top 50 most often mentioned words)

Q22. Please type in one word that best represents how you feel about ‘Artificial Intelligence’. Base: All online respondents who said they had heard of AI in August 2023 (Wave 3) and left a valid response, n=3,453

To an extent, results from Wave 2 and Wave 3 are similar, with ‘scary’ and its synonyms being the most commonly cited word in both waves, while ‘worry’ and ‘unsure’ also continue to be common choices. The principal difference is that in Wave 2 ‘scary’, while the most commonly chosen word, was most prevalent by a margin of less than 100. In Wave 3, ‘scary’ is the most prevalent word by a margin of over 180. In addition, references to the word ‘robot’ are less frequent in Wave 3 compared with Wave 2 (ranked 2nd in Wave 2 compared with 5th in Wave 3).

Figure 11: Word cloud of public sentiment towards AI by UK adults, Wave 2 (visualising the top 50 most often mentioned words)

Q22. Please type in one word that best represents how you feel about ‘Artificial Intelligence’. Base: All online respondents who said they had heard of AI in June/July 2022 (Wave 2) and left a valid response, n=3,132

Analysis demonstrates that those with positive expectations about the impact of AI show openness to AI’s potential, while those with negative expectations about the impact of AI mainly associate AI with fear. We compared the frequency of the 50 most common terms written by those who believe AI will either positively or negatively impact society[footnote 4]. What best distinguishes the two groups is not the presence or absence of fear and worry, but the variety of attitudes held by each group. While those who think that AI will have an overall positive impact on society frequently wrote ‘scary’ (n=53), this appeared in relatively equal measure alongside positive words such as ‘interest’ (n=42), ‘future’ (n=37) and ‘excited’ (n=31). In contrast, fear dominates the responses of those who suggest that AI will have an overall negative impact on society. More than twice as many people wrote ‘scary’ (n=140) as wrote the next most common term (‘worry’, n=62).
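As context for how counts like these can be produced (a minimal sketch; the survey’s actual coding of open-text responses is not described in this report), single-word responses can be normalised and tallied. The responses and variant map below are illustrative.

```python
# A minimal sketch of tallying one-word responses for a word cloud, assuming
# a simple normalisation that groups close variants (e.g. "scared"/"scary").
# Responses and the variant map are illustrative, not the survey's coding.
from collections import Counter

responses = ["Scary", "scared", "worry", "Worried", "unsure", "excited",
             "good", "scary", "worrying", "unsure"]
variants = {"scared": "scary", "worried": "worry", "worrying": "worry"}

normalised = [variants.get(word.lower(), word.lower()) for word in responses]
print(Counter(normalised).most_common(50))
# e.g. [('scary', 3), ('worry', 3), ('unsure', 2), ('excited', 1), ('good', 1)]
```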

7. Perceived impact of AI

7.1 Summary

While the majority of the public has a neutral view of AI’s likely impact on society, levels of pessimism have increased since last year. The public largely expects AI to have a positive impact on streamlining everyday tasks and enhancing key public services such as healthcare, policing, and education. However, the perceived impact of AI on the labour market and how fairly people are treated is more contentious, with the most common concerns being job displacement and de-skilling leading to diminished human creativity and problem-solving skills.

7.2 The impact of AI on society

The public has mixed attitudes towards AI’s future impact on society, with relatively small proportions adopting extreme positive or negative viewpoints. Figure 12 details how respondents rated the overall impact that AI will have on society on a scale from 0 (very negative) to 10 (very positive). Over half (58%) expect the societal impact of AI to be neutral (score 4-7), while 14% predict a positive impact (score 8-10) and a quarter (25%) predict a negative impact (score 0-3). The proportion of those who expect AI to have a negative impact on society has increased by five percentage points (from 20%) since Wave 2.

Figure 12: The perceived impact of AI on society over time

Q23b. On a scale from 0-10 where 0 = very negative impact and 10 = very positive impact, based on your current knowledge and understanding, what impact do you think Artificial Intelligence (AI) will have overall on society? BASE: All online respondents: June/July 2022 (Wave 2), n=3,838, August 2023 (Wave 3), n=4,008

Despite the relative neutrality of opinions concerning the future impact of AI on society, differences between groups do emerge. Results from regression analysis indicate that female respondents, non-graduates, and respondents who are unable to explain AI are less likely to predict a positive impact of AI compared with male respondents, graduates, and respondents who felt able to provide some explanation of AI (Annex, Model 10).

7.3 Expected positive and negative impacts of AI

When asked about the degree to which they expect the introduction of AI to yield positive or negative outcomes across different situations, respondents reveal diverse expectations (see Figure 13). Just over half think that AI will positively influence the ease of day-to-day tasks (52%) and healthcare for themselves and their family (51%). A substantial proportion think AI will have a positive impact on crime prevention and detection (44%) and education (39%). However, more negative views are expressed when it comes to wider societal impacts. More than two in five (43%) foresee a negative influence of AI on job opportunities for people like them, with only around a quarter anticipating either a positive (22%) or neutral (27%) impact. In addition, 31% of UK adults say AI will have a negative impact on how fairly people are treated in society, compared with 22% expecting a positive impact and 38% foreseeing a neutral impact.

Figure 13: Expected impact of AI across different situations

Q25b. Summary Table: To what extent do you think the use of Artificial Intelligence will have a positive or negative impact for the following types of situations? BASE: All online respondents: August 2023 (Wave 3), n=4,225

7.4 Anticipated risks from the use of AI

When asked about risks AI poses to society, the public do not indicate a single dominant concern, but a variety of potential risks are identified by relatively high proportions of respondents (see Figure 14). Concerns about the impact of AI on the labour market are clear; job displacement due to AI (45%) and the potential loss of human creativity and problem-solving skills (35%) are the two most frequently selected risks. Existential risks, including humans losing control over AI (34%) and AI being used for cyber-crime and terrorism (23%) are also prevalent. In contrast, smaller proportions of the public express concern about potential near-term risks, including AI making decisions that humans can’t understand or explain (23%), and AI being biased and leading to unfair outcomes (14%).

There are clear demographic patterns that emerge in the perception of risk. For example, non-graduates (48%) are more likely than their graduate counterparts (41%) to view job displacement as one of the greatest risks of AI. Younger people, aged 18-34, are more likely to stress the risk that AI will negatively impact people’s mental health and wellbeing (16% compared with 10% of those aged 55+) and that biased AI will lead to unfair outcomes (16% compared with 13% of those aged 55+).

Figure 14: The greatest risks from using AI in society (Showing % selected each option)

Q39. Some people have suggested that there may be risks to society from using Artificial Intelligence (AI). Which of the following, if any, do you think represent the greatest risks from the use of AI in society? Please choose up to three. BASE: All online respondents: August 2023 (Wave 3), n=4,225

7.5 Need for AI governance

Respondents were asked to select up to three sectors in which they think it is crucial for governments to carefully regulate AI to avoid negative outcomes for users. Healthcare (29%), the military (27%), and banks and finance (25%) are the most frequently selected sectors. Compared with Wave 2, the proportion of respondents who think AI in healthcare needs careful regulation has decreased (down two percentage points from 31% in Wave 2), while the proportion selecting banks and finance has increased (up three percentage points from 22% in Wave 2).

Digital familiarity also influences the choices of the public. Those with high digital familiarity are more likely to indicate that self-driving cars (22% compared to 19% with medium and 11% with low familiarity), education (17% compared to 14% and 12%), and hiring and recruitment (14% compared to 10% and 9%) need careful regulation by governments. 

As shown in Figure 15, the sectors in which regulation is most valued align with the sectors in which the public foresee AI having the greatest positive impact. Together, these findings suggest that, even in sectors where the public is optimistic about the impact of AI, there remains an awareness of the potential risks. In these cases, the public think governance of AI is necessary to realise the benefits, while mitigating risks.

Figure 15: The most important areas in AI for the government to carefully manage by areas where AI will have the biggest positive impact over the next decade (axes split along median value)

Q32. SUMMARY TOP 3: Which of the following areas do you think is important that governments carefully manage to make sure the use of AI does not lead to negative outcomes for users? Q33b. SUMMARY TOP 3: In which of the following areas, if any, do you think the use of Artificial Intelligence (AI) will have the biggest positive impact in society over the next 10 years? BASE: All online respondents: August 2023 (Wave 3), n=4,225

8. Preferences for how AI is used

8.1 Choice-based experiment (Conjoint)

To study people’s preferences for how AI is used, we incorporated a conjoint experiment within the online survey. What follows is a brief description of the conjoint design and its outputs, to aid interpretation of the results presented below. For a full description of the method, please see the Methodology section.

Respondents were presented with five pairs of scenarios and asked to pick the one they preferred. The scenarios differed from one another in terms of the features of the AI presented. The results of the experiment show which of these features influenced respondents to pick one AI scenario over another. A full list of all features can be found in the Annex. An example of how pairs of scenarios were presented to respondents is illustrated in Figure 16.

Figure 16: Example of a possible scenario pairing, as presented to respondents in the conjoint experiment

The features of the AI presented were grouped into the following overarching categories, which we call attributes:

  1. Attribute 1: AI application. This refers to what the AI would be used to do. An example feature in this attribute group is ‘Identify people who need financial support’ (see Scenario 1 in Figure 16).
  2. Attribute 2: The benefit of using the AI, relative to other methods. An example feature in this attribute group is ‘More accurate than alternative approaches’ (see Scenario 1 in Figure 16).
  3. Attribute 3: The risk associated with the use of AI and the associated governance mechanism. Example features in this attribute group are ‘people won’t know whether AI is being used’ and ‘it is made clear when AI is being used’ (see Scenario 1 in Figure 16).

By analysing which scenarios were chosen by respondents across all pairings, we can produce two key indicators. First, we can assess the relative importance of each attribute category as a driver of the general public’s AI preferences. Second, we can see whether individual AI features make respondents more or less likely to select an AI scenario.

In addition, we can assess the impact of risk mitigation strategies on public opinion. Half of respondents saw risk features presented without a governance mechanism (Model One), while the other half saw both a risk and a governance mechanism designed to mitigate that risk (Model Two), as shown in Figure 16. By comparing the results of Model One with Model Two, we can gauge the impact these governance mechanisms have on the public’s preferences.
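As a rough illustration of how the two indicators described above can be derived from choice data (a minimal sketch, not the survey’s actual estimation procedure; the column names, data, and importance formula are assumptions), one common approach scores each feature level by how often scenarios containing it are chosen, and measures an attribute’s importance as the normalised range of its level effects.

```python
# Illustrative sketch: deriving level effects and attribute importance from
# conjoint choices. Assumes one row per scenario shown, with a flag for
# whether it was chosen; data and method are assumptions for illustration.
import pandas as pd

tasks = pd.DataFrame({
    "application": ["detect cancer", "mark homework", "detect cancer",
                    "assess loan risk", "identify financial support",
                    "mark homework"],
    "benefit": ["more accurate", "faster", "faster",
                "more accurate", "faster", "more accurate"],
    "risk": ["data stolen", "bias", "bias",
             "data stolen", "bias", "data stolen"],
    "chosen": [1, 0, 1, 0, 1, 0],
})

def level_effects(df, attribute):
    # How much more (or less) often scenarios with each feature are chosen,
    # relative to the 50% baseline of a forced pairwise choice.
    return df.groupby(attribute)["chosen"].mean() - 0.5

# Importance of an attribute = range of its level effects, normalised so the
# three attributes sum to 100% (as in Table 3).
importance = {attr: level_effects(tasks, attr).pipe(lambda s: s.max() - s.min())
              for attr in ["application", "benefit", "risk"]}
total = sum(importance.values())
print({attr: round(100 * val / total, 1) for attr, val in importance.items()})
```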

8.2 Attribute analysis

When respondents express a preference for one AI scenario over another, they mostly do so based on the specific application of the AI. The AI application is responsible for 60.9% of respondent decision-making in Model One, and 63.5% in Model Two. This is more than double the influence attributable to risks (24.1% in Model One and 23.8% in Model Two), and more than triple the influence of benefits (14.9% in Model One, and 12.7% in Model Two). The public appear to be much more influenced by what the AI is used for than by the associated risks or benefits.

The figures presented in Table 3 denote the raw importance of each attribute group in respondents’ decisions about which AI scenario they prefer. However, these scores represent the whole impact of the attribute group, and do not distinguish whether that impact is positive or negative. That more granular information is provided by analysis of the individual features within each attribute category.

Table 3: Overview of results for three attributes of AI by Model

Attribute   Model 1  Model 2
Application 60.9%    63.5%
Benefits    14.9%    12.7%
Risks       24.1%    23.8%

CONJOINTQ. Which of these scenarios is your preference for how AI is used? BASE: All online respondents August 2023 (Wave 3) who saw each Model, Model 1 n=2,119, Model 2 n=2,106

8.3 Impact of application on preferences for use of AI

Among the specific AI applications tested, AI being used to detect cancer from an x-ray image had the greatest positive impact on respondent preferences. This echoes other findings across the survey. For example, over half of respondents report that the use of AI in healthcare will have a positive impact (51%), and the public are most likely to select healthcare as the use of AI with the biggest positive impact in society over the next 10 years (34%). Using AI to identify people who need financial support to pay their energy bills also has a positive impact on preferences, though this impact is less strong than the application of AI to detect cancer.

The applications of AI to mark students’ homework and to assess the risk of an individual failing to repay a loan have particularly negative impacts on preferences. The strength of this negative impact could be attributed to the concerns about fairness seen elsewhere in the survey. For example, three in ten (31%) people see the use of AI as having a negative impact on how fairly people are treated in society.

8.4 Impact of benefits on preferences for use of AI

The public appear relatively unmoved by any particular promise that AI may be better than alternative approaches. The tested benefits do differ in whether they have a positive or negative impact. However, the strength of their impact, positive or negative, is so consistently weak that they cannot be viewed as having much weight.

One potential explanation for why this might be the case is the formulation ‘This approach could be … than alternative approaches’ (e.g., more accurate, faster, etc.). It may be that people already assume that AI will be better or worse than alternative approaches. For example, it is unlikely that the use case ‘detecting the presence of cancer’ would be viewed so positively if people did not already assume that AI would be more accurate or faster than (or at least as accurate and fast as) a human doctor when executing this task. If this is the case, listing specific benefits fails to have an impact on people’s decisions, because people already assume the benefits mentioned are present.

8.5 Impact of risks on preferences for use of AI

Much like AI applications, risk features have a complex impact on respondents’ preferences. When presented on their own (as in Model One), some risks have a negative impact, others have little or no impact, and some have a slightly positive impact. Regardless of whether they influence preferences positively or negatively, however, when paired with a governance mechanism (in Model Two), most risks have less influence on respondents’ decision-making.

In Model One, the risk with the greatest negative impact on people’s preferences for AI use is the risk that people’s personal information could be stolen, with older people and female respondents being slightly more averse to this risk than younger people and male respondents respectively. This concern is reflected in other areas of the survey; 57% of the public express concerns that data will not be held securely and could be hacked or stolen.

The risk of bias against certain groups also has a slight negative impact, with that impact being slightly stronger among female respondents compared with male respondents. Meanwhile, the risks that people won’t know whether AI is being used, and that it will be difficult to understand how the technology makes decisions, have a slight positive impact on people’s likelihood of picking a scenario.

As stated, the impact of risks (whether positive or negative) on preferences for AI scenarios is reduced in Model Two, in which the risk is presented alongside a governance mechanism. The changes in the impact of each risk from Model One to Model Two are charted in Figure 17. For example, in Model One, the risk ‘people’s personal information could be stolen’ has a moderate negative impact. In Model Two, where this risk is combined with assurance that steps will be taken to ensure that personal information is safe and secure, the feature’s negative impact is reduced.

The sole exception to the governance mechanism reducing the impact of the risk can be seen in the risk/governance mechanism pairing ‘It will be difficult to know who is responsible if a mistake happens/a human is always responsible for the decisions made’. In this instance, adding the governance mechanism results in the feature going from having no impact to having a slightly positive impact on people’s preferences.

Figure 17: Impact of risk and an associated governance mechanism on preferences for use of AI by Model (Showing % change in proportion of times the option is selected)

CONJOINTQ. Which of these scenarios is your preference for how AI is used? BASE: All online respondents August 2023 (Wave 3) who saw each Model, Model 1 n=2119, Model 2 n=2106

9. Attitudes of those with very low digital familiarity

9.1 Introduction and summary

In addition to a representative sample of UK adults, the tracker survey engages, through telephone interviews, with individuals who have very low digital familiarity. These interviews ensure that attitudes and experiences across the full spectrum of digital engagement are captured.

We define people as having very low digital familiarity if, for at least three of the five statements below, they either agreed with the statement or said that they did not perform the activity in question:

  • I don’t tend to use email
  • I don’t feel comfortable doing tasks such as online banking
  • I feel more comfortable shopping in person than online
  • I find using online devices such as smartphones difficult
  • I usually get help from family and friends when it comes to the internet

Further information regarding the design of the very low digital familiarity sample can be found in the Methodology section.

In summary, those with very low digital familiarity perceive they have less control over their personal data compared with the UK population overall, although this gap is gradually narrowing over time. This group increasingly believes that organisations are held to account for data misuse, and recognises the positive impact data can have at both a societal and an individual level. Despite increased familiarity with AI, the majority of individuals with very low digital familiarity maintain a sceptical outlook on AI’s impact on society, largely expecting a negative or neutral outcome.

9.2 Perceptions of data control and accountability

Those with very low digital familiarity feel they have less control over their data and are more likely to reject the notion that data use benefits all societal groups equally, compared with the overall UK adult population. Approaching half of those with very low digital familiarity disagree that they have control over who uses their data and how (49%), and that all groups in society benefit equally from data use (46%). Both figures are higher than for the UK population overall (40% and 32% respectively). In addition, about half of those with very low digital familiarity agree that data collection and analysis is good for society (51%) and that data is useful for creating products or services that benefit them (47%).

Some interesting changes are seen over time. The share of those with very low digital familiarity who agree they have control over who uses their data and how has increased since Wave 2 (from 19% in Wave 2 to 35% in Wave 3). Furthermore, this group increasingly agrees that organisations are held accountable for misuse of data (from 38% in Wave 2 to 54% in Wave 3).

9.3 Awareness and understanding of AI

Familiarity with AI has risen among those with very low digital familiarity, as with the overall UK population. Just over three quarters (76%) of adults with very low digital familiarity have heard of AI in Wave 3, marking a significant increase since Wave 2 (64%). A third (33%) of those with very low digital familiarity could explain at least partially what AI is, but almost a quarter (24%) have never heard of it. This demonstrates that, despite the increased levels of familiarity with AI among those with very low digital familiarity, there is still a lack of awareness relative to the general population (in which 95% have heard of AI and 5% are unaware).

Despite the overall increase in awareness of AI, only a small proportion of adults with very low digital familiarity think AI will have a positive impact on society. Most of those who are aware of AI associate this technology with either a negative (38%) or neutral (42%) societal impact. Perceptions of negative societal impact of AI have risen steeply from 3% in Wave 2 to 38% in Wave 3, while the proportion viewing AI’s impact as neutral has decreased (42% in Wave 3 compared with 68% in Wave 2). 

10. Methodology

The CDEI’s Public Attitudes to Data and AI Tracker Survey monitors public attitudes towards data and AI over time. This report summarises the third wave (Wave 3) of research and makes comparisons with the first and second waves (Wave 1 and Wave 2).

The research uses a mixed-mode data collection approach comprising online interviews (Computer Assisted Web Interviews - CAWI) and a smaller telephone survey (Computer Assisted Telephone Interviews - CATI) to ensure that those with low or no digital skills are represented in the data.

The Wave 1 CAWI survey ran among the general UK adult population (18+) from 29 November 2021 to 20 December 2021 with a total of 4,250 interviews collected in that time frame. A further 200 CATI interviews with the ‘very low digital familiarity’ sample were conducted between 15 December 2021 and 14 January 2022.

The Wave 2 CAWI survey ran among the general UK adult population (18+) from 27 June 2022 to 18 July 2022 with a total of 4,320 interviews collected in that time frame. A further 200 CATI interviews with the ‘very low digital familiarity’ sample were conducted between 1 and 20 July 2022.

The Wave 3 CAWI survey ran among the general UK adult population (18+) from 11 to 23 August 2023 with a total of 4,225 interviews collected in that time frame. A further 209 CATI interviews with the ‘very low digital familiarity’ sample were conducted between 15 August and 7 September 2023.

Please note that there was a six-month interval between Wave 1 and Wave 2, but a 12-month interval between Wave 2 and Wave 3. Therefore, this report concentrates on the differences between the data from Wave 2 and Wave 3.

We welcome any further feedback or questions on our approach at public-attitudes@cdei.gov.uk.

10.1 Sampling and Weighting

Representative Online (CAWI) Sample

Quotas have been applied to the online sample to ensure that it is representative of the UK adult population, based on age, gender, socio-economic grade, ethnicity, and region. In addition, interlocked quotas on age and ethnicity were used during fieldwork to monitor the spread of age across ethnic groups and ensure a balanced final sample. The online sample was provided by Cint. All the contact data provided is EU General Data Protection Regulation (GDPR) compliant.

The online sample was weighted based on official statistics concerning age, gender, ethnicity, region, and socio-economic grade in the UK to correct any imbalances between the survey sample and the population to ensure it is nationally representative. Random Iterative Method (RIM) weighting was used to ensure that the final weighted sample matches the actual population profile.
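RIM weighting is commonly implemented as raking (iterative proportional fitting), in which weights are repeatedly adjusted so that each demographic margin matches its population target. A minimal sketch with hypothetical sample data and target shares:

```python
# Minimal raking sketch: RIM weighting is commonly implemented as iterative
# proportional fitting, adjusting weights until each margin matches its
# target. Sample data and target shares here are hypothetical.
import numpy as np
import pandas as pd

sample = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "F", "M"],
    "age": ["18-34", "55+", "18-34", "35-54", "35-54", "55+"],
})
targets = {
    "gender": {"F": 0.51, "M": 0.49},
    "age": {"18-34": 0.28, "35-54": 0.33, "55+": 0.39},
}

w = np.ones(len(sample))
for _ in range(50):  # iterate over the margins until they converge
    for var, dist in targets.items():
        current = pd.Series(w, index=sample.index).groupby(sample[var]).sum() / w.sum()
        # Scale each respondent's weight by (target share / current share).
        w = w * sample[var].map(dist).to_numpy() / sample[var].map(current).to_numpy()

print(sample.assign(weight=w / w.mean()))  # weights normalised to mean 1
```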

Where possible, the most up to date ONS UK population estimates have been used for both the fieldwork quotas and weighting scheme to ensure a nationally representative sample. 2021 mid-year population estimates were used for age, gender, and region, and the 2011 Census data for socio-economic groups (SEG). 2011 census figures were used for all countries as fieldwork began before the ONS released SEG figures for England and Wales that used the 2021 census. For ethnicity, we combined information from the 2021 Census data available for England, Wales, and Northern Ireland and the 2011 Census for Scotland.

The online sample weighting used in Wave 3 was updated to reflect the most up to date population estimates and therefore differs slightly from the weighting scheme used in Wave 1 and Wave 2.

Very low digital familiarity (CATI) Sample

For Wave 3, 209 respondents with very low digital familiarity were contacted and interviewed via telephone. The named sample list of respondents’ contact details was provided by Datascope. All the contact data provided is GDPR compliant.

This telephone sample captures the views of those who have low to no digital skills and are, therefore, likely to be excluded from online surveys. They are likely to be affected by digital issues in different ways to other groups. As the answers respondents give to questions may be impacted by how the question was delivered (e.g., by whether they saw it on a screen, or had it read to them over the phone), any comparisons drawn between the CATI and CAWI samples should be treated with caution.

To select those with very low digital familiarity, we asked the screening question below in the Wave 3 telephone survey questionnaire. Respondents qualified for the telephone interview if, for at least three of the five statements, they either agreed with the statement or said that they did not do the activity in question.

Statements

  • I don’t tend to use email
  • I don’t feel comfortable doing tasks such as online banking
  • I feel more comfortable shopping in person than online
  • I find using online devices such as smartphones difficult
  • I usually get help from family and friends when it comes to using the internet

The sample of those with very low digital familiarity was subject to fieldwork quotas and weighted to be representative of the digitally excluded population captured in the FCA Financial Lives 2022 survey. The FCA Financial Lives 2022 survey was chosen as the basis for weighting, rather than other similar datasets such as ONS data and the Lloyds Digital Skills 2022 report, because it includes ethnicity breakdowns in its data tables. We are aware that this group is slightly skewed towards ethnic minority (excluding White minority) adults; the inclusion of an ethnicity breakdown was therefore of great importance.

The quotas used were set on broader bands within key categories of gender, age, ethnicity, employment, UK nations, and regions of England. Random Iterative Method (RIM) weighting was used for this study, such that the final weighted sample matched the actual population profile. Respondents who preferred not to answer questions on age or ethnicity, and those who do not identify as male or female, are given a weight of 1.0.

This weighting of the very low digital familiarity (CATI) sample was used for the first time in Wave 2; Wave 1 CATI data have not been weighted. Data from Wave 1 should therefore not be compared with data from Waves 2 and 3.

Demographic Profile of the Online (CAWI) sample & very low digital familiarity (CATI) sample

The demographic profile of the online and very low digital familiarity (CATI) samples, before and after the weights have been applied, are provided in the following tables.

Online (CAWI) sample

Unweighted n  Unweighted %  Weighted n  Weighted %
Gender        
Female 2274 54% 2173 51%
Male 1926 46% 2027 48%
Identify in another way 15 <1% 15 <1%
Prefer not to say 10 <1% 10 <1%
Age        
NET: 18-34 1152 27% 1175 28%
NET: 35-54 1382 33% 1391 33%
NET: 55+ 1691 40% 1659 39%
Socio-economic classification        
ABC1 2248 53% 2235 53%
C2DE 1977 47% 1990 47%
Region        
Northern Ireland 109 3% 116 3%
Scotland 336 8% 351 8%
North-West 477 11% 465 11%
North-East 174 4% 167 4%
Yorkshire & Humberside 362 9% 342 8%
Wales 205 5% 199 5%
West Midlands 375 9% 374 9%
East Midlands 305 7% 307 7%
South-West 366 9% 366 9%
South-East 561 13% 581 14%
Eastern 385 9% 396 9%
London 570 13% 561 13%
NET: England 3575 85% 3559 84%
Ethnicity        
NET: White 3342 79% 3568 84%
NET: Mixed 175 4% 330 8%
NET: Asian 383 9% 142 3%
NET: Black 226 5% 71 2%
NET: Other 99 1% 114 2%
NET: Ethnic minority (excl. White minority) 883 20% 657 16%

Telephone (CATI) sample

Unweighted n  Unweighted %  Weighted n  Weighted %
Gender        
Female 133 64% 108 51%
Male  76 36% 101 49%
Age        
NET: 18-64 52 25% 70 33%
NET: 65+ 146 70% 128 61%
Region        
NET: England 160 77% 170 81%
NET: Scotland, Wales, Northern Ireland 49 23% 39 19%
Ethnicity        
NET: White 186 89% 168 81%
NET: Ethnic minority (excl. White minority) 20 10% 38 18%

Wave on wave comparability

The same questions were asked in Waves 1, 2, and 3 of the tracker survey to enable comparison between the three time points. For Wave 3, as in Wave 2, we replaced some questions from previous waves and added or removed some case studies and examples within other questions. In future waves we will rotate different items and questions into the survey at different intervals, as annual data points are not required for all. Additionally, question wording has been updated in some instances; these are clearly marked in the data tables with variable names suffixed ‘b’.

The following notable edits were applied to CAWI and CATI survey questionnaires in Wave 3:

  • New demographic question about highest educational level achieved to date in both the CAWI and CATI survey questionnaires.
  • Q1 and Q14: New additions in Wave 3 include ‘HR and recruitment services’ and ‘Banks and other financial institutions’.
  • Q15b and Q16: New additions in Wave 3 include ‘Cost of living’. ‘COVID-19’ has been excluded from the survey in Wave 3.
  • New questions on AI and data regulation have been added for Wave 3.
  • CAWI Conjoint: Change of conjoint design and content between Wave 2 and Wave 3.
  • CATI: New question asking about the greatest risks for data use in society (Q17b).

10.2 Analysis

The data from the CAWI survey has been analysed using a combination of descriptive, conjoint, and regression analysis. The CATI data has been analysed using descriptive analysis only, due to its smaller sample size (the conjoint module was not included in the CATI survey).

Statistical significance and interpretation

When interpreting the figures in this report, please note that only statistically significant differences (at a 95% confidence level) are reported and that the effect of weighting is considered when significance tests are conducted. Significant differences are highlighted in the analytical report and are relative to other directly relevant subgroups (e.g., those identifying as male vs. those identifying as female).

Digital familiarity score

A proxy score for digital familiarity has been used to divide respondents into three groups, based on self-reported confidence in using technology and frequency of use of four digital services. Scores were assigned as follows:

  • Q4b: Respondents score 3 points for each digital service used ‘a lot’, 1.5 for each used ‘occasionally’, and 0 for ‘don’t do at all’, to a maximum of 12 points on this question.
  • Q5: Respondents score points based on their answer: ‘very confident’ = 12 points, ‘somewhat confident’ = 8, ‘not confident’ = 4, ‘not at all confident’ = 0, and any other response = 0.
  • The maximum combined score from Q4b and Q5 is 24.

The distribution of scores was then analysed using the Jenks natural breaks method to identify logical divisions between the groups (a worked sketch of the scoring and grouping follows the list):

  • Low digital familiarity: 0-12.5 (428 respondents)
  • Medium digital familiarity: 13-19 (1630 respondents)
  • High digital familiarity: 19.5-24 (2167 respondents)
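
To make the scoring concrete, here is a minimal sketch assuming hypothetical column names; it applies the published break points directly rather than re-deriving them with the Jenks method:

```python
# Sketch of the digital familiarity proxy score (column names are
# hypothetical; the published Jenks break points are applied as given).
import pandas as pd

Q4B_POINTS = {"a lot": 3.0, "occasionally": 1.5, "don't do at all": 0.0}
Q5_POINTS = {"very confident": 12, "somewhat confident": 8, "not confident": 4}
# 'not at all confident' and any other Q5 response score 0 (via fillna below).

df = pd.DataFrame({  # three toy respondents
    "q4b_1": ["a lot", "occasionally", "don't do at all"],
    "q4b_2": ["a lot", "occasionally", "don't do at all"],
    "q4b_3": ["a lot", "a lot", "occasionally"],
    "q4b_4": ["a lot", "occasionally", "don't do at all"],
    "q5": ["very confident", "somewhat confident", "not at all confident"],
})

service_cols = ["q4b_1", "q4b_2", "q4b_3", "q4b_4"]  # four digital services
usage = df[service_cols].apply(lambda col: col.map(Q4B_POINTS)).sum(axis=1)
confidence = df["q5"].map(Q5_POINTS).fillna(0)
df["score"] = usage + confidence  # maximum 12 + 12 = 24

# Group using the break points reported above (derived via the Jenks method).
df["familiarity"] = pd.cut(df["score"], bins=[0, 12.5, 19, 24],
                           labels=["low", "medium", "high"],
                           include_lowest=True)
print(df[["score", "familiarity"]])
```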

The CATI survey data has been treated as an extension of this grouping, providing a ‘very low digital familiarity’ group.

Conjoint Analysis

Conjoint analysis is a survey-based research approach for measuring the value that individuals place on different features and considerations for decision making. It works by asking respondents to directly compare different combinations of features to determine how they value each one.

A conjoint experiment was created to test preferences for four attributes within AI use scenarios. Those attributes were:

  • the AI application
  • the benefits of using an AI approach
  • the risks of using an AI approach
  • associated governance measures to minimise the risks

The difference in responses allows the researcher to understand which attributes and items are driving preference.

The experiment presented pairs of AI use scenarios (in which the items within each attribute varied) and asked respondents to select the scenario in which they would prefer AI to be used. Each respondent was shown 5 pairs of scenarios, with scenarios allocated to achieve as even a distribution of combinations as possible across the sample. The two scenarios in a pair could share some characteristics, but could not be identical. Even with this methodology, not all combinations can feasibly be shown; this gap is addressed in the analysis phase of the work.

The data captured was analysed in Sawtooth using a combination of logistic regression and a hierarchical Bayesian (HB) algorithm. Each respondent’s data was regressed to create utility scores; these scores can be considered the appeal of an attribute within the AI use scenarios. The utility scores were then used to determine the likelihood of a respondent selecting an attribute or combination of attributes (the propositions displayed in the exercise).

The HB algorithm analysed each respondent’s utilities for the scenarios they were shown, compared them to the sample average, and then estimated their likely choices for scenarios not shown, based on their variation from the sample average.

The utility scores were transformed to show the likelihood that an AI use scenario would be selected, where 50% is the base probability, 100% means the scenario is chosen every time, and 0% means it is never selected.
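
For illustration, under the standard logistic transform a utility of zero maps to the 50% base probability; the exact rescaling applied in Sawtooth may differ:

```python
# A minimal sketch of converting an estimated utility into a choice
# probability with the logistic function.
import math

def choice_probability(utility: float) -> float:
    """Probability of a scenario being chosen over an average alternative."""
    return 1 / (1 + math.exp(-utility))

print(round(choice_probability(0.0) * 100, 1))   # 50.0 -> base probability
print(round(choice_probability(1.2) * 100, 1))   # ~76.9 -> strongly preferred
print(round(choice_probability(-0.7) * 100, 1))  # ~33.2 -> rarely selected
```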

For further reading on HB analysis, please refer to the Sawtooth website.

Regression analysis

Regression analysis is used to test associations between different characteristics and responses, for example, between demographic characteristics and attitudes towards data. This technique can identify the size and strength of these relationships while holding all other variables in the model constant, but it cannot establish cause and effect.

Logistic regression is used to test for associations between a single ‘dependent’ variable and multiple ‘independent’ variables. It is used here because many of the ‘dependent’ variables in this report are survey questions based on Likert scales rather than continuous data. Each such variable is therefore transformed into a binary variable with two categories: for example, ‘agreeing’ with a statement (comprising ‘strongly agree’ and ‘somewhat agree’) and ‘neutral or not agreeing’ (comprising all other responses).

Logistic regression provides us with an ‘odds ratio’ (OR). This tells us the odds of someone with a particular characteristic or attitude reporting, for example, that they agree with a statement, compared with someone with another characteristic or attitude, after taking other possible influences into account. For example, regression analysis run for Wave 3 of this tracker survey showed that non-graduates were less likely (OR = 0.79, meaning 0.79 times as likely) to think that collecting and analysing data is good for society, compared with graduates (Annex, Model 1).
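
A hedged sketch of this modelling approach on synthetic data (all variable names are hypothetical; the published models were fitted to the full weighted survey data):

```python
# Binarise a Likert outcome, fit a logistic regression, and read off
# odds ratios by exponentiating the coefficients.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "age_decade": rng.integers(2, 9, n),                     # e.g. 2 = 20s
    "education": rng.choice(["graduate", "non-graduate"], n),
    "agrees": rng.integers(0, 2, n),  # 1 = strongly/somewhat agree, else 0
})

model = smf.logit("agrees ~ age_decade + C(education)", data=df).fit()

odds_ratios = np.exp(model.params)    # exponentiated coefficients = ORs
or_conf_int = np.exp(model.conf_int())  # 95% confidence intervals on the ORs
print(odds_ratios, or_conf_int, sep="\n")
print(model.aic)  # smaller AIC indicates a better-fitting model
```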

A goodness-of-fit measure, the Akaike Information Criterion (AIC), is reported with all models. It can be used to compare models with the same dependent variable and identify which is the best fit for the data; a smaller AIC indicates a better fit. AIC penalises models for including more variables; therefore, variables that were not found to be statistically significant were removed from the final models iteratively.

We tested selected hypotheses using interaction effects between the type of data mentioned in the question and the individual’s demographic characteristics. An interaction effect occurs when an independent variable’s relationship with the dependent variable changes depending on the value of another independent variable. The effect on the dependent variable is non-additive, i.e. the joint effect of the two interacting variables is significantly greater or less than the sum of their separate effects. Understanding whether this occurs tells us how two or more independent variables work together to affect the dependent variable, and ensures our interpretation of the data is correct. The presence or absence of interactions can be revealed with an interaction plot, and an interaction term can be included in an analytic model to quantify its significance. Where interaction effects are statistically significant, the main effects must be interpreted in light of those interactions.
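
A minimal sketch of fitting and testing an interaction term, again with hypothetical variables and synthetic data:

```python
# In the formula interface, '*' expands to both main effects plus their
# interaction term.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 800
df = pd.DataFrame({
    "data_type": rng.choice(["health data", "financial data"], n),
    "age_group": rng.choice(["18-34", "35-54", "55+"], n),
    "agrees": rng.integers(0, 2, n),
})

# A significant interaction term means the effect of data type on agreement
# differs by age group, so main effects are interpreted alongside it.
model = smf.logit("agrees ~ C(data_type) * C(age_group)", data=df).fit()
print(model.summary())  # interaction rows appear as 'C(...):C(...)'
```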

11. Annex

11.1 Overall attitudes to data

Model 1: Collecting and analysing data is good for society

| Characteristic | Odds ratio | 95% confidence interval | p-value |
| --- | --- | --- | --- |
| Age (decade) | 0.90 | 0.86, 0.93 | <0.001 |
| Education level | | | |
| Graduate (reference) | | | |
| Non-graduate | 0.79 | 0.69, 0.91 | <0.001 |
| Socioeconomic grade | | | |
| ABC1 (reference) | | | |
| C2DE | 0.87 | 0.76, 1.00 | 0.046 |
| Ethnicity | | | |
| White (reference) | | | |
| Asian | 1.01 | 0.80, 1.27 | >0.9 |
| Black | 1.94 | 1.43, 2.65 | <0.001 |
| Mixed | 1.06 | 0.77, 1.47 | 0.7 |
| Other | 0.82 | 0.48, 1.39 | 0.5 |
| Prefer not to say | 0.14 | 0.03, 0.42 | 0.002 |
| AIC | 5,457 | | |

Model 2: Data is useful for creating products and services that benefit me

| Characteristic | Odds ratio | 95% confidence interval | p-value |
| --- | --- | --- | --- |
| Age (decade) | 0.90 | 0.87, 0.93 | <0.001 |
| Regions | | | |
| London (reference) | | | |
| East Midlands | 1.02 | 0.77, 1.37 | 0.9 |
| Eastern | 1.11 | 0.85, 1.46 | 0.4 |
| North-East | 1.49 | 1.04, 2.15 | 0.029 |
| North-West | 1.02 | 0.79, 1.31 | 0.9 |
| Northern Ireland | 1.05 | 0.69, 1.61 | 0.8 |
| Scotland | 1.17 | 0.89, 1.55 | 0.3 |
| South-East | 1.09 | 0.86, 1.40 | 0.5 |
| South-West | 1.28 | 0.97, 1.69 | 0.080 |
| Wales | 1.23 | 0.88, 1.73 | 0.2 |
| West Midlands | 1.30 | 0.99, 1.71 | 0.064 |
| Yorkshire & Humberside | 1.30 | 0.99, 1.72 | 0.059 |
| Socioeconomic grade | | | |
| ABC1 (reference) | | | |
| C2DE | 0.73 | 0.65, 0.83 | <0.001 |
| AIC | 5,569 | | |

11.2 Chatbot usage

Model 8: For personal use, in your day-to-day life

| Characteristic | Odds ratio | 95% confidence interval | p-value |
| --- | --- | --- | --- |
| Gender | | | |
| Male (reference) | | | |
| Female | 0.77 | 0.67, 0.90 | <0.001 |
| I identify in another way | 0.31 | 0.07, 1.04 | 0.082 |
| Prefer not to say | 0.56 | 0.06, 4.21 | 0.6 |
| Age (decade) | 0.69 | 0.66, 0.73 | <0.001 |
| Regions | | | |
| London (reference) | | | |
| East Midlands | 0.74 | 0.53, 1.05 | 0.090 |
| Eastern | 0.75 | 0.55, 1.04 | 0.085 |
| North-East | 0.65 | 0.42, 0.99 | 0.045 |
| North-West | 0.85 | 0.63, 1.13 | 0.3 |
| Northern Ireland | 0.76 | 0.47, 1.22 | 0.3 |
| Scotland | 0.74 | 0.53, 1.02 | 0.071 |
| South-East | 0.85 | 0.64, 1.14 | 0.3 |
| South-West | 0.82 | 0.59, 1.14 | 0.2 |
| Wales | 1.00 | 0.68, 1.47 | >0.9 |
| West Midlands | 0.80 | 0.59, 1.08 | 0.15 |
| Yorkshire & Humberside | 1.07 | 0.78, 1.46 | 0.7 |
| Education level | | | |
| Graduate (reference) | | | |
| Non-graduate | 0.69 | 0.59, 0.81 | <0.001 |
| Ethnicity | | | |
| White (reference) | | | |
| Asian | 1.27 | 0.98, 1.65 | 0.070 |
| Black | 1.80 | 1.30, 2.50 | <0.001 |
| Mixed | 1.58 | 1.11, 2.25 | 0.010 |
| Other | 1.49 | 0.84, 2.66 | 0.2 |
| Prefer not to say | 1.25 | 0.44, 3.51 | 0.7 |
| Awareness of AI | | | |
| Can explain AI (reference) | | | |
| Cannot explain AI | 0.76 | 0.65, 0.89 | <0.001 |
| Trust in big tech companies | | | |
| Trust somewhat or a lot (reference) | | | |
| Do not trust much or at all | 0.68 | 0.59, 0.79 | <0.001 |
| AIC | 4,416 | | |

Model 9: For work or in your job

| Characteristic | Odds ratio | 95% confidence interval | p-value |
| --- | --- | --- | --- |
| Gender | | | |
| Male (reference) | | | |
| Female | 0.63 | 0.53, 0.74 | <0.001 |
| I identify in another way | 0.69 | 0.17, 2.33 | 0.6 |
| Prefer not to say | 0.14 | 0.01, 1.43 | 0.12 |
| Age (decade) | 0.57 | 0.54, 0.61 | <0.001 |
| Regions | | | |
| London (reference) | | | |
| East Midlands | 0.58 | 0.39, 0.85 | 0.005 |
| Eastern | 0.45 | 0.30, 0.66 | <0.001 |
| North-East | 0.55 | 0.33, 0.89 | 0.017 |
| North-West | 0.83 | 0.60, 1.14 | 0.2 |
| Northern Ireland | 0.94 | 0.57, 1.55 | 0.8 |
| Scotland | 0.51 | 0.35, 0.73 | <0.001 |
| South-East | 0.67 | 0.48, 0.92 | 0.013 |
| South-West | 0.58 | 0.40, 0.85 | 0.005 |
| Wales | 0.79 | 0.50, 1.22 | 0.3 |
| West Midlands | 0.64 | 0.46, 0.90 | 0.010 |
| Yorkshire & Humberside | 0.87 | 0.61, 1.22 | 0.4 |
| Education level | | | |
| Graduate (reference) | | | |
| Non-graduate | 0.60 | 0.50, 0.71 | <0.001 |
| Ethnicity | | | |
| White (reference) | | | |
| Asian | 1.56 | 1.19, 2.05 | 0.001 |
| Black | 1.88 | 1.35, 2.62 | <0.001 |
| Mixed | 1.63 | 1.13, 2.34 | 0.009 |
| Other | 2.54 | 1.40, 4.71 | 0.003 |
| Prefer not to say | 0.34 | 0.08, 1.07 | 0.10 |
| Trust in big tech companies | | | |
| Trust somewhat or a lot (reference) | | | |
| Do not trust much or at all | 0.65 | 0.55, 0.78 | <0.001 |
| AIC | 3,473 | | |

11.3 Attitudes on impact of AI in society

Model 10: Perceived impact of Artificial Intelligence (AI) on society overall

| Characteristic | Odds ratio | 95% confidence interval | p-value |
| --- | --- | --- | --- |
| Gender | | | |
| Male (reference) | | | |
| Female | 0.56 | 0.48, 0.65 | <0.001 |
| I identify in another way | 0.29 | 0.08, 0.99 | 0.055 |
| Prefer not to say | 0.00 | | >0.9 |
| Regions | | | |
| London (reference) | | | |
| East Midlands | 0.64 | 0.45, 0.91 | 0.013 |
| Eastern | 0.90 | 0.64, 1.25 | 0.5 |
| North-East | 1.03 | 0.68, 1.58 | 0.9 |
| North-West | 0.74 | 0.54, 1.00 | 0.048 |
| Northern Ireland | 0.83 | 0.50, 1.37 | 0.5 |
| Scotland | 0.96 | 0.69, 1.34 | 0.8 |
| South-East | 0.87 | 0.65, 1.17 | 0.4 |
| South-West | 0.70 | 0.50, 0.98 | 0.038 |
| Wales | 1.10 | 0.74, 1.65 | 0.6 |
| West Midlands | 1.04 | 0.76, 1.44 | 0.8 |
| Yorkshire & Humberside | 0.82 | 0.60, 1.14 | 0.2 |
| Education level | | | |
| Graduate (reference) | | | |
| Non-graduate | 0.77 | 0.65, 0.90 | <0.001 |
| Ethnicity | | | |
| White (reference) | | | |
| Asian | 1.59 | 1.21, 2.09 | <0.001 |
| Black | 2.38 | 1.66, 3.46 | <0.001 |
| Mixed | 1.36 | 0.95, 1.97 | 0.10 |
| Other | 0.63 | 0.33, 1.19 | 0.2 |
| Prefer not to say | 0.67 | 0.24, 1.82 | 0.4 |
| Awareness of AI | | | |
| Can explain AI (reference) | | | |
| Cannot explain AI | 0.50 | 0.43, 0.59 | <0.001 |
| AIC | 4,072 | | |

11.4 Conjoint table

| Significance level: 95% | Model 1 (a) | Model 2 (b) |
| --- | --- | --- |
| Total | 2119 | 2106 |
| Application: Artificial Intelligence (AI) would be used to… | | |
| Mark student’s homework | 33.2 | 34.2 |
| Assess eligibility for welfare benefits | 44.5 | 46.2 **a** |
| Detect the presence of cancer from an x-ray scan | 76.7 **b** | 74.6 |
| Assess the risk of failing to repay a loan | 32.8 | 35.5 **a** |
| Screen CVs to select candidates for a job | 40.4 | 42.1 **a** |
| Identify people who need financial support to pay their energy bills | 61.4 **b** | 58.3 |
| Benefits: This approach could be … than alternative approaches | | |
| More accurate | 49.3 | 49.9 **a** |
| Faster | 48.6 | 49.4 **a** |
| Cheaper | 47.6 | 48.3 **a** |
| Less likely to unfairly discriminate | 54.5 **b** | 52.4 |
| Model 1: Risks / Model 2: Risks with associated governance. However, there may be a risk that… / but the system will take steps to ensure that… | | |
| People’s personal information could be stolen / personal information is safe and secure | 38.2 | 46.2 **a** |
| It will be difficult to understand how the technology makes decisions / the reasons for its decisions can be explained | 56.0 **b** | 47.9 |
| The decision made by the technology will be difficult to challenge / people can choose to appeal the decisions that are made | 52.3 **b** | 51.4 |
| It will be difficult to know who is responsible if a mistake happens / a human is always responsible for the decisions made | 50.8 | 56.8 **a** |
| The technology will be biased against certain groups / bias is identified and reduced | 43.3 | 44.7 **a** |
| People won’t know whether Artificial Intelligence is being used / it is made clear when Artificial Intelligence is being used | 59.8 **b** | 53.0 |
| Overview | | |
| Application | 60.9% | 63.5% **a** |
| Benefits | 14.9% **b** | 12.7% |
| Risks | 24.1% | 23.8% |

NB: A bold letter (a or b) next to a score indicates that it is statistically significantly different from the corresponding score in the other model’s column at the 95% confidence level (a = Model 1 column, b = Model 2 column).

  1. Throughout the report, only differences that are statistically significant (at a 95% confidence level) are reported. Due to the large sample size, even small differences can be statistically significant. 

  2. In this report, we categorise respondents into low, medium, and high digital familiarity groups based on a proxy score derived from their self-reported confidence in using technology and frequency of using four digital services. 

  3. It is important to note that awareness and understanding of AI was self-reported and should therefore not be treated as an objective measure. 

  4. Positive impact is defined as those who answer 8-10 at Q23b, while negative impact is defined as those who answer 0-3 at Q23b. Q23b is as follows: ‘On a scale from 0-10 where 0 = very negative impact and 10 = very positive impact, based on your current knowledge and understanding, what impact do you think Artificial Intelligence (AI) will have overall on society?’