Deepfakes and media literacy

Published 27 May 2025

Rapid projects support government departments to understand the scientific evidence underpinning a policy issue or area by convening academic, industry and government experts at a single roundtable. These summary meeting notes seek to provide accessible science advice for policymakers. They represent the combined views of roundtable participants at the time of the discussion and are not statements of government policy.

What is the evidence for the impacts of deepfakes on voting behaviour? To what extent would a focus on media literacy mitigate any impacts?

Meeting notes from a roundtable chaired by Tom Crick (Chief Scientific Adviser, DCMS) and facilitated by the Government Office for Science.

22 March 2024

Key points 

  • Neither international nor UK-focused research has found significant differences in susceptibility to general disinformation among different sections of society.   

  • While public awareness of deepfakes can cause uncertainty and a lack of trust in institutions generally, it is not clear that deepfakes, or mis- and disinformation more broadly, have significant impacts on voting choice, partly because of the difficulty of accurately attributing causation in the data collected.

Evidence base around the impact of mis/disinformation on voting behaviour

1. There is no clear evidence that deepfakes are currently prevalent in the UK (they are hard to monitor), nor that they pose a serious risk to the integrity of elections – or a more serious risk than narratives or other materials that deliberately mix accurate information with disinformation to maximise their persuasiveness.   

2. There is no clear evidence that mis- or disinformation have significant impacts on voting choice (Eady et al., 2023; Bail et al., 2020; Guess et al., 2020).  

3. However, public awareness of deepfakes can cause confusion, uncertainty and a lack of trust in institutions generally, especially in the media and government (Vaccari & Chadwick, 2020; Weikmann et al., 2024).   

4. The psychological phenomenon known as “the third-person effect” means that we assume other people are more persuadable, or gullible, than ourselves (Osman, 2025; Hall et al., 2023). One study found that people are much less satisfied with democracy when they believe misinformation has influenced others more than it has influenced them (Nisbet et al., 2021).  

5. The cognitive processing of disinformation – including deepfakes – is the same as the processing of accurate information (Harris, 2024). Processing of information across groups is quite uniform. However, certain groups may be specifically targeted by disinformation campaigns (Simchon et al., 2024).

6. There is an intention-action gap in sharing misinformation: people generally hold a strong preference for accurate sharing (Pennycook et al., 2021) but will often share misinformation because their attention is focused on factors other than accuracy, such as in-group expectations.   

7. It should not be assumed that the digital sphere is the dominant source of disinformation, which can also spread initially through face-to-face encounters and within groups and communities, and can then be reinforced in the digital sphere (Strong, 2021).

8. People’s interaction with any information is complex and nuanced. It is rational for people to update and revise their beliefs in light of additional information.

9. Discerning disinformation is difficult in a media environment that is increasingly diverse and fast-moving. It should not be assumed that people holding minority beliefs or believing disinformation are less media literate than those holding mainstream beliefs.

10. The tagging of mis- and disinformation on key platforms such as WhatsApp is poorly understood by users (Hall et al., 2024), making it more difficult to judge what is real or fake.  

11. An analysis of how people respond to fact-checking interventions, such as those by Full Fact, found that seeing the correction of false information did result in participants correcting their beliefs, but that this effect did not last in follow-up research (Horvath et al., 2024).

The role of media literacy

12. Media literacy is a useful tool for increasing awareness and recognition of AI-generated disinformation and for improving the public’s ability to critically access, analyse, evaluate and communicate messages in a variety of forms. It has been shown to build people’s resilience to disinformation.

13. There was general agreement that common media literacy principles (such as assessing the source, content, plausibility, and purpose: TRUE Project, 2024) for online mis/disinformation are also applicable to detecting AI-generated disinformation.   

14. Quick, interactive engagement (gamification) has also been shown to have some efficacy as part of media literacy campaigns (Glas et al., 2023). Co-creation and participatory approaches may be particularly effective (Leurs et al., 2023). 

15. There is a broader question around the balance of responsibility between citizens, content providers and governments to identify and tackle deepfakes, or other AI-generated material.   

16. Citizens may welcome tools that help them to report deepfakes – not just for their own media literacy, but to contribute to research and improve social outcomes (Horvath & Mabbett, 2024).

Attendees

  • Tom Crick (Chair, DCMS)
  • Andrew Chadwick (Loughborough University)
  • Colin Strong (Ipsos)
  • Gabriela Jiga-Boy (Swansea University)
  • Jens Madsen (LSE)
  • Joanna Burkhardt (University of Rhode Island)
  • Jon Roozenbeek (University of Cambridge)
  • John W5 (NCSC)
  • Lasana Harris (UCL)
  • Laszlo Horvath (Birkbeck University)
  • Lee Edwards (LSE)
  • Magda Osman (University of Cambridge)
  • Max Mawby (Thinks Insight)
  • Yvonne McDermott Rees (Swansea University)

References

Bail, C.A., Guay, B., Maloney, E., Combs, A., Hillygus, D.S., Merhout, F., Freelon, D. and Volfovsky, A. (2020). Assessing the Russian Internet Research Agency’s Impact on the Political Attitudes and Behaviors of American Twitter Users in Late 2017. PNAS, 117, pp. 243–250.

Eady, G., Paskhalis, T., Zilinsky, J., et al. (2023). Exposure to the Russian Internet Research Agency Foreign Influence Campaign on Twitter in the 2016 US Election and Its Relationship to Attitudes and Voting Behavior. Nature Communications, 14, Article 62.

Glas, R. et al. (2023) ‘Literacy at play: An analysis of media literacy games used to foster media literacy competencies’, Frontiers in Communication, 8. doi:10.3389/fcomm.2023.1155840.   

Guess, A. M., Nyhan, B., & Reifler, J. (2020). Exposure to untrustworthy websites in the 2016 US election. Nature Human Behaviour, 4(5), pp. 472–480.

Hall, N.-A., Chadwick, A., Vaccari, C., et al. (2024) Research update: Misinformation on personal messaging—are WhatsApp’s warnings effective? (Public report). Loughborough University. Available at: https://hdl.handle.net/2134/25211891.v1 (accessed 25 March 2024).

Hall, N.-A., Chadwick, A. and Vaccari, C. (2023) ‘Online misinformation and everyday ontological narratives of social distinction’, Media, Culture & Society. doi:10.1177/01634437231211678. 

Harris, L.T. (2024) ‘The neuroscience of human and Artificial Intelligence presence’, Annual Review of Psychology, 75(1), pp. 433–466. doi:10.1146/annurev-psych-013123-123421. 

Horvath, L., & Mabbett, D. (2024). Project report: Exploring citizens’ responses to science in public policy through natural language processing and conjoint experiments. The British Academy. https://www.thebritishacademy.ac.uk/publications/exploring-citizens-responses-science-public-policy-through-natural-language-processing-conjoint-experiments/   

Horvath, L., Stevens, D., Banducci, S., Popp, R., & Coan, T. (2024) ‘Correcting campaign misinformation: Experimental evidence from a two-wave panel study’, Harvard Kennedy School Misinformation Review. doi:10.37016/mr-2020-132.

Leurs, K. et al. (2023) ‘Participatory action research and media literacy’, in Media Literacy and Media Education Research Methods. Routledge.  

Nisbet, E.C., Mortenson, C. and Li, Q. (2021) ‘The presumed influence of election misinformation on others reduces our own satisfaction with democracy’, Harvard Kennedy School Misinformation Review [Preprint]. doi:10.37016/mr-2020-59.   

Osman, M. (2024) Disinformation is often blamed for swaying elections – the research says something else, The Conversation. Available at: https://theconversation.com/disinformation-is-often-blamed-for-swaying-elections-the-research-says-something-else-221579 (Accessed: 25 March 2024).   

Osman, M. (2025). Evidencing the Impact of Misinformed and Disinformed Beliefs on Individual and Group Behaviors. Psychological Inquiry, 36, 49-56. https://doi.org/10.1080/1047840X.2025.2482355 [Note: This was published after the roundtable took place and retrospectively included at the request of the author.] 

Pennycook, G. et al. (2021) ‘Shifting attention to accuracy can reduce misinformation online’, Nature, 592(7855), pp. 590–595. doi:10.1038/s41586-021-03344-2.   

Simchon, A., Edwards, M. and Lewandowsky, S. (2024) ‘The persuasive effects of political microtargeting in the age of Generative Artificial Intelligence’, PNAS Nexus, 3(2). doi:10.1093/pnasnexus/pgae035.   

Strong, C. (2021) Tackling conspiracy theories. Ipsos. Available at: https://www.ipsos.com/en-uk/tackling-conspiracy-theories (Accessed: 25 March 2024).

TRUE Project (2024). Evaluating digital open source imagery: A guide for judges and fact-finders. Published online at www.trueproject.co.uk/osguide (accessed 13 May 2025).

Vaccari, C., & Chadwick, A. (2020) ‘Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News’, Social Media & Society. https://doi.org/10.1177/2056305120903408 

Weikmann, T., Greber, H. and Nikolaou, A. (2024) ‘After deception: How falling for a deepfake affects the way we see, hear, and experience media’, The International Journal of Press/Politics [Preprint]. doi:10.1177/19401612241233539.