The right moment for digital safety
Published 12 September 2025
Acknowledgements
Our thanks to Professor Simeon Yates (University of Liverpool) and Frances Yeoman (Liverpool John Moores University) for their valued guidance and contributions to this project. We also thank our colleagues at the Department for Science, Innovation and Technology (DSIT). We are also grateful to our colleagues at BIT, Eva Kolker and Nilufer Rahim, for their oversight and quality assurance during the project. Finally, we thank DJS Research Limited for their support with recruiting our qualitative research sample and, most importantly, all the individuals who contributed their time and energy to participate in our interviews.
Executive Summary
Media literacy refers to the public’s ability to navigate online environments safely and securely. Addressing the gap between media literacy provision and uptake is a key challenge: a survey conducted by the Behavioural Insights Team (BIT) found that at least 23% of the UK population had never looked for information on media literacy.[footnote 1] We were commissioned by the Department for Science, Innovation, and Technology (DSIT) to explore how behavioural insights can be applied to address this issue.
While lack of engagement with media literacy resources is widespread,[footnote 2] our prior research also shows that when users do engage with media literacy, it is sparked by a ‘moment of need’ - that is, a moment when they experience a need for media literacy skills or support.[footnote 3] These moments of need give media literacy providers an opportunity to deliver the right skills and information at the right time - during a moment when people may be more likely to take up these resources.[footnote 4]
This research explores this concept of ‘moments of need’ to identify when and how these moments can be leveraged to encourage the uptake of media literacy resources to improve skills and knowledge. We conducted 33 semi-structured interviews with UK participants to understand how people perceive moments when they may need media literacy support, and what they do when they encounter these moments. We explored the barriers and facilitators to participants perceiving and responding to three specific moments of need:
- Encountering mis/disinformation online
- Encountering hateful or abusive content
- Encountering risks to children’s online safety
We also asked participants about their experiences engaging with media literacy resources as a mechanism of support. Participants varied in their level of digital inclusion,[footnote 5] as well as other socio-demographic characteristics.
A key implication of this research is that adults and children are regularly exposed to an overwhelming volume of online content and information - both benign and harmful - that hinders their ability to proactively take action to protect and educate themselves. Upstream, platform-level interventions such as downranking harmful content to reduce its circulation may be the most effective way to prevent harm - and research shows that individuals support these measures as well.[footnote 6] These should be implemented in conjunction with downstream interventions, particularly those that help people recognise when a moment of need calls for greater engagement, and that provide them with the skills and confidence to respond effectively. Based on the findings of this research, recommendations for organisations working to improve media literacy include:
- Developing resources that address the identified skills gap for parents with low levels of digital inclusion around what to do when encountering challenges to their children’s online safety. This work also indicates the need for further research to identify skills and capability gaps for other priority groups and moments of need.
- Leveraging social networks such as friends and family, both in media literacy interventions and in message and communications framing.
- Promoting the use of consistent choice architecture across online platforms for key media literacy actions such as reporting inappropriate content or misinformation - for example, having a consistent ‘three dots’ menu.
Key Findings
Awareness and the Paradox of Passivity. Participants were aware of the potential risks across all three journeys, understanding the prevalence of misinformation and hateful content online and the range of dangers involved in keeping their children safe online. For example, when it came to their own consumption of online content, participants considered the sources and platforms they used to ensure they were consuming news or information they trusted and considered safe. Once that decision was made, however, they took a passive approach, often choosing to ignore and scroll past problematic content such as potential misinformation or hateful or abusive content. It was rare for participants to consider other responses, such as verifying information they saw online, or on-platform responses such as reporting or blocking content and users. This was primarily due to the cognitive overload created by the volume of problematic content: given how often participants reported encountering this kind of content, they preferred to scroll past it rather than take action each time. Further, there was a level of mistrust or cynicism - a sense that their actions would not lead to any positive outcome. This revealed a paradox: while participants recognised or perceived a moment of need when encountering problematic content, they chose to remain passive rather than respond or take any further action.
Parental motivation and ability were moderated by level of digital inclusion. When it came to children, parents were more cognisant of the range of issues their children could encounter, and responded with a mix of talking with their children about safety and implementing certain safeguards and restrictions. Parents’ motivation and ability to respond to challenges regarding their children’s online safety was heavily influenced by their own level of digital inclusion. High digitally-included participants were more likely to have regular conversations across a range of topics compared to low digitally-included participants, who were generally more reactive to specific experiences or triggers in their children’s lives. High digitally-included participants also implemented a broader range of controls and strategies to keep their children safe, including router-level controls, app store restrictions, explicit rules on social media, blocking certain programmes, and parental controls on TV. Low digitally-included participants had less awareness and knowledge of these strategies, or had tried them and found them difficult.
Barriers to taking action upon encountering misinformation, hateful or abusive content, or challenges regarding their children’s safety online.
- Cognitive overload and disengagement. Cognitive overload from the volume of content participants encountered, or from the wide range of potential risks their children could face, paradoxically led to disengagement from action. Negative news and its associated emotional toll also led people to reduce engagement with online content altogether, while difficulty navigating contradictory information from different sources further discouraged action.
- Skills gap and choice architecture. A lack of knowledge and/or digital skills, or a poor self-assessment of these, also served as a barrier to action. This may have been exacerbated by complicated and inconsistent online choice architecture, which made it difficult for participants to navigate settings across platforms.
- Lack of personal connection. Participants were sometimes indifferent to some of these moments of need (such as when encountering hateful content) because the content did not affect them personally.
- Concerns around reporting. Reporting concerns was a daunting prospect, fuelled by anxieties about potential repercussions or negative past experiences.
- Parenting challenges. For parents, key challenges included having conversations with their children about online safety without creating conflict, and balancing the child’s desire for privacy and autonomy with the need to protect them.
Facilitators to taking action
While people rarely took action across the first two journeys, a few factors facilitated some response to encountering misinformation and hateful speech, and helped parents respond appropriately to challenges their children face online:
- Support networks. Having a support network to help build digital skills was seen as valuable.
- Trusted messengers. Particularly for parents, having access to support and resources from schools, online websites and groups was beneficial. When it came to their own news consumption as well, people relied on trusted messengers as a strategy to avoid misinformation.
- Consistent choice architecture. Across journeys, participants reported using the ‘three dots’ menu to access settings. Consistent choice architecture across platforms and easy reporting mechanisms were also seen as facilitators to taking action.
- Personal connection. Having a personal connection to news or hateful content, such as information or posts that could affect them or their loved ones, was seen as a facilitator to taking action when encountering misinformation.
Experiences and impressions of media literacy resources
While participants were generally unaware of existing media literacy resources, feedback on those presented suggested a preference for user-friendly, visually appealing platforms that are actively promoted through trusted channels like schools, television advertisements, or even social media. Feedback on the resources was mixed: some found them useful and comprehensive, while others expressed scepticism about their effectiveness and perceived bureaucratic barriers on government websites, should resources be made available there.
1. Introduction
Media literacy refers to the public’s ability to navigate online environments safely and securely. It is underpinned by five key media literacy principles: privacy protection, understanding of digital environments, critical analysis of content, awareness of real-world consequences, and the ability to engage positively online.[footnote 7] As we spend an increasing share of our lives online, ensuring that people are equipped with the right media literacy support and resources is crucial to help them navigate an ever-changing online landscape. The UK Online Safety Act 2023 aims to protect users from harmful online content by requiring social media companies and search services to implement safety measures. It focuses on safeguarding children from age-inappropriate content and ensuring adults have control over what they see. Ofcom also has targeted media literacy duties under the Act, which involve promoting users’ understanding of how to participate effectively and safely online. Media literacy activities support this by teaching users to critically evaluate online information, enhancing their ability to navigate the digital world safely.
However, research shows that individuals in the UK generally lack the skills and knowledge required to develop strong media literacy capabilities, and therefore to protect themselves from risk and harm.[footnote 8], [footnote 9] Despite a rich landscape of media literacy provision, with over 170 organisations delivering or having delivered initiatives in this space,[footnote 10] UK adults’ take-up of these provisions and training is low: research from the Behavioural Insights Team (BIT) found that at least 23% of the UK population had not looked for any information on media literacy, and a majority had only looked for information on three or fewer of the five media literacy principles outlined earlier.[footnote 11] Bridging this gap between the provision of media literacy initiatives and their uptake is crucial to ensure users feel empowered and safe online. We were commissioned by the Department for Science, Innovation, and Technology (DSIT) to explore how behavioural insights can be applied to address this issue.
In a previous research project, we found that, while lack of engagement with media literacy resources was widespread, users reported that when they did engage in media literacy, it was sparked by a ‘moment of need’ - that is, a moment when they experienced a need for media literacy skills or support.[footnote 12] Moments of need can be varied: for example, when using a new online service, users may need to understand how the recommendation algorithm works. Or when a user’s child wants to create an account on a social media platform, the parent may need support in understanding and setting up appropriate parental controls. These moments of need give media literacy providers an opportunity to deliver the right skills and information at the right time - during a moment when people may be more likely to take up these resources. This aligns with the concept of ‘timely moments’ within behavioural science - targeting people at these specific moments is more likely to encourage behaviour change.[footnote 13]
This research explores this concept of ‘moments of need’ further in order to identify when and how these moments can be leveraged to encourage the uptake of media literacy resources to improve skills and knowledge.
We were guided by the following research questions:
- Where in their daily lives do people encounter a moment when they need media literacy skills?
- What do people do at these moments of need and why?
- What type of resources are people likely to engage with and where should this information be hosted?
This project consisted of two phases of research: 1) an initial rapid evidence review to identify which moments of need would be most impactful to target; and 2) in-depth qualitative research with a range of UK adults with differing levels of digital inclusion. This report summarises the key findings from the qualitative research.
1.1 Overview of qualitative research
We conducted 33 semi-structured interviews aimed at understanding individuals’ perceptions and actions upon encountering an online moment of need - that is, a moment in which they may require media literacy support.[footnote 14] Based on the evidence review and feedback from DSIT and academic partners, we explored the barriers and facilitators to participants perceiving and responding to three specific moments of need:
- Encountering mis/disinformation online
- Encountering hateful or abusive content
- Encountering risks to children’s online safety
We also asked participants about their experiences engaging with media literacy resources as a mechanism of support.
We were guided by the following research questions:
- How do individuals perceive the identified moments of need?
- How do individuals respond to the identified moments of need? What actions, if any, do they take? Why and how?
- What are the barriers and facilitators to individuals engaging with existing media literacy resources or support upon encountering these moments of need?
These semi-structured interviews were also aimed at understanding how people’s actual journeys and actions aligned with the ideal actions DSIT would like users to take upon encountering these moments of need. With guidance from DSIT, we drafted an ‘idealised’ user journey for each of the three moments of need. Data from the semi-structured interviews were used to map out the ways in which people’s actions aligned with and differed from these journeys, helping us to identify potential touchpoints for further investigation and intervention. However, these user journeys should be understood as aspirational models that presume a highly engaged, well-resourced, and motivated user. They do not fully account for the cognitive, emotional, and practical constraints that may limit individuals’ ability to follow such paths in practice, and are therefore not intended to serve as definitive or universally applicable solutions.
We detail our methodology below, followed by our key findings organised by journey. Research questions 1 and 2 above are explored within the findings for each journey, and research question 3 is explored in the final findings section prior to the conclusion.
2. Methodology
We conducted 33 online semi-structured interviews, lasting 60 minutes each. These were designed to probe participants’ perceptions and actions across each user-journey for each moment of need.
We conducted 23 interviews exploring people’s views and actions on the first two journeys (covering both misinformation and hateful/abusive content). We then conducted 10 interviews exploring the third journey on parenting.
The interviews were aimed at understanding participants’ real life experiences with each moment of need. However, where individuals had not encountered such a need and/or struggled to recall specific instances, we provided illustrative examples and explored their hypothetical responses.
2.1 Sampling
We spoke to individuals with varying levels of digital inclusion and a mix of socio-demographic characteristics such as gender identity and age. Detailed sampling matrices can be found in the [appendix].
Of the 23 participants we interviewed for journeys 1 and 2, 14 had low digital inclusion levels, 5 had medium digital inclusion levels, and 4 had high digital inclusion levels. Of the 10 parents we interviewed, 4 had low digital inclusion levels, 2 had medium digital inclusion levels, and 4 had high digital inclusion levels.[footnote 15]
The sample of parents varied in terms of their children’s age, the number of children they had, and whether they were single- or couple-led families.
We also incorporated any relevant insights participants offered on the journeys they were not explicitly interviewed about (for example, insights on children’s online safety expressed by journey 1 and 2 interview participants).
Limitations
This report summarises participants’ self-reported perceptions and actions in relation to the content they see online, which may not cover all the actions they take in their lived reality. Social desirability bias may also have played a role, with participants potentially matching their responses to what they assumed the researcher would like to hear; this risk is greater for research exploring behavioural responses to more sensitive topics. We attempted to minimise the risk of social desirability bias by probing participants carefully at multiple junctures during the interviews. We also reassured participants about the confidentiality of their responses and their anonymity at the beginning of the interview. The candour of their responses about their actions and beliefs, as well as the rapport established at the start, gives us comfort that the impact of social desirability bias is minimal.
We also had some limitations with our sample:
- As there is no established measure of digital literacy, we instead used the more widely used measure of digital inclusion as a proxy. While levels of digital inclusion generally align with digital literacy, there may be divergences (for example, a highly digitally literate person who chooses to use the internet sparingly).
- We chose to oversample low and medium digitally-included individuals, particularly for journeys 1 and 2, as these groups are most likely to require media literacy support and resources. However, this means that comparisons with high digitally-included individuals are limited.
3. Key Findings
3.1 Journey 1: Encountering mis/disinformation online
RQ1. How do individuals perceive the identified moments of need? People instinctively identified moments of need when it came to viewing misinformation, even if they did not term them as such. They used a range of heuristics or shortcuts to check the legitimacy of news, but did not use these strategies consistently or regularly. People consumed content passively, and scrolled through large volumes of content relatively quickly. Participants expressed a general distrust of both online news, due to misinformation concerns, and mainstream media, due to perceived bias, creating challenges in navigating online information. Participants exhibited a negativity bias, preferring to disengage from news in general.
RQ2. How do individuals respond to the identified moments of need? What actions, if any, do they take? Why and how?
The most common response was to ignore it and scroll past. People rarely did more than this; further actions were rarely taken, whether on-platform (such as reporting) or off-platform (such as independently searching for or verifying the information).
If they do take any action, what are these? Some participants mentioned trying to verify information by searching online (primarily Google), but they faced challenges navigating search results and lacked confidence in their skills.
Key barriers to individuals taking action include:
- Cognitive / information overload from overwhelming amounts of online content.
- Deliberate ignorance bias, or the active decision participants may be taking to disengage from online news due to perceptions around its negativity, emotional toll, etc.
- Challenges navigating contradictory information, especially between mainstream and social media. Existing notions of the trustworthiness of mainstream and online news also affected motivation to take action.
- Lack of knowledge and/or digital skills, or a poor self-assessment of these.
One strategy to reduce the amount of misinformation they encountered was choosing where to get their news.
Other facilitators to taking action when encountering misinformation were having a personal connection to the news (such as news that could potentially affect their children) and having a supportive social network (friends and family) that helped them build digital skills. Consistent choice architecture across platforms (‘three dots’ menu) and easy reporting mechanisms were also seen as facilitators to taking action.
In this journey, we asked participants about the following:
- Their consumption of news (offline and online)
- Where they encountered this news online and in what format
- Their process for assessing the quality of online news, and
- What actions they have taken upon encountering misinformation or information they do not trust, or their hypothetical actions if they encountered such content.
We were guided by the following idealised user journey. However, as noted previously, these user journeys should be understood as aspirational models that presume a highly engaged, well-resourced, and motivated user, and are not intended to serve as definitive or universally applicable solutions.
Figure 1. Idealised user journey upon encountering misinformation
3.1.1 Profile of participants
Low and medium digitally-included participants saw themselves as ‘limited’ internet users, highlighting their underconfidence and wariness of using the internet. They used descriptions like not being ‘tech savvy’ and said they depended on guidance and support from their children and others in their social network, whom they saw as more technologically adept, to use the internet. For these participants, infrequently going online was also a product of their jobs not requiring them to use the internet. Social media was the most common platform accessed online.
However, it is important to note that for some of these participants, there was incongruence between their perception that they went online infrequently and their actual usage (for example, checking their phone multiple times a day). Thus, there may be some degree of misperception by participants of their own internet usage and comfort.
High digitally-included participants were more likely to express confidence and comfort being online, although there was a group that still expressed some wariness around online shopping and banking. They indicated they went online more, and/or used the internet for their jobs.
3.1.2 Reading, watching, and assessing the quality of news
Mainstream media enjoyed an advantage owing to trust and familiarity.
Participants highlighted that television, newspapers, and radio were the most common sources of news and information. These primarily comprised mainstream local and national news sources (BBC, The Sun, The Telegraph, ITV, Channel 4, Sky News / Sports, Radio 4, Good Morning Britain, Manchester Evening News, etc.). The primary reasons for reading, watching, listening to, and trusting these sources were that they are well-known and well-established, often having been around for generations (for example, participants’ parents also used these sources). In our sample, older participants (61+ years) were more likely to use these news sources, while younger participants relied on a more mixed set of sources.
Online news consumption is on the rise, but participants expressed mistrust even as they consumed it.
Reliance on online news sources was less widespread across the sample. Younger participants (45 and under) were more likely to consume online news via Facebook, Twitter/X, YouTube, or TikTok. Among those who preferred online news, the preference stemmed from a lack of time, lack of trust in mainstream media due to perceived ideological bias, the costs of subscriptions or paywalls to access mainstream media, and the ability to tailor the news they see to their interests on social media platforms. When asked which online voices they personally trusted, some participants mentioned YouTube podcast presenters (for example, Joe Rogan, Piers Morgan) and TikTok influencers (for example, NewsDaddy). Participants who trusted these sources often distrusted mainstream media.
Older participants (46+) were more cautious of online news, often warned by family not to trust it. This mistrust, especially among those aged 61+, arose from fears of scams and a lack of tech-savviness, leading them to prefer established offline sources like newspapers and TV. Participants described feeling paranoid or overly cautious about online engagement, fearing mistakes or fraud.
As a result of this mistrust, participants suggested that they simply would not believe what they read online unless it was echoed or repeated by a trusted or established offline news source, such as a newspaper or TV.
“Sometimes with the internet, you don’t know how true it is […] when it’s on the television you know it’s true.” Low digitally-included participant, 46 - 60 years
Negativity bias, where negative news is more salient, may be putting some people off consuming news altogether.
Some participants in the sample, particularly those with medium and low levels of digital inclusion, indicated that they tried to avoid engaging with news - both offline and online - as much as possible, painting a picture of news as ‘negative’ and full of ‘doom and gloom’. They did not express much interest in actively consuming news and instead, if at all, were more passive consumers, reading news that popped up on their social media feeds or leaving a TV news channel on in the background.
Passive consumption appears to be the norm on social media, limiting exposure to diverse viewpoints and limiting active efforts to verify news.
Participants suggested that when they engaged with news on social media, mainly Facebook, they only did so if it interested them, often ignoring other types of content.
“If it’s local news to me, I’d probably click into it […] I think it just depends on what it’s about […] Is it going to affect me? Like, I don’t know, is it about an airport being closed that I might need to be aware of? Or the M62 is closed? It’s kind of like is it relevant [to me]? But if it’s not […] [like] a fire happening in Spain […] then I probably wouldn’t click on it.” High digitally-included participant, 46 - 60 years
Understanding of content algorithms was limited, though some participants realised they saw posts due to past interactions. Among those who were aware of algorithms, there was a mixed view on the value of these: while some appreciated algorithms for delivering tailored content, others expressed frustration or annoyance at seeing posts they did not actively choose to see (by, for example, following the page).
“Sometimes [news] will pop up unannounced but I don’t tend to go into it to find the news, no […] it sort of just tends to come up there now and again. I must have clicked on it once for it to keep coming back again […] It’s a bit frustrating when you don’t ask for something and it pops up unannounced.” Medium digitally-included participant, 61+ years
“I use YouTube on TV to pick the [news] clips I want to watch because there’s an algorithm involved so it knows what I like and what I’ve been watching so it gives me stuff I want to watch anyway. Whereas, I always feel like with the news, when you’re watching, you’re not really in charge and some of it can be really boring.” Low digitally-included participant, 31 - 45 years
This passive consumption meant participants rarely assessed the quality of posts, usually ignoring or sceptically consuming them. Generally, participants focused on whether news was relevant or interesting rather than trustworthy. This focus on passive consumption also meant that participants said they would not regularly share or comment on these posts.
People instinctively use several shortcuts to check the legitimacy of online news, but do not use these strategies consistently or regularly.
Participants indicated that they rarely chose to actively assess the legitimacy of news using formal or consistent processes. Rather, participants chose to engage in ad hoc strategies, as and when they wanted to. We outline below some hypothetical strategies that people said they would use to evaluate news quality.
- Source recognition was the key indicator of legitimacy, rather than any independent assessment of source credibility, potential bias or agendas: The source of information was primarily used to evaluate its credibility, suggesting messenger effects influence perceived legitimacy. Participants trusted recognised and long-established sources like BBC, Sky, ITV, and LBC. While some distrusted mainstream media, they trusted popular social media channels and influencers like Piers Morgan, Joe Rogan, and NewsDaddy, primarily because these were seen as ‘professional’ and more truthful. These preferred sources were rarely scrutinised further, with limited acknowledgement that news sources might have their own agendas and political ideologies.
- Visual cues were another common indicator of legitimacy: Professionalism in presentation, good grammar, and good quality photos accompanying articles were seen as signs of legitimate news, while poor presentation, bad grammar, spelling errors, usage of bad language (swearing) were all highlighted as indicating untrustworthiness.
- Comments were another source of legitimacy: Some participants read comments from other readers to gauge news trustworthiness, though it was unclear how they assessed the reliability of these comments.
- Lack of anonymity: Information from non-anonymous, verifiable sources was trusted more.
- Lack of click-bait or tabloid-like content: Sensationalist headlines and mismatched content reduced perceived legitimacy. One participant noted that they were wary of content that seemed designed to influence beliefs or purchases.
- Recent publication: Participants would look at the date of articles, avoiding those that were a few years old.
Case Illustration 1: Strategies to assess the quality of news
“Sam” is a man in his mid-30s. He doesn’t use the internet much: mainly to stay in contact with his family and to answer emails for work. He regularly gets his news from newspapers and from watching TV shows like Good Morning Britain.
Sometimes friends and family send him news on WhatsApp - he likes reading and engaging with these articles, but also uses different ways to assess quality.
The main indicator of legitimacy he looks at is the link of the news article. The first half of the link indicates the outlet: if it’s a source he trusts and finds reliable like the BBC, the news has more credibility. On the other hand, he would trust the post less if it has a link from a random person on social media.
He would then look at how the information is presented - whether it looks professional or manipulated. He also assesses whether he’s being sold something or whether he’s being influenced to benefit the person sharing or posting the information.
Finally, he would look at the person or page posting the information and assess whether they seem reliable or trustworthy.
“There’s so much false information out there that’s manipulated and you don’t know what’s true and what isn’t. I don’t like to fall into the trap of being one of these people who’s like, ‘ah have you seen that [on the news]?’ and it’s just fake news, basically.” Low digitally-included participant, 31 - 45 years
3.1.3 Encounters with misinformation and actions they take
Participants reported not often seeing or recalling examples of misinformation online.
This might be due to difficulty in identifying misinformation during interviews or inconsistent use of strategies to assess news quality. According to DSIT’s idealised user-journey (Figure 1), when people are not able to identify misinformation, they should develop skills to identify false content, seek credible support, and engage with educational resources. However, these suggestions may be too demanding for some, and simply ignoring false information can also be a valid response. Interviews revealed that participants were generally demotivated or indifferent to critically analysing online content, particularly on social media, highlighting barriers to actively engaging with misinformation. We highlight these barriers to more actively engaging with misinformation in section [3.1.5].
Some examples of misinformation that participants mentioned include:
- False reports on celebrity illness or death;
- Articles with misinformation on products, for example, ‘miracle health cures’;
- Misinformation related to Covid vaccines or other Covid related information;
- Sports related disinformation (for example, around football club transfers);
- False reports about missing children.
Some participants spoke of misinformation that others in their social network (mothers, brothers, partners, etc.) actively consume such as:
- Conspiracy theories around Covid and vaccination;
- Misinformation on investing, particularly in cryptocurrencies;
- ‘Clickbait’ or manipulated news on national / international events such as the war in Ukraine, Gaza, etc.
Covid as a turning point
The Covid-19 pandemic and lockdowns stood out as a particularly key juncture for participants when it came to mistrusting ‘mainstream’ media, potentially moving towards trusting conspiracy theories or increasingly relying on social media for news and information. For example, one participant noted an increase in negative vaccine experiences shared on Twitter during lockdowns, reinforcing her pre-existing reluctance to get vaccinated. This existing aversion was built on information she received from (self-described) ‘alternative’ news channels like LBC, where she felt people had more freedom to express their opinions and were less ‘controlled’ than on other mainstream media channels. Conflicting information from authorities and scientists during the pandemic, compounded by her child’s negative Covid experience, further deepened her scepticism towards government advice. Similarly, another participant said she remained sceptical of the Covid vaccine after seeing information and videos about it all over Facebook. Other participants spoke of people in their social network falling into ‘rabbit holes’ of Covid conspiracy theories on Facebook and YouTube, which increased their mistrust in the government. They spoke of finding it difficult to convince these individuals to search for information or evidence from other sources.
3.1.4 Responses to misinformation
The main action that people took when encountering (potential) misinformation online was to scroll past it.
Participants generally indicated a passive response to encountering misinformation online, typically opting to scroll past or ignore it without engaging further. This was a common pattern across all participants, reflecting a general indifference towards verifying the accuracy of posts. Participants viewed the process of fact-checking as cumbersome and time-consuming, with uncertainties about where to find reliable information.
This general indifference to misinformation meant that we were only able to explore the potential actions participants might consider, rather than understanding the actions people had actually taken. This may also be due to the fact that our sample was more heavily skewed towards low and medium digitally-included individuals, whereas the high digitally-included participants in the sample seemed more proactive. Participants suggested that the main factor that would prompt them to engage in these actions in real life is if they felt personally affected by the news - for example, if it is related to information that might impact their children.
“You’ll never find out what’s going on […] it’s a waste of time and I’ll just enjoy the moment.” Low digitally-included participant, 18 - 30 years
When probed on other actions they might consider taking beyond scrolling past (even if they rarely do so), participants said they would Google it.
A common potential response to encountering misinformation was verifying it via a search engine, specifically Google. However, participants described feeling overwhelmed by the abundance of search results when using Google to verify information, and had developed different heuristics or rules of thumb for processing search results:
- Top result bias: Some participants clicked on the first search result assuming it to be the most relevant and trustworthy. However, there was a lack of awareness that these results could be sponsored (paid for placement).
- Consulting initial sources: Some suggested that they would start at the top and then go through a few other sources to get a more conclusive answer. Initial sources were viewed as more trustworthy than later sources.
- Google’s summary: Some participants relied on Google’s summary provided on the search results page before verifying it with additional sources.
- Trusted sources: Participants also indicated a preference for clicking on sources they recognised and trusted, such as well-known media outlets like BBC or reputable magazines like Men’s Health. However, identifying reliable sources was seen as challenging and often relied on trial and error.
- Concerns about scams: There was a general apprehension about clicking on links within search results due to fears of scams or viruses, which further hindered participants’ confidence in using search engines effectively.
High digitally-included participants expressed more confidence in their Googling strategies. In comparison, low and medium digitally-included participants were more unsure or underconfident about their strategies, or were likely to simply go with the top result without further critical thinking.
However, verifying via Google was not everyone’s first instinct - other strategies included:
- Directly searching a trusted news source’s website (for example, BBC)
- Searching within social media websites like Facebook, Twitter, TikTok or YouTube. It is unclear how these participants parse through the information they receive from these sources during the verification process.
- Checking the information with family and friends. However, these participants acknowledged that this can be difficult when faced with contradictory information, or circular conversations.
- Looking at the comments on the post.
Importantly, participants expressed confidence in their own knowledge base, saying that they would not choose to verify information, preferring to rely on their instincts to identify misinformation.
Participants rarely reported misinformation online.
If participants chose not to scroll past news they found unreliable, their next response would be to try verifying or finding out additional information about these posts. However, they rarely suggested engaging in further actions after such verification, for example, reporting this content for being misinformation. Some said they might consider clicking on the ‘X’ button, or clicking on the ‘three dots’ at the side of a post and choosing ‘See fewer posts like this’ [Figure 2] as an immediate next step.
Figure 2. Screenshot of options to take action against a post on Facebook.
Some highlighted that they would trust future news from a source less if they found that it had shared or posted misinformation.
Case Illustrations 2 & 3: Contrasting approaches to misinformation
“Lillian” is a woman in her late 20s who tries to limit the time she spends online. She dislikes engaging with the news because she finds it very negative. While she deleted her account recently, she recalled seeing a lot of news pop up on her Facebook newsfeed, which she generally trusted. She never verified these posts and would simply scroll past them.
She once saw a news article posted by a famous celebrity and gossip magazine. She read through the comments which pointed out it was fake news. She stopped engaging with the post after that. But she didn’t take any further actions and she continued consuming content from the magazine because she enjoyed it and felt there was more ‘true’ news than ‘false’ news.
“Rahul” is a man in his mid-40s who enjoys using the internet regularly. He uses a variety of platforms on a daily basis. He is very engaged with the news - he regularly watches the news on TV and has news apps on his phone with tailored notifications for news articles that match his interests.
He is very aware of the extent of misinformation on social media. For example, he has seen posts that look like they are screenshots from a mainstream newspaper but the content has been edited in some way. He has seen photoshopped images of public figures, posts where people have spun or cherry-picked information, or propaganda around big political events such as the war in Ukraine. He thinks it is so easy to set up an account and post online - so it is hard to know what is legitimate or not.
As a result, he’s very careful about the news he consumes: he always does his research and checks his sources. If he sees genuine misinformation attracting a lot of interest, he says he would comment on the post with a link to genuine information with a paragraph warning people, and encourage them to block and report the page. He thinks platforms should be much more proactive in addressing misinformation - for example, by working with fact-checking websites like Snopes.
3.1.5 Barriers to responding to misinformation
Overall, participants indicated limited motivation to engage in further actions beyond scrolling past when encountering misinformation.
According to DSIT’s idealised user-journey (Figure 1), upon encountering and identifying misinformation, individuals should:
- Attempt to verify the source of the information.
- Explore whether this information has been shared by more trustworthy or reputable sources.
- Explore whether fact-checkers have addressed this information.
- Check their emotional response to this information - do they feel a certain way about it because it reinforces their existing biases?
- Search for support that provides clear guidance on how to establish the veracity of information from trusted sources if they do not feel equipped with the skills to take the previous actions.
While participants said they would consider the first two of these actions, they indicated indifference or a lack of motivation to follow through with them. Given this, the suggestions outlined in DSIT’s idealised user-journey may be too demanding for some, and for others, simply ignoring false information may feel like a more manageable response.
“I find that I’m just constantly scrolling. I never really stop to like read something […] I wouldn’t analyse something enough to have a thought of ‘is it fake news, is it not’ […] I’m not on social media a lot. If I was in an office job […] then maybe I might have a little more time to analyse things more. But because I don’t, I just feel like I might read [the news article] or I might not.” Medium digitally-included participant, 46 - 60 years
Our analysis suggested the following reasons underpin this indifference or lack of motivation:
- Cognitive or information overload from online content: Participants often scrolled through large amounts of online content, particularly on social media, without giving specific attention to what they saw. Participants also indicated they had limited cognitive capacity, time, and resources to process and assess the amount of information they saw regularly. This overload led to cognitive disengagement from misinformation and from available tools and resources.
- Deliberate ignorance: This cognitive overload, or desire to conserve cognitive resources, may also have intersected with participants’ deliberate ignorance bias. This is the active decision not to engage with or access information or knowledge due to perceptions around its negativity, emotional toll, or relevance to their lives, among other factors. This deliberate ignorance may also have related to people’s perceptions of the role of the internet, particularly social media, in their lives - seeing it more as a tool for entertainment than for news.
“I love a good google fact check […] but whenever I’m in my online TikTok surge for about an hour a day, it’s my lazy time, it’s my down time, and I cannot be bothered at that moment in time to open another app […] even if it’s a quick search.” Low digitally-included participant, 18 - 30 years
- Challenges navigating contradictory information: Verifying the truth amid conflicting reports from different sources, including mainstream media and social media platforms, posed a significant challenge. Participants struggled to discern credible sources, suggesting that even information presented as ‘factual’ could be cherry-picked and have biases or ideological influences. One participant spoke about the difficulty with just relying on ‘mainstream’ news while establishing the ‘truth’, pointing to the coverage of the war in Gaza and how coverage on social media has diverged remarkably from coverage in mainstream media. He reflected that, with social media, “everyone has become a journalist,” so he found it even harder to find factual journalism or understand what a credible source was. Similarly, another participant (high digital inclusion) spoke of misinformation not necessarily only being ‘fake’ or ‘made up’ but cherry picked information, misleading interpretations of events, “spinning” information, etc., which was harder to verify or do further research into.
- Social proof: Trust in news sources varied widely. Some trusted government posts while others distrusted mainstream media and preferred alternative sources for what they perceived as more truthful coverage. Participants also exhibited confirmation bias - that is, pre-existing notions about news sources, such as whether they generally found ‘mainstream’ news trustworthy, affected whether or not they would trust news published by a particular source.
- Lack of knowledge and/or digital skills: Participants, especially those less digitally-included, lacked the knowledge or confidence to take actions such as muting, blocking, or reporting misinformation. They often relied on others or sought guidance from places like Google for support.
- Self-efficacy bias: Participants displayed underconfidence and poor self-assessment of their digital skills and capabilities. Low and medium digitally-included participants in particular, even when capable, underestimated their ability to handle misinformation, citing a lack of tech-savviness as a barrier to taking proactive steps. For example, participants who assessed themselves as being dependent on others for support using the internet still managed to accurately identify the location of options to block or report on Facebook upon probing.
3.1.6 Facilitators to dealing with misinformation
While participants did not take further action after encountering news they did not trust, they nonetheless stated that they would not engage further or share this information. Participants were primarily motivated to take action if they felt a personal connection to the news, for example, if it impacted their children.
Factors that participants highlighted could improve their ability to take additional actions upon perceiving misinformation included:
- Family and friends as trusted messengers and source of support: Participants highlighted asking their children or others in their networks for support - for example, to learn digital skills like reporting or blocking individuals. Participants also were more likely to mistrust news sources highlighted as unreliable by their friends and family.
- Consistent and simple choice architecture: The process for reporting or taking other actions against fake news being simple, easy, and signposted well was seen as a facilitator to taking action. This included these being visually appealing and standing out in their social media news feeds.
3.2 Journey 2: Encountering hateful or abusive content
RQ1. How do individuals perceive the identified moments of need? Participants were often quite sure when they had (and hadn’t) spotted hateful or abusive content online. Participants noted “cruel” content in their feeds and discrimination online, such as racism, following specific social and political events. These were seen as distinct from other kinds of negative content, such as online rants, videos of people fighting, road rage, and celebrity tragedy stories.
RQ2. How do individuals respond to the identified moments of need? What actions, if any, do they take? Why and how? The key behavioural response to seeing hateful or abusive content was to scroll past. This was because participants did not feel personally affected by the content and did not want to linger on it. One participant viewed those who report content as “boring [people]” in need of a “hobby”. Others reported being hesitant to take action beyond hiding or unfollowing an account due to potential consequences.
If they do take any action, what are these? The key actions suggested were to unfollow pages or to hide particular kinds of content. Calling out or reporting content was a rarer response, and participants indicated they had negative experiences with the process, which would put them off doing this in the future.
Key barriers to participants taking action included:
- Lack of knowledge of available tools.
- Not being personally affected by the hate or abuse, so feeling somewhat indifferent.
- Negative view of reporting content, often out of concern about further involvement.
- Negative experience reporting content in the past.
Key facilitators were social or family networks and consistent choice architecture across social media platforms, such as clicking on the ‘three dots’ menu to see post-level actions.
In this journey, we asked participants about the following:
- Whether they knew how to manage the content they see online
- Their experiences managing the content they see online, including reporting abusive and hateful content
- The kinds of abuse and/or hateful content they had seen online, if they had seen this content
- The actions they have taken when they see abusive or hateful content online, if they had taken any actions, or their hypothetical actions when we showed them an example of online hateful or abusive content
We were guided by the following idealised user journey. However, as noted previously, these user journeys should be understood as aspirational models that presume a highly engaged, well-resourced, and motivated user, and are not intended to serve as definitive or universally applicable solutions.
Figure 3. Idealised user-journey upon encountering hateful or abusive content online
3.2.1 Profile of participants
The same set of participants were interviewed on both journeys 1 and 2 (see section [3.1.1]).
3.2.2 Managing content online
The key response to seeing hateful or abusive content online was to scroll past it.
Participants were often unsure how to manage content, accepted that they had no control over what their online friends post, or did not think about blocking content when seeing something hateful or abusive online. Participants said they would consult other sources, such as family, friends or Google, about how to manage unwanted online content.
Online choice architecture was an important determinant of online experiences.
Participants who had taken action to manage online content generally referred to clicking on the ‘three dots’ menu to access their options. When simple rules of thumb like this existed, it made it easier for some individuals - even those less digitally-included - to manage the content they saw. However, some participants still identified differences between the platforms, either in the sense that it was easier for them to manage content on one platform over another, or because they liked certain content control features which were only available on certain platforms, for example TikTok’s feature enabling users to proactively choose the kind of content they would like to see.
“I [know how to] block people […] only on [Facebook] messenger […] There are three dots and you press it and it says ‘block’. Something like that. I wouldn’t know how to do it on Facebook, I would ask someone to do it for me.” Medium digitally-included participant, 46 - 60 years
On the whole, the limited actions taken by participants to manage the content they saw online were reflective of the low digital inclusion of our sample.
3.2.3 Encounters with hateful or abusive content online
Participants reported wide-ranging experiences with hateful and abusive content.
Some participants said that they had seen negative content online which they described as distressing but not abusive, for example online rants, videos of people fighting, road rage, celebrity tragedy, and negative comments underneath posts. Others identified targeted harassment of individuals or groups such as sexist and racist content and sexual content sent to minors. Some participants (particularly participants identifying as black) reported seeing hateful content following significant political or social events, for example the Black Lives Matter protests.
3.2.4 Responses to hateful or abusive content online
Participants who had seen hateful or abusive content online reported responding in the following ways:
- Indifference and inaction: The main response to seeing hateful or abusive content online was to scroll past it, unless the content affected them personally.
“I just don’t tend to get involved with any of [those kinds of posts]. It’s not my scene to be honest with you. There’s enough going on in the world without all the ugliness that people are spouting about so I don’t get involved with it […] if there’s anything like that that’s being said on any of the sites, I just tend to scroll past it.” Low digitally-included participant, 46 - 60 years
- Managing content and friendships: Some participants chose to unfriend or unfollow hateful or abusive sources of online content or to hide content. Reasons for this included wanting to avoid reputational damage from association and wanting to avoid confrontation by muting or unfollowing instead of messaging someone about what they had posted. Participants had learned to do this from family and friends, search engines or by chance, by happening to click on the ‘three dots’ menu.
- Responding to content: Instances of actively confronting hateful or abusive content in the past were rare. Participants who had done this were often put off doing so again due to receiving backlash, for example for a spelling mistake. Others preferred private messages to avoid exposure, citing concerns about their profile being viewed by strangers.
“I wouldn’t respond […] There have been links sent to me regarding something that might be racist or prejudiced in some way and people comment underneath it and it just snowballs. You read through it and you just think ‘why would you ever comment’ because there’s some ignorant people out there and you’re never gonna change anybody’s mind […] You get involved in these squabbles online with some anonymous person, and just, what’s the point? It’s not going to change anything. It’s pointless.” Low digitally included participant, 31 - 45 years
- Reporting to platforms: Some participants had reported content to social media platforms in the past, for example by clicking on the ‘three dots’ menu. However, some participants felt discouraged from engaging in this because of platforms’ unclear and time-consuming reporting procedures.
- Reporting to the police: No participant in this research reported having done this. Two opinions emerged: some participants said that they would only involve the police if they encountered a serious threat to them personally, whereas others felt there was no point involving the police because they believed the police would not have the resources to intervene.
3.2.5 Barriers to responding to hateful and abusive content online
According to DSIT’s idealised user-journey (Figure 3), upon encountering hateful and abusive content, individuals should:
- Seek out resources from gov.uk and/or reputable civil society organisations.
- Determine whether the content is illegal.
- Block, mute, report, or unfollow the account disseminating the content.
While these steps reflect an ideal response, they may not always feel realistic or manageable for all users, and should be seen as illustrative guidance rather than definitive solutions.
Our participants revealed four reasons why they struggled to take these types of actions in response to hateful or abusive content online.
- Taking action viewed as unnecessary or even undesirable. Some participants indicated that they would not feel the need to mute or block an account if the content did not affect them directly. Others viewed taking action negatively, suggesting that people who report content were “boring” and in need of a hobby.
- Worry about backlash. Participants feared that they would themselves be attacked or were concerned about some form of further involvement. There was a fear that reporting content could have adverse consequences outside of digital platforms.
“Some of these people [who post abusive or hateful content] are very sick buggers […] you never know, they could come to your work.” Medium digitally-included participant, 61+ years
- Dissatisfaction after reporting content in the past. While not all participants in our sample had reported hateful or abusive content specifically, some had experience reporting content for different reasons, such as account hacking. In line with recent evidence that only a fraction of individuals are satisfied with the actions taken in response to reporting content to platforms,[footnote 16] participants of this research described:
- Feeling they were speaking to a robot and could not get to speak to a human.
- Not getting a response from the social media platform or knowing the outcome of their report.
- Disagreeing with the decision of the platform, where it had been communicated. Participants described their reports being rejected, usually because the reported content was not deemed to have violated community guidelines.
“I tend to not bother [reporting] now because the few times I’ve done it…it’s not quick. You need to put a fair bit of explanation in and jump through a few hoops to even find it sometimes, where you can report [content]. So by the time you’ve done it, someone else would have probably done it. […] I’m not the most tech-savvy person anyway so if it’s not immediately obvious you’d do it, I end up having to Google ‘how do I report this on Facebook.’ It doesn’t seem like it’s glaringly obvious where you can do that.” Low digitally-included participant, 46 - 60 years
“I just thought it was a generic reply [to my reporting] and they hadn’t even looked at it because, as I said, I don’t know if they are robots or what because to me it’s not very human […] I don’t have an awful lot of confidence in who is back-of-house with Facebook.” Medium digitally-included participant, 61+ years
- Lack of knowledge of available tools and actions. Some participants said that it was not obvious how to report content to online platforms. Less digitally-included participants indicated that it was challenging to apply instructions they had found, for example via Google, on the platform itself, suggesting that simply providing information may not be enough to enable them to perform new behaviours online.
According to participants, these experiences would dissuade them from reporting content in the future. Instances where harmless content had been taken down further reduced the legitimacy and perceived usefulness of reporting in participants’ eyes. However, it may be that an availability bias is at play, whereby participants distrusted social media reporting mechanisms based on a single, memorable negative experience, or the negative experience of someone they know.
3.2.6 Facilitators to dealing with hateful or abusive content online
Two main facilitators to dealing with hateful and abusive content emerged during interviews:
- Family and friends as a source of support. The main place participants said they would turn to or have turned to is their friends and family. In particular, participants said that they went to their children for this support, though some other participants mentioned friends, partners, or other (younger) relatives. This suggests that interventions which seek to improve the digital literacy skills of adults via their social and family networks could be particularly promising.
- Consistent choice architecture across platforms. Within the platforms themselves, participants who reported having taken some action in response to online abuse or hate commonly mentioned clicking on the ‘three dots’ menu to see their options. Because Facebook, Instagram and X all use this format for content control and reporting, prompting users to look at the ‘three dots’ can be a useful heuristic, particularly for less digitally-included participants.
3.3 Journey 3: Encountering risks to children’s online safety
RQ1. How do individuals perceive the identified moments of need? There are several ways participants became aware that their child may have been experiencing or had experienced something negative online. These included their children showing them potentially unsafe content they had encountered, parents hearing about the negative experiences of other children (e.g. through the media, local events, other parents or their child’s school), as well as communications from the school either to parents or through online safety lessons their child had at school.
RQ2. How do individuals respond to the identified moments of need? What actions, if any, do they take? Why and how? Participants responded to these moments of need by having discussions with their children, either on an ongoing or ad hoc basis. High digitally-included participants were more likely to have regular conversations across a range of topics compared to low digitally-included participants who generally were more reactive to specific experiences or triggers in their child[ren]’s lives. Parents undertook further research if they felt that their existing knowledge was insufficient. Parents also said they would report content on the platform on which their child had the negative experience. Particularly in cases of bullying, parents would speak to their child’s school.
Key barriers to parents taking action include:
- Complicated and inconsistent online choice architecture: Participants struggled to navigate the various security settings across different platforms.
- Limited knowledge and digital skills: Participants indicated that they did not know the range of online risks for their children or the tools and strategies available to protect them.
- Limited cognitive capacity and resources: Participants felt overwhelmed by the risks present to their children, and/or had limited time and capacity to engage proactively with available tools and resources.
- Challenging conversations, particularly with older children: These conversations could be difficult and a source of conflict with their children.
- Child[ren]’s desire for greater privacy and their ability to contravene restrictions: Older children sought greater privacy and independence in their internet use. With their typically advanced digital skills, they were also capable of getting around parental restrictions and content controls.
Participants who were low digitally-included particularly struggled with these barriers, indicating that parental digital inclusion was a factor affecting their ability to help to keep their children safe online.
Key facilitators to taking action included having access to support and resources from school, online websites and groups, and personal social networks, as well as parents having more trusting and open relationships with their children.
In this journey, we asked participants about the following:
- Their and their children’s online activity and how this is monitored
- Their conversations with their children on online safety
- What actions they had taken or had not taken when their children encountered risky content online
- Resources and information on children’s online safety
We were guided by the following idealised user journey. However, as noted previously, these user journeys should be understood as aspirational models that presume a highly engaged, well-resourced, and motivated user, and are not intended to serve as definitive or universally applicable solutions.
Figure 4. Idealised user-journey upon a child encountering risky content online.
3.3.1 Profile of participants and their children
There were significant differences between the online activities of participants and their children, with some low digitally-included participants not knowing what their children were doing online.
Participants in our sample reported using the internet for diverse reasons including online banking, shopping, and social media. One group of participants typically used at least one social media app such as Facebook, Instagram, Twitter (X), or Threads, as well as WhatsApp, while another group said they did not use social media themselves.
In contrast, both sets of participants described their children as heavy internet users who were “on the internet all the time”. Participants described children accessing social media (Instagram, TikTok, Snapchat, Discord, YouTube, and WhatsApp), online games and gaming platforms (PS5, Minecraft, Roblox, etc.), and platforms for school work.
Levels of digital inclusion seemed to affect participants’ knowledge of the platforms their children were using: some participants with low digital inclusion reported that they did not know how to use the services that their children used (for example, Instagram or Snapchat), or were not very aware of what exactly their children were doing online. In comparison, high digitally-included participants indicated that they knew more about the platforms their children were using, and some restricted their children’s ability to download or access new platforms without their permission or approval.
3.3.2 Monitoring and managing children’s online activity
Less digitally-included participants found it particularly hard to monitor and manage the content their children saw online.
Participants from across the sample recognised the need for some kind of protection for their children from online harms. Participants deployed a range of strategies to help to protect their children online:
- Monitoring their children’s privacy: Asking their children to make their accounts private, not use their first names, and share platform profiles with them (for example, YouTube and Instagram).
- Setting rules with their children: Setting rules such as only allowing their children to speak to people on video games that they know, or only speaking to friends and family on WhatsApp.
- Monitoring use: Monitoring the content their children saw or interacted with using a range of strategies:
- Only allowing younger children to use their devices in common areas and monitoring them. This included using shared devices, for example, the Smart TV to play video games or watch YouTube. This was noted as becoming harder for older children as they started to use devices more independently.
“You can’t sit next to them [older children] all the time, watching what they’re doing. You have to trust them to a degree.” High digitally-included parent to a teenager and young adult
- Checking their children’s browsing history, WhatsApp or emails. Some participants, however, did not know how to do this for every platform (for example, Instagram). Participants also said their children could be cagey about this; one spoke about their son potentially deleting things off his phone before letting them check.
- Restrictions on content: Participants from across the sample, but particularly high digitally-included participants, reported using content blockers at a router level via their internet provider to block criminal, violent, sexual, and gambling content. Other methods included using applications such as Google Family Link. Participants also reported relying on platforms’ own labelling, such as by using YouTube Kids, or only letting their children watch content that has age-appropriate labelling. However, participants experienced challenges with content filtering and restrictions. For example, one participant felt that TikTok and Instagram’s controls were inferior to Facebook’s because Facebook allows control over who you are friends with. Participants also noted their irritation with router-level controls because these also restrict the content parents can access, and highlighted that their children can figure out ways around them, for example, by accessing platforms using mobile data or a hotspot.
- Restrictions on downloads and purchases: Preventing children from downloading new apps or using new platforms without explicit parental permission. Some participants would check reviews and the app store description before allowing their children to use a platform. However, participants did experience challenges with this: one participant reported that she had previously had problems with her son spending money on her card without getting prior permission.
- Timing restrictions: Imposing night-time restrictions, either via household rules by confiscating children’s devices at a set time, or by blocking children’s devices after a certain time via the router.
Participants employed one or more of these strategies, but emphasised their importance for younger children more than older ones. Participants spoke of removing router restrictions, allowing children to independently access and download apps, or reducing checking or monitoring of devices once their children reached a certain age, such as 16 years, or started secondary school. This was because participants viewed their children, at these ages, as able to be trusted to know how to stay safe online independently.
High digitally-included participants seemed to use a wider range of strategies including router-level controls, app store restrictions, explicit rules on social media, blocking certain programmes, and parental controls on TV. Low digitally-included participants had less awareness and knowledge of these strategies, or had tried them and found them very difficult. One participant had even asked her older daughter to help set up controls for her younger daughter.
“I think they used to be quite straightforward and then as time goes by the security gets more. They even make it hard for you to secure your own devices. So with technology it gets a bit more confusing. I think that’s my problem. When I do try, it’s not just a simple turn on and off. Sometimes there’s more to it, there’s like three different things that you can kind of do and enable.” Low digitally-included parent to young adult and teenager
3.3.3 Conversations on online activity and safety
High digitally-included participants reported having a broader range of conversations with their children about online safety, primarily because these participants had a greater awareness of the threats children face online.
Participants described having conversations across a variety of topics with their children, such as:
- Their children’s behaviour online, including group behaviour online, and its real-world consequences: Participants were keen to ensure their children understood the real-life consequences of their online actions. Examples of this included:
- The risks of online shopping and financial scams.
- Not sharing inappropriate content of themselves with others online, being mindful of the content they share, its longevity online, and how it might be perceived in the future.
- Risks and experiences of online bullying within group chats or social media, including conversations where their children were involved in bullying others.
- Not sharing personal information online.
- Inappropriate content they might encounter online: Participants spoke to their children about avoiding inappropriate online content. Examples of this included:
- Not trusting or believing everything they read or see online.
- Inappropriate content online (particularly for younger children) - for example, violent content, swearing or obscenities, racist content or hate speech, sexual content, etc. YouTube and Twitter/X were particularly flagged as being rife with sexual and/or pornographic content.
- Misleading and toxic content, including encouragement of dangerous behaviours. For example, one participant spoke of a viral challenge encouraging suicide (the “blue whale” challenge) which he had had to warn his child about.
- Interacting with others online, including strangers: Participants highlighted that one of their key concerns was the risk of grooming and their children having conversations with strangers online. Examples of this included:
- Only interacting with real life friends online.
- Not speaking to strangers, and that individuals may fake their identities online.
- Risks of grooming, paedophilia, and inappropriate messages from strangers.
- Being careful on group chats, including discussions on body positivity, consent, etc.
Overall, participants with higher levels of digital inclusion appeared to have conversations with their children across a wider range of these topics compared to low digitally-included participants. They had a clearer sense of what ‘inappropriate’ content could look like. Low digitally-included participants, on the other hand, mostly spoke about the risks of interacting with strangers, grooming, and sexual or violent content.
“We do talk about [the internet] a lot. We’re a very open family. We talk about everything. So I will say to the girls if they’re watching YouTube and there’s anything on there… I tell them what to look out for. I tell them to come off it, pass the tablet to me […] We have a really good relationship in that respect. They know what’s not right. We talk about people trying to contact them online, we explain about grooming, in a child friendly way obviously, but they understand that if anyone was to ever contact them online, they need to highlight it to us immediately. They understand that it’s dangerous and it’s not good for them.” High digitally-included parent to pre-teens
The age of the children also mattered: participants with younger children generally spoke about a wider range of online risks than parents with older children, who reported having fewer of these conversations, both currently and in the past. We hypothesise that this may be because young people used fewer online platforms in the past; for example, one participant spoke about how his older, young adult son grew up with less internet influence than his younger, teenage son.
3.3.4 Format and approach of conversations
Conversations were more likely to happen on an ad hoc basis rather than regularly.
Participants described that conversations about online safety tended to happen in passing, for example, while having dinner or in the car. Conversations also took place when children showed their parents content, for example, TikTok or YouTube videos.
High digitally-included parents appeared to have more interactive discussions with their children about online harms. They described explaining the reasoning behind certain decisions based on their lived experience or existing knowledge, and answering questions their children might have. Some parents flagged that they would search for relevant information if they did not know how to answer these questions. We discuss these sources of information in section [3.3.8].
Participants spoke about the importance of creating a non-judgmental space and ensuring their children felt comfortable approaching them about these topics. Trust was consistently emphasised as being crucial. Participants hoped their children trusted them enough to have open conversations. Providing an explanation for their decisions was integral to this.
Some participants’ approach to these conversations differed based on their gender identity: some fathers described these conversations as being ‘casual’ and not “long, deep conversations”. They suggested that their child’s mother was more likely to be the ‘disciplinarian’ in the household – taking stricter measures, undertaking the bulk of these conversations, or being the first point of contact for their children.
Some less digitally-included parents again reported greater difficulty with these conversations. One participant described these as “shouty conversations”, for example, when her son had bought something on the PlayStation Store which she had not agreed to.
Case illustration 4: Parenting challenges
“Sara” is a single mother to a 13 year-old boy and a 10 year-old girl. She is low digitally-included, using the internet to access social media and shop online. Her children are online often, using a range of social media and gaming platforms.
Sara struggles to monitor and manage her children’s online behaviour. While certain rules are in place, she can find them hard to enforce because she arrives home from work after her children get back from school.
She has not performed her own research on implementing parental controls and lacks support from others.
Often, conversations between Sara and her children about internet use and online safety end in an argument. As a result of these arguments, Sara sometimes chooses to take sudden actions like disconnecting her children’s devices from the internet using an app on her phone. More productive conversations, such as on sharing explicit images, generally happen in passing, such as in the car or when her children show her something they have seen on social media.
“To be honest we don’t have the conversation in a calm way […] they’ll have a melt down and I’ll be at boiling point.” Low digitally-included parent to teenager and pre-teen
Overall, participants indicated that their children responded positively to these conversations, albeit with some degree of embarrassment (for example, on topics of sex or pornographic content), although they flagged that these conversations become harder as children grow up. Participants had more frequent conversations when their children were younger, with conversations tapering off as children became older (generally on starting secondary school), as participants trusted their children to bring up any negative experiences.
3.3.5 Triggers or motivations for conversations
High digitally-included participants reported regular conversations with their children on online safety, whereas low digitally-included participants described conversations being triggered more by specific dangers or incidents.
For both sets of participants, conversations with their children were also prompted by other events or circumstances. These included:
- Schools as trusted messengers: Participants noted that communication from schools, for example, warnings about paedophilia on TikTok, triggered conversations at home. Some schools were reported as teaching online safety to children, which then prompted conversations at home. However, not all participants engaged meaningfully with these school communications, as detailed in section [3.3.7].
- Salient negative events involving other children: Reading news on other children’s negative experiences online triggered conversations. For example, one parent had a conversation with their child[ren] after seeing news about a child meeting a stranger they had spoken to online. Conversations had also been triggered by local or school events, such as another child being bullied online or sharing inappropriate content online, with participants checking in with their own children about their experiences.
“Online safety…this is going to sound really bad but only when something has happened in the media, then I’d say something like ‘be careful’…” Medium digitally-included parent to young adult and teenagers
- Their child[ren]’s exposure to inappropriate content: Children showing their parents inappropriate content also triggered conversations, as did the inappropriate content their children were exposed to through friends. For example, one participant said she corrected misinformation, conspiracy theories, and inappropriate language her child was exposed to by her classmates who were less supervised by their parents when using the internet.
- Life events: Participants described conversations prompted by specific life events or milestones their children were experiencing, such as first using the internet or a phone or walking to school alone. Cohort experiences had similar effects, for example, one participant started having conversations about online safety when her daughter started secondary school and experienced peer pressure around getting a social media account.
3.3.6 Responses to children encountering risky content online
As well as conversations with their children, participants spoke about the following actions upon realising their child[ren] were encountering risky content:
- Conducting further research: Participants highlighted that they would seek out additional information if they felt their existing knowledge base was inadequate to address their child’s situation.
- Reporting content: Some participants had tried using the reporting function on platforms when their child encountered inappropriate content but, as highlighted in the previous two journeys, they generally had poor experiences with this, for example, feeling like their complaint was ignored or receiving an unsatisfactory response from the platform.
- Reviewing and adjusting settings: Participants spoke about checking their children’s privacy or platform settings after negative experiences, such as disabling chat functionality on a video game platform.
- Engaging with the school: Particularly in cases of bullying, participants spoke about raising the issue with their child[ren]’s school. However, participants had mixed experiences here, with some schools offering support while others did not.
Case illustration 5: Confident parenting
“Aoife” is a single mother to a daughter who has just turned 18. She uses the internet every day for work, as well as personal needs and entertainment. Her daughter, similarly, is online regularly, using the internet for school purposes, to connect with her friends via social media, and to consume content based on her interests.
Aoife has experience using various parental controls. She had to do a lot of independent research to find many of these parental controls - she feels like companies started advertising them only recently. Until her daughter was 9, they would use the internet together; until she was 12, her daughter had a kids-only tablet and Aoife would monitor her content. After 12, her daughter could access more platforms but there were still blockers in place that applied to the whole household until nighttime (restricting violent, criminal, gambling, and pornographic content). Only after her daughter turned 16 did Aoife remove these restrictions and allow her daughter on social media. Her daughter independently uses the internet now.
Aoife has been having conversations about the internet with her daughter for many years, right from when she started using the internet. They covered a range of topics: trolling, racism, negative comments, misinformation, body positivity, consent, relationships, sexual and other inappropriate content. Aoife described these as very “conversational”, involving a lot of back-and-forth between them. Her daughter would ask questions and raise concerns; if Aoife did not know the answer, she would do her own research and come back to her daughter later. While Aoife started the conversations when her daughter was younger, now it’s a mix, with her daughter approaching her with questions or when she’s had a bad experience. They share a very open relationship.
Even now that her daughter is older, they still continue to have conversations on these subjects. Aoife believes that conversations like this never really stop, they keep going.
“She feels safe with me, and confident. She talks openly, I talk openly. If we have any problem, we can solve it […] We talk about everything at home. As far as I know, I’m not a judgmental parent - that’s the reason she always comes to me and talks to me.” High digitally-included parent to a teenager
3.3.7 Barriers to responding to child[ren]’s encounters with risky online content
According to DSIT’s idealised user-journey, upon identifying that their child[ren] are encountering risky content online, individuals should:
- Identify trustworthy and reliable resources that will enable them to respond.
- Build a bank of these resources, and gain any additional skills or knowledge from other resources like school.
- Develop a range of methods to support their children, relying on their bank of resources as required.
While these steps reflect an ideal response, they may not always feel realistic or manageable for all users, and should be seen as illustrative guidance rather than definitive solutions.
Similar to the previous two journeys, participants highlighted a few key reasons behind their difficulty taking these types of actions upon their child encountering negative online content.
- Complicated and inconsistent online choice architecture: Participants, particularly those who were low digitally-included, found it difficult to navigate the various settings and features of different platforms, for example, setting up appropriate parental controls on apps. This challenge is heightened by the different processes involved across different apps - for example, while a parent may know how to change privacy settings on WhatsApp, they might struggle to replicate the same on Snapchat.
- Limited knowledge and digital skills: This was particularly true for low digitally-included participants, who indicated they did not know the range of online risks for their children or the tools and strategies available to protect them. This was often compounded by low self-efficacy, with participants expressing a lack of confidence in their ability to develop these skills.
- Limited cognitive capacity and resources: Participants highlighted that they felt overwhelmed by the risks present to their children, or had limited time and capacity to engage proactively with available tools and resources due to other factors such as work pressures or being a single parent. For example, participants spoke about having limited engagement with school communications due to lack of time. As a result, participants, particularly those who were low digitally-included, exhibited a more reactive mindset, responding as and when negative experiences occurred rather than taking a longer-term, preventative approach.
- Challenging conversations, particularly with older children: Participants explained that these conversations, particularly on more sensitive topics, can be difficult. Low digitally-included participants particularly struggled with this, suggesting that these conversations could produce conflict with their children.
- Child[ren]’s desire for greater privacy and ability to contravene restrictions: Implementing safeguards or restrictions could also be challenging with older children who sought greater privacy and independence in their internet use. They also typically had higher digital skills compared to parents and were capable of getting around parental restrictions and content controls.
“[The challenges are] time, it’s remembering. I work five days a week. So it’s a case of sitting down on an evening but you’ve got other stuff to do. So I suppose taking the time to sit and actually read [communication from the school] properly and that’s probably why I struggle to then follow the guidance because I’m trying to do it quickly. I get distracted eventually. That’s a hurdle straightaway […] But then going online and checking the safety […] I’ve just struggled to understand.” Low digitally-included parent to a young adult and teenager
Given that participants who were low digitally-included particularly struggled with these barriers, parental digital inclusion or ‘savviness’ appears to be a key factor affecting their ability to engage with children on their online safety.
3.3.8 Facilitators to responding to child[ren]’s encounters with risky online content
When dealing with threats to their child or children’s online safety, participants relied quite strongly on their existing knowledge and intuition. However, participants also drew on the following facilitators to help them take action:
- School as a trusted messenger: Some participants said schools provided online safety classes for both parents and children. Some also received frequent communications and reminders from schools on online safety.
- However, some participants (particularly high digitally-included ones) felt well-equipped and did not feel they needed these classes or communications, while others found the level of communication overwhelming or difficult to follow, especially while balancing a full-time job.
- Participants also felt that school guidance should complement, not replace, the foundational advice given at home.
- Research and online resources: Participants spoke about conducting their own research online, primarily with the help of search engines like Google. They also spoke of using trusted websites such as Trading Standards, government websites, news websites, and Citizens Advice. Other sources of online information included:
- Facebook groups for advice and to gauge average opinions on various online safety issues.
- Some suggested avoiding sources like TikTok and YouTube for reliable information generally, though they might consult specific experts on Medium or YouTube for particular topics.
- If necessary, participants would potentially turn to reliable news sources like Sky News or to government and charity websites, although this was less common.
- Personal knowledge and networks: Participants relied on their own lived experiences and common sense for guidance. They also had conversations with other parents, or friends and family, both informally and within structured groups (for example, WhatsApp groups, in-person meetings, church), which were described as valuable for sharing experiences and gaining insights. However, there is a risk of these personal networks and knowledge bases being biased or incomplete.
- Trusted relationships with their children: Participants with a trusting and open relationship with their children were more likely to learn about the risky content their children were encountering, and to be able to take steps to address the issue without upsetting their children or triggering a negative reaction from them. Trusted relationships also aided participants in enforcing parental controls and restrictions where needed.
Case illustration 6: Proactive parenting
“Michael” is a father with a teenage daughter. He uses the internet quite often for both work and personal entertainment. He’s very concerned about his daughter’s online safety, particularly with regards to online grooming and inappropriate sexual content that is easy to stumble onto online. He’s also wary of dangerous viral challenges targeting kids and risks of cyberbullying. He shares an open relationship with his daughter and she approaches him when she comes across this kind of content or interactions online.
As a result, he is quite proactive about his daughter’s online safety. When she approached him about wanting to download a video game that all her friends were playing, he took multiple steps to research the safety of the game. First, he checked the app store to see the description of the game, whether it was verified, and what permissions it required. He then read the reviews on the app store, which revealed that the game had a chat function. He separately did a Google search about the chat function and “straight away” found many parents reporting on forums that their children had received inappropriate messages via the video game chat. After a bit more research he found out how to disable the chat function, and then allowed his daughter to play the game without it.
“The only reason I found that out was through people reporting it on these parent forums. And then I spoke to other friends of mine who had kids of similar ages and said: ‘are you aware of this?’ and half the time they were like ‘no’ […] The following day they’d come back in and be like: ‘oh my god, my kids are receiving messages like this’. Yeah, scary.” High digitally-included parent to a teenager
3.4 Experiences with media literacy resources and support
RQ3. What are the barriers and facilitators to engaging with existing media literacy resources or support upon encountering these moments of need?
Familiarity with existing online resources for online safety was low, particularly with those provided by government entities, highlighting a general lack of awareness and engagement.
Feedback on available resources was mixed, with some participants appreciating tools like the SHARE checklist for their comprehensive guidance, while others expressed scepticism about their effectiveness and perceived bureaucratic barriers on government websites.
Barriers to engagement included reluctance to actively search for information, concerns over the complexity and formality of government platforms, and varying levels of confidence in existing knowledge among participants.
Facilitators to engagement included a preference for user-friendly and visually appealing resources, active promotion through trusted channels, and integration into educational and community settings to enhance accessibility and credibility.
Across all our interviews (covering all three journeys), we showed some participants a series of existing online resources (both government and non-government) for identifying and taking action when experiencing each moment of need (misinformation, hateful/abusive content, and children’s online safety). Engagement with and awareness of these resources was low – indeed, participants in our sample were not aware that these resources existed, particularly those provided by the government.
“I didn’t even know there was government advice on reporting content.” Low digitally-included participant, 46 - 60 years
We received mixed feedback on the resources shared by DSIT.
Some resources were viewed positively (for example, the SHARE checklist or DSIT’s support page for hateful or abusive content). Participants appreciated that there was comprehensive guidance and support from the government, seeing these resources as useful references to turn to when needed. Participants also valued resources that they could use to support their children.
However, participants also expressed negative feedback. This is summarised below as key barriers to individuals’ engagement with these resources.
3.4.1 Barriers to engagement
The key barrier participants highlighted was that they were unlikely to search for this information themselves, especially if they did not know what to look for or where to find it.
Further, some participants expressed scepticism or hesitation about these resources. For example, one participant was not convinced about the usefulness of the SHARE checklist, as it seemed ineffective to target individuals’ behaviour rather than assigning responsibility for addressing misinformation to social media companies directly. This aligns with evidence that individuals prefer more upstream interventions such as downranking content, early moderation, deplatforming, etc.[footnote 17]
Participants also pointed out that they associate gov.uk websites and layouts with HMRC, work, and benefits, which made these resources off-putting to engage with. For example, one participant stated that the ‘formality’ of a gov.uk page made them feel that using a resource on this webpage to report content might lead to the involvement of the police or an extensive process, which felt like “too much” for a Facebook post. Similarly, another participant suggested that they would not go to the NSPCC (which they associated with reporting child abuse) because that would feel “extreme” and inappropriate for their context.
A group of participants stated that they would not engage with these resources, primarily because they felt that they did not need to and could rely on their instincts and knowledge. This may well be the case for high digitally-included participants, but could also be indicative of overconfidence. Moreover, low digitally-included participants (who made up a significant proportion of our sample) may not have this knowledge, and therefore could potentially benefit from this information more than they believed.
3.4.2 Improvements to increase engagement and use
Participants wanted resources that were usable, navigable, and easy to search and filter through. Visually appealing and easily digestible text were particularly valued.
Participants emphasised that this information needed to be promoted to people rather than be passively available for those who sought it out. It should come from trusted sources with reputable branding.
To achieve a greater reach (particularly among a less digitally-included audience), participants suggested that these resources be more actively signposted as well as circulated offline. Options for getting the information to them include:
- Via post or email from trusted sources like the government.
- Adverts on TV, radio, and social media, as well as in public spaces like post offices or bus stops.
- Links in key locations on social media pages, such as when new users create an account, log-in pages, help pages, and as pop-ups on individuals’ news feeds.
- Via online or in person courses. Face-to-face support was highlighted as an important need for older parents who are less digitally connected or literate.
- Using different formats, like short-form videos.
- Via schools.
Participants suggested that trusted figures could also be useful for disseminating resources, for example, Martin Lewis, Good Morning Britain, or Sky News.
Conclusion
This report is a key step to understanding how people perceive specific moments when they may need media literacy support, and what they do when they encounter these moments. Across three moments of need – when people encounter misinformation online, when people encounter hateful or abusive content online, or when parents encounter challenges relating to keeping their children safe online – we have identified key barriers and facilitators to taking actions and engaging with media literacy support.
A key implication of this research is that adults and children are regularly exposed to an overwhelming volume of online content and information - both benign and harmful – that hinders their ability to proactively take action to protect and educate themselves. Upstream, platform-level interventions such as downranking harmful content to reduce its circulation may be most effective to prevent harm - and research shows that individuals strongly support these measures as well.[footnote 18] This should be done in conjunction with downstream interventions, particularly those empowering people to identify when a moment of need requires a more active rather than passive response, and equipping them with the skills to take these steps. Key recommendations which emerge from this research include:
- The need to develop resources that address the identified skills gap for parents with low levels of digital inclusion in what to do when encountering challenges to their children’s online safety. This research also indicates the need for further research into identifying skills and capabilities gaps for other priority groups.
- Leveraging social networks such as friends and family both in media literacy interventions as well as in messaging and communications framing. Many participants mentioned the role that social networks play in getting support and information about media literacy, which should be leveraged by organisations who work with media literacy both in programme structure (for example, through peer champions or referrals) and in messaging (for example, campaigns that encourage people to talk about media literacy with their relatives who may be less media literate).
- Promoting the use of consistent choice architecture across online platforms for key media literacy actions, such as reporting inappropriate content or misinformation - for example, having a consistent ‘three dots’ menu. Making it easier to report content - and providing feedback loops that let those who have reported content know what happened - is also likely to encourage reporting.
As the online landscape continues to shift and evolve, particularly with the increased popularity and usage of new technologies such as Generative AI, it is crucial that behaviourally-informed efforts are taken to ensure people’s media literacy skills, knowledge, and capabilities grow as well.
Appendix
Sampling Matrix
Journeys 1 & 2 (n = 23)
| Primary sampling criteria | | | Target N |
| --- | --- | --- | --- |
| Financial Wellbeing | Socioeconomic status | Managerial, administrative and professional occupations | 6 |
| | | Intermediate occupations | 11 |
| | | Routine and manual occupations | 4 |
| | | Student | 2 |
| Age | Age ranges | 18-30 | 5 |
| | | 31-45 | 7 |
| | | 46-60 | 6 |
| | | 61+ | 4 |
| | | Undisclosed | 1 |
| Digital inclusion (proxy measure for digital literacy) | Determined using Ofcom’s measure of digital inclusion[footnote 19] | High digital inclusion (10+ activities) | 3 |
| | | Medium digital inclusion[footnote 20] (5-9 activities) | 6 |
| | | Low digital inclusion (1-4 activities) | 14 |
| Disability | Both physical disability and/or neurodivergence | Disabled | 2 |

| Secondary sampling criteria | | | Target N |
| --- | --- | --- | --- |
| Ethnicity | Using ONS standard question | Non-white | 7 |
| Gender identity | Using ONS standard measure | Men | 10 |
| | | Women | 13 |
| Region | Standard geographic regions in the UK | North-West | 19 |
| | | North-East | 1 |
| | | South-West | 1 |
| | | West Midlands | 1 |
| | | East Midlands | 1 |
| Rural / Urban | | Urban | 5 |
| | | Suburban | 8 |
| | | Rural | 8 |
| | | Undisclosed | 2 |
Journey 3 (n = 10)
| Primary sampling criteria | | | Target N |
| --- | --- | --- | --- |
| Parents | Based on age of at least 1 child | Of young children (8-12 years) | 3 |
| | | Of teenagers (13-18 years) | 5 |
| | | Of young adults (19-21 years) | 2 |
| Financial Wellbeing | Socioeconomic status | Managerial, administrative and professional occupations | 3 |
| | | Intermediate occupations | 4 |
| | | Routine and manual occupations | 3 |
| Digital inclusion (proxy measure for digital literacy) | Determined using Ofcom’s measure of digital inclusion[footnote 21] | High digital inclusion (10+ activities) | 4 |
| | | Medium digital inclusion[footnote 22] (5-9 activities) | 4 |
| | | Low digital inclusion (1-4 activities) | 2 |

| Secondary sampling criteria | | | Target N |
| --- | --- | --- | --- |
| Ethnicity | Using ONS standard question | Non-white | 6 |
| Gender identity | Using ONS standard measure | Men | 3 |
| | | Women | 7 |
| Region | Standard geographic regions in the UK or Rural / Urban | South-West | 1 |
| | | South-East | 1 |
| | | West Midlands | 2 |
| | | East England | 2 |
| | | London | 4 |
| Rural / Urban | | Urban | 7 |
| | | Suburban | 1 |
| | | Rural | 2 |
References
- BIT, Yeoman, F., & Yates, S. (2023). Media literacy uptake among ‘hard to reach’ citizens. Department for Science, Innovation and Technology. https://assets.publishing.service.gov.uk/media/6511619206e1ca000d616116/media_literacy_uptake_among_hard_to_reach_citizens.pdf
- BIT (2024). EAST: Four simple ways to apply behavioural insights. Available at: https://www.bi.team/publications/east-four-simple-ways-to-apply-behavioural-insights/.
- DCMS (2021). Online media literacy strategy. Available at: https://assets.publishing.service.gov.uk/media/60f6a632d3bf7f56867df4e1/DCMS_Media_Literacy_Report_Roll_Out_Accessible_PDF.pdf
- Enock, F. E., Bright, J., Stevens, F., Johansson, P., & Margetts, H. Z. (2024, May). How do people protect themselves against online misinformation? Attitudes, experiences and uptake of interventions amongst the UK adult population. The Alan Turing Institute.
- Ofcom. (2023). Children and parents: Media use and attitudes. https://www.ofcom.org.uk/media-use-and-attitudes/media-habits-children/children-and-parents-media-use-and-attitudes-report-2024/
1. BIT, Yeoman, F., & Yates, S. (2023). Media literacy uptake among ‘hard to reach’ citizens. Department for Science, Innovation and Technology. ↩
2. DCMS (2021). Online media literacy strategy. ↩
3. BIT, Yeoman, F., & Yates, S. (2023). Media literacy uptake among ‘hard to reach’ citizens. Department for Science, Innovation and Technology. ↩
4. This aligns with the concept of ‘timely moments’ within behavioural science – targeting people at these specific moments is more likely to encourage behaviour change. ↩
5. This was determined using Ofcom’s measure of digital inclusion. We used the number of activities people do online, such as online banking, searching for information, and watching TV programmes/films/content online, as a proxy measure for their digital literacy, given that many measures of digital literacy and inclusion often include a measure of everyday digital skills. ↩
6. Enock, F. E., Bright, J., Stevens, F., Johansson, P., & Margetts, H. Z. (2024, May). How do people protect themselves against online misinformation? Attitudes, experiences and uptake of interventions amongst the UK adult population. The Alan Turing Institute; Behavioural Insights Team. (2024, October 1). Platform-level interventions to reduce the spread of misinformation & hateful content online. ↩
7. DCMS (2021). Online media literacy strategy. ↩
8. DCMS (2021). Online media literacy strategy. ↩
9. Ofcom. (2023). Children and parents: Media use and attitudes. ↩
10. DCMS (2021). Online media literacy strategy. ↩
11. BIT, Yeoman, F., & Yates, S. (2023). Media literacy uptake among ‘hard to reach’ citizens. Department for Science, Innovation and Technology. ↩
12. Ibid. ↩
13. BIT (2024). EAST: Four simple ways to apply behavioural insights. ↩
14. The person could be aware that they encountered the moment of need (perceived) or unaware that they encountered it (not perceived). ↩
15. This was determined using Ofcom’s measure of digital inclusion. We used the number of activities people do online, such as online banking, searching for information, and watching TV programmes/films/content online, as a proxy measure for their digital literacy, given that many measures of digital literacy and inclusion often include a measure of everyday digital skills. Digital literacy refers to the practical skills and competencies needed to use digital devices and navigate online content effectively. Both digital inclusion and digital literacy are closely related to media literacy, which we define as the public’s ability to navigate online environments safely and securely. ↩
16. Enock, F. E., Bright, J., Stevens, F., Johansson, P., & Margetts, H. Z. (2024, May). How do people protect themselves against online misinformation? Attitudes, experiences and uptake of interventions amongst the UK adult population. The Alan Turing Institute. ↩
17. Enock, F. E., Bright, J., Stevens, F., Johansson, P., & Margetts, H. Z. (2024, May). How do people protect themselves against online misinformation? Attitudes, experiences and uptake of interventions amongst the UK adult population. The Alan Turing Institute. ↩
18. Enock, F. E., Bright, J., Stevens, F., Johansson, P., & Margetts, H. Z. (2024, May). How do people protect themselves against online misinformation? Attitudes, experiences and uptake of interventions amongst the UK adult population. The Alan Turing Institute; Behavioural Insights Team. (2024, October 1). Platform-level interventions to reduce the spread of misinformation & hateful content online. https://www.bi.team/publications/platform-level-interventions-to-reduce-the-spread-of-misinformation-hateful-content-online/ ↩
19. We used the number of activities people do online as a proxy measure for their digital literacy, given that many measures of digital literacy and inclusion often include a measure of everyday digital skills. ↩
20. It is likely that some of our participants who reported performing only 1 - 4 activities were underreporting and are more likely to be medium digitally-included. This is because the question was framed as “Which of the following activities do you regularly do online? Please select all that apply.” Some participants did not engage in particular activities regularly but had engaged in the past or engaged more infrequently (for example, some had deleted their social media accounts and therefore no longer ‘regularly’ used them). ↩
21. We used the number of activities people do online as a proxy measure for their digital literacy, given the paucity of measures for the latter. ↩
22. It is likely that some of our participants who reported performing only 1 - 4 activities were underreporting and are more likely to be medium digitally-included. This is because the question was framed as “Which of the following activities do you regularly do online? Please select all that apply.” Some participants did not engage in particular activities regularly but had engaged in the past or engaged more infrequently (for example, some had deleted their social media accounts and therefore no longer ‘regularly’ used them). ↩