Community and Engagement Survey 2025/26: Pilot Report
Published 13 March 2026
Applies to England
1. Introduction and Background
1.1 Overview of survey aims and objectives
Verian was commissioned to deliver the 2025/26 Community and Engagement Survey (CES) by the Department for Culture, Media and Sport (DCMS) in partnership with the Ministry of Housing, Communities and Local Government (MHCLG). The Community and Engagement Survey aims to explore how people across England engage with their community and with culture, media and live sport, and allows respondents’ views to be heard on a range of issues affecting their local area. The Community and Engagement Survey is a combination of two of DCMS’s previous social surveys: the Participation Survey and the Community Life Survey. This merger was in response to a public consultation about DCMS’s social surveys run in 2024.
The content for the Community and Engagement Survey has been kept broadly similar to that of the previous surveys, but combining them will deliver efficiencies and more coherent outputs. This approach also provides an opportunity to ask a small number of survey questions of the full sample across both question sets. The findings will help Government, charities and other public sector organisations understand how people engage with culture, media and a range of activities, as well as understand more about communities across England. It will help inform and shape policy and decision making. For MHCLG, the data will specifically support the evaluation and implementation of local growth funds, inform understanding of devolution, and support analysis for communities and cohesion policy development.
The Community and Engagement Survey aims to deliver a nationally representative sample of adults aged 16 and over across England. The data collection model is based on ABOS (Address-Based Online Surveying), a type of ‘push-to-web’ survey method. Respondents take part online or by completing a paper questionnaire and are randomly allocated to complete the Community pathway or Engagement pathway through the questionnaire.
In the 2025/26 survey, DCMS and MHCLG required that the Community pathway produces data that is nationally representative while also delivering reliable statistics at the lower-tier local authority level (approximately 175,000 respondents). Similarly, DCMS required that the Engagement pathway should provide national representation and produce reliable estimates at an ITL2 area level (around 33,000 respondents, including approximately 16,500 from the Participation Survey conducted between April and September 2025).
The survey development and piloting work took place from August to September 2025, with the main stage of fieldwork beginning in October 2025 and continuing to the end of March 2026.
1.2 Introduction to report
This report summarises the development of the Community and Engagement Survey questionnaire and the completion of the pilot study. It is organised into two sections, covering the following areas:
- Questionnaire development, including discussion of content, cognitive and usability testing, and finalisation of the questionnaire.
- Survey piloting, including discussion of the pilot survey sampling design, methodology, fieldwork, response rates, survey length, device usage and data quality assessment.
2. Questionnaire Development
This section provides an overview of the questionnaire content and questionnaire development process, including merging the legacy questionnaires and designing new questions.
2.1 Questionnaire review and development
2.1.1 Merging the Community Life and Participation questionnaires
The new Community and Engagement Survey online questionnaire was designed to take approximately 30 minutes to complete and largely consists of existing questions taken from the Community Life Survey and the Participation Survey. To combine these two questionnaires while maintaining a reasonable length, two distinct pathways were created within the online version. The Community pathway comprised questions from the Community Life Survey (shown in purple in Figure 2.1) while the Engagement pathway comprised questions from the Participation Survey (shown in red). Each pathway consisted of a set of questions asked of all participants routed to that pathway, along with two or three additional modules asked of a subset of participants. Sample numbers included in Figure 2.1 are target numbers of responses for each module of the survey. The 16,500 interviews for the Engagement pathway will be combined with the same number of interviews from the last two quarters of the Participation Survey (April to September 2025, prior to the launch of the merged survey) to create a sample large enough for the required analysis.
To avoid duplication, overlapping demographic questions were consolidated into two core sets of questions, one asked at the beginning of the survey and the other at the end. These questions were administered to all respondents (shown in blue).
Figure 2.1: The Community and Engagement Survey design
Because this modular approach could not be replicated in a paper format, two separate 24-page paper versions were produced: one featuring Community pathway questions and the other featuring Engagement pathway questions. Both paper versions included the same core questions as the online survey, but modular questions asked of a subset of participants were omitted due to the complexity and cost of implementing a modular design in a paper questionnaire.
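To make the structure described above concrete, the sketch below represents the pathway and module design as a simple configuration. It is purely illustrative: the module names and counts are placeholders rather than the survey’s actual module labels.

```python
# Illustrative sketch of the merged survey structure (placeholder module
# names; the real module labels are not reproduced here).
SURVEY_STRUCTURE = {
    "core_start": "demographics_part_1",   # asked of all respondents first
    "pathways": {
        "Community": {                     # Community Life Survey questions
            "asked_of_all": "community_core",
            "subset_modules": ["C1", "C2", "C3"],
        },
        "Engagement": {                    # Participation Survey questions
            "asked_of_all": "engagement_core",
            "subset_modules": ["E1", "E2"],
        },
    },
    "core_end": "demographics_part_2",     # asked of all respondents last
}
```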
2.1.2 Questionnaire changes
In addition to merging the questionnaires it was also necessary to:
- Review the order and placement of trend questions to minimise the risk of order effects, particularly as there was a desire to include a few Engagement pathway questions within the Community pathway and vice versa.
- Delete some questions or move some questions into a subset module.
The questionnaire for 2025/26 was developed collaboratively to meet the needs and interests of both DCMS and MHCLG, who partnered with DCMS for this year’s survey. As a result of this, and of questionnaire space constraints, questions on the following topics were added or changed for the Community and Engagement Survey 2025/26:
- Social connectedness, including questions on types of connection, who connections are with, and how they are maintained.
- Emergency preparedness: the extent to which a respondent believes their neighbourhood would pull together in an emergency or disaster.
- Strength of belonging to Britain: the return of a question from previous waves of the Community Life Survey. A further two questions were added for quarter 2 of fieldwork to understand reasons for feeling a sense of belonging or not belonging to Britain.
- Sources of civic engagement: sources of information about ways to engage in civic participation, consultation and/or activism.
- Local facilities and amenities, including the availability and use of additional facilities related to DCMS sectors, as well as satisfaction with the wider list of facilities visited.
- The three most concerning issues affecting respondents’ local area.
- Arts and cultural facilities that young children have been taken to by their parents.
- Awareness of uses of archives, either in person or online.
- Reasons for not being interested in visiting heritage sites or places of historic interest.
- Horse racing gambling: an adjustment in the recording of respondents’ engagement with betting on horse racing.
- Changes in the way educational qualifications are recorded.
The following question topics previously included in the Community Life or the Participation Survey were removed for the Community and Engagement Survey 2025/26:
- Frequency of social interactions in person or by digital modes of contact.
- Local area measures on strength of belonging, extent that people pull together, and trust in people.
- Groups of people engaged with whilst in volunteering roles in the last 12 months.
- Past roles providing unpaid help to groups, clubs or organisations (formal volunteering), and reasons for stopping these.
- Availability of local sport to watch and frequency of doing so.
- Working status of respondent’s partner.
- Frequency of internet use and ownership of different smart devices.
- Strength of belonging to the UK.
Figure 2.2 summarises the content of each module of the Community and Engagement Survey.
Figure 2.2: The Community and Engagement Survey topics
2.1.3 The Community and Engagement Survey content
To help guide the development of the new questionnaire, Verian created a questionnaire map to document the content of the survey, the geographic reporting level of each question, which questions are core questions, which are allocated to sub-samples, and any changes to existing questions.
Once the objectives and survey structure were agreed, Verian produced a draft questionnaire for review by DCMS and MHCLG. This began an iterative process of review and redrafting, carried out in close partnership with DCMS and MHCLG, until we were confident the questionnaire met the objectives of the survey.
2.1.4 Development of new questions and redesign of existing questions
A technical review of the proposed new questions that were less clearly defined was carried out using Verian’s bespoke Questionnaire Appraisal Framework (QAF). This framework offers a systematic and robust method for evaluating questions against criteria such as clarity, implicit assumptions, task difficulty, sensitivity, and risk of bias. It also helps pre-empt potential mode issues between online and paper before cognitive testing.
Questions were then identified for the first stage of cognitive testing in August-September 2025, which included both new questions and variations on existing questions (section 2.2.1). A second stage of cognitive testing was conducted in November 2025, focusing on questions exploring the reasons for feeling a sense of belonging or not belonging to Britain (section 2.3).
2.2 Cognitive testing: Stage 1
Cognitive testing describes a process of testing survey questions to ensure they work as intended before they are included in the main stage questionnaire. Interviews are carried out by members of the research team who ask participants the new questions and then spend some time discussing their answers. The researcher probes to check how easily the participant can understand the question and to explore how they decide their answer. This provides valuable insight into how participants understand, process and respond to survey questions, and can help refine question wording and response lists.
2.2.1 Stage 1: Questions on multiple topics
The first stage of cognitive testing was conducted over two rounds, in August and September 2025. The detailed findings from each of these phases were reported back to DCMS separately. After each round, findings were discussed and changes were implemented. Changes from the first round were then re-tested during the second round of testing.
2.2.2 Stage 1: Participant profile
Participants for cognitive testing were recruited using quotas based on age, gender, ethnicity, and education level to ensure broad representation of the general population (Table 2.1). Additional quotas were applied for factors such as the presence of children, whether participants had played online video games in the past 12 months, and whether they had placed an online horse racing bet during the same period. These additional criteria ensured that low-prevalence questions could be tested with a sufficient number of respondents.
All participants for each stage of testing were recruited by Acumen Fieldwork, a supplier of fieldwork recruitment services working on behalf of Verian. Interviews were conducted online.
Table 2.1: Profile of participants for cognitive testing stage 1
| Key factor/demographic | Quota | Round 1 Achieved | Round 2 Achieved |
|---|---|---|---|
| Total | | 8 | 6 |
| Age | 16-34 years old | 2 | 2 |
| | 35-54 years old | 4 | 3 |
| | 55+ years old | 2 | 1 |
| Gender | Male | 4 | 3 |
| | Female | 4 | 3 |
| Educational attainment | Did not complete GCSEs or equivalent | - | - |
| | GCSEs or equivalent | 2 | 2 |
| | AS/A-Levels or equivalent | 1 | 2 |
| | Vocational Qualification BTEC/GNVQ | 2 | - |
| | Undergraduate Degree | 1 | 2 |
| | Masters/PhD | 2 | - |
| Ethnicity | Ethnic minority | 3 | 2 |
| | Mixed ethnicity | 1 | 2 |
| | White British | 4 | 2 |
| Age of children | 0-4 | 3 | 2 |
| | 5-10 | 2 | 1 |
| | 11-15 | 1 | 1 |
| | No children under 16 | 4 | 1 |
| Video gaming (In the last 12 months, have you played any video games on a smartphone or tablet?) | Yes | 4 | 5 |
| | No | 4 | 1 |
| Gambling (In the last 12 months, have you placed a bet on any horse racing events?) | Yes | 3 | 3 |
| | No | 5 | 3 |
2.2.3 Key findings from the cognitive testing stage 1
Overall, 46 questions were included for testing in round one of the cognitive testing and 41 questions were included in round two, with six questions removed a third of the way through testing to prioritise questions that had received less coverage due to time constraints.
Key findings from rounds 1 and 2 of the cognitive testing are summarised below, with further detail provided in Annex A.
Finding 1: Long lists and multi-layered questions created cognitive burden
Respondents in both rounds struggled with long response lists or lists of statements. This often led to partial reading, skipping options, or selecting only one code. Long answer lists and multi-layered questions increased cognitive burden and confusion.
Breaking complex concepts into simpler, separate questions improved clarity.
Finding 2: Use of ambiguous or hypothetical language led to inconsistent interpretation
Across both rounds, respondents typically struggled with questions that were broad or hypothetical (e.g., EmPrepRec about helping in emergencies, CONNECT grid about “people in your life”). Participants often answered based on general perceptions rather than specific experiences. For hypothetical scenarios, respondents often defaulted to moral norms rather than practical considerations.
Finding 3: Examples and consistent wording improved comprehension
Using clear examples and consistent phrasing across related questions improved understanding. Harmonising codes (e.g., spouse/partner wording) across questions and using consistent phrasing for similar concepts was recommended to reduce cognitive burden and keep the questionnaire cohesive. Using plain language, avoiding jargon (e.g., DLC, loot boxes) or explaining it in brackets also improved comprehension.
Most introductory texts were generally understood in both rounds, though minor refinements were suggested for conciseness.
Finding 4: Emotional sensitivity in belonging to Britain questions
Questions about belonging to Britain were generally understood, but for some participants, particularly those from ethnic minority backgrounds, they evoked an emotional response. It was agreed to do further cognitive testing on these specific questions given their potential sensitivity (section 2.3).
Finding 5: Social desirability bias may have influenced reported behaviour
Across both rounds, questions about helping others or socialising appeared to elicit responses shaped by perceived social expectations rather than actual behaviour.
Further findings from cognitive testing stage 1 are outlined in Annex A.
2.3 Cognitive testing: Stage 2
2.3.1 Stage 2: Questions on reasons for feeling a sense of belonging or not belonging to Britain
The second stage of cognitive testing took place in November 2025 and consisted of two rounds which focused on the strength of belonging to Britain questions. Participants were recruited from the pool of respondents to the first few weeks of the quarter 1 main stage of fieldwork based on their answers to the existing question on strength of belonging to Britain. Table 2.2 shows the profile of participants for stage 2 cognitive testing. Round one interviews took place on the 10th and 11th of November 2025, and round two interviews took place on the 24th and 25th of November 2025. Following the testing, questions were added to the survey for quarter 2, beginning in January 2026.
Further findings from cognitive testing stage 2 are outlined in Annex C.
2.3.2 Stage 2: Participant profile
Table 2.2: Profile of participants for cognitive testing stage 2
| Key factor/demographic | Quota | Round 1 | Round 2 |
|---|---|---|---|
| Sense of belonging | Feel they belong | 7 | 2 |
| | Don’t feel they belong | 5 | 6 |
| Ethnicity | White British | 1 | 3 |
| | White non-British | 2 | 1 |
| | Asian background | 2 | 0 |
| | Black-heritage background | 1 | 1 |
| | Mixed or multiple backgrounds | 1 | 1 |
| | Other background (Arab, any other) | - | 2 |
| Gender | Male | 4 | 3 |
| | Female | 3 | 5 |
| Age | 16-34 | 1 | 3 |
| | 35-54 | 5 | 2 |
| | 55+ | 1 | 3 |
| Citizenship | UK | 4 | 4 |
| | UK dual | 1 | 2 |
| | Citizens of countries other than UK | 2 | 2 |
| Religion | Christianity | 3 | 3 |
| | Any other than Christianity | 2 | 1 |
| | No religion | 2 | 4 |
NOTE: The sample for Round 2 focused on recruiting participants who felt that they didn’t have a sense of belonging to Britain, as Round 1 revealed that these follow-up questions were more difficult and required more evidence.
2.3.3 Key findings from Cognitive Testing Stage 2
Round 1:
Finding 1: Some respondents’ answers changed from “not very strongly” (code 3) to “fairly strongly” (code 2) when asked how strongly they felt they belonged to Britain
Three participants changed their answer from their original response [code 3] in the survey and their recruitment screener, to code 2 during the cognitive testing. Reasons for switching included feeling “somewhere between the two” or not feeling strongly either way and wanting a more neutral option.
- “I wouldn’t say ‘not very strongly’, but ‘fairly strongly’ feels a bit much given my personal background.”
When probed on their reasons for changing, those participants explained that they had originally wavered between the two codes, or that they felt differently now than they had before.
Finding 2: Strength of belonging to Britain derived from two distinct categories: legal/official status vs. sentimental feelings
An overarching theme that emerged during the testing was that there were two interpretations of belonging to Britain, or feeling a sense of belonging to Britain: a legal or official status of citizenship (i.e., being born in the country or having a British passport) versus a sentimental feeling of belonging (i.e., having lived here for a long time and/or built a life here).
- “When you live for so long in a country you automatically belong to that country.”
- “I’ve spent my entire independent adult life living in the UK. So, like all of my proper grown-up jobs have been in Britain and like every flat I’ve ever had.”
It is essential to capture both the legal/official and sentimental responses in the final answer codes.
Finding 3: Question 2 (reasons for feeling a sense of belonging to Britain) & Question 3 (reasons for not feeling a sense of belonging) are thought about quite differently
Spontaneous responses given at Question 2 and Question 3 differed quite broadly. Reasons for selecting code 1 or 2 at Question 1 (belong ‘fairly strongly’ or ‘very strongly’ to Britain) included things like being born in Britain or having lived in Britain for years. Reasons for selecting codes 3 and 4 at Question 1 (belonging ‘not very strongly’ and ‘not at all strongly’) focused more on political views or opinions on the current state of the country.
British history, heritage, and politics were not spontaneously mentioned by those who felt that they belonged to Britain, suggesting that these factors were less relevant for those who felt they belonged than for those who did not.
Finding 4: Question 4 (And do any of the following contribute to you not feeling a sense of belonging to Britain?) did not work particularly well as a follow up to Question 1 (How strongly do you feel you belong to Britain?)
Respondents struggled to give responses to Question 4 that matched their reasons for saying they did not feel they belonged to Britain at Question 1. They felt that the answer codes were narrow and did not provide enough scope to answer accurately. More detail on the Question 4 analysis can be found in Annex C.
Finding 5: Questions 5a and 5b (importance of various factors to being British) produced quite different responses from one another
The multiple-choice aspect of Question 5a paired with the more abstract nature of the question meant that response strategies varied across participants.
The importance scale at Question 5b allowed respondents to provide more nuanced, thoughtful, and in-depth answers.
If added to the main survey, Questions 5a and 5b will likely produce quite different data from one another. While Question 5b might give more depth and accurate data on people’s opinions, the caveat is that it takes longer to answer and is more burdensome for respondents.
Round 2:
Finding 1: Respondents answer – and prefer to answer – thinking about their personal experience or the experience of people in their close community regardless of how the question is worded
Two spontaneous questions were included in the cognitive testing, Question 20 (Please could you say a little about the reasons you think people might feel a sense of belonging to Britain.) and Question 30 (Please could you say a little about the reasons you think people might not feel a sense of belonging to Britain.). Probing at these questions, as well as the different follow-up questions, revealed that participants generally think about themselves and their personal experiences when answering all questions.
Finding 2: As in Round 1, reasons for feeling a sense of belonging to Britain (Question 52) and reasons for not feeling a sense of belonging to Britain (Question 53) were different
Reasons for feeling a sense of belonging focused on community, relationships and being in the country for generations.
Reasons for not feeling a sense of belonging were more varied, particularly between respondents who have lived in Britain their whole lives compared to respondents who have immigrated to Britain.
They focused on one’s values not being represented or addressed, experiences of racism or discrimination, dissatisfaction with the country compared to what it used to be like (such as lack of industry, low income compared to cost of living, difficulties getting appointments with the NHS), nationalism (such as protests and riots that have recently occurred), and general media coverage that has a negative impact.
Finding 3: Similar to Round 1, some respondents’ answers changed from “not at all strongly” (code 4) to “not very strongly” (code 3) when asked how strongly they felt they belonged to Britain
There may be an element of social desirability that leads to participants choosing a less polar response during the in-person interviews compared to what they may choose if taking the survey online.
Participants’ answers may generally be less consistent due to the nature of the subject, as it depends on how people feel in the moment.
2.4 Development of the paper questionnaire
After the web questionnaire was finalised, the process of updating the two paper questionnaires began. This involved identifying questions for removal (in line with the web questionnaire) and maximising the available space to include new questions.
This was an iterative process involving extensive checking by the research team at Verian to ensure the questions were suitable for the mode, and the routing instructions were clear. Verian used a question tracker to keep track of the paper questionnaire changes throughout the process. A breakdown of the process is as follows:
- Questions were reordered to follow the structure of the online questionnaire as closely as possible.
- Question removals were made to make space for new questions.
Both versions of the questionnaire were tested with members of the public via a round of usability testing.
2.5 Usability testing of the paper questionnaire
2.5.1 Overview
In the context of a self-completion survey, usability testing qualitatively examines how participants interact with the instrument. The interviews focused on layout, presentation, and navigation features, while also identifying any issues with completion. Although the emphasis was on usability, the sessions naturally incorporated elements of cognitive testing, exploring comprehension of question wording and interpretation of key terms.
The questionnaire was presented as an A4 booklet and Verian observed how participants approached and completed it. The interviews comprised two key parts:
- Part 1: Observation and think-aloud. Interviewers observed participants as they worked through the questionnaire, encouraging them to verbalise their thoughts (‘think aloud’) while completing each page.
- Part 2: Probing and retrospective feedback. Interviewers probed specific areas of interest after sections were completed. Once the questionnaire was finished, participants were asked follow-up questions about their experience and any issues not addressed during the task.
Verian’s research team prepared a guide to identify specific usability issues and areas of misunderstanding across both versions of the paper questionnaire (Community pathway and Engagement pathway questions). To replicate a realistic survey experience, respondents were given a letter alongside the paper questionnaire and invited to read it for context.
Usability testing was conducted with six participants on 15 and 16 September 2025. To allow full exploration of how participants approached the booklet, all interviews were conducted in person at Verian’s offices in London. Interviews were approximately an hour in length.
2.5.2 Participant profile
Participants were recruited by Acumen Fieldwork Ltd, a specialist qualitative research recruitment agency.
To ensure the testing reflected the experience of respondents likely to complete the survey on paper rather than online, Acumen specifically recruited individuals who were less technically confident and/or preferred paper-based surveys over digital formats. In addition, to ensure a reasonably broad range of respondents were recruited, Acumen were asked to recruit a mix of respondents based on gender, age, highest educational qualification and ethnicity.
Table 2.3: Profile of participants for usability testing
| Key factor/demographic | Quota | Achieved |
|---|---|---|
| Total | | 6 |
| Age | 16-34 years old | - |
| | 35-54 years old | 1 |
| | 55-64 years old | 3 |
| | 65+ | 2 |
| Gender | Male | 2 |
| | Female | 4 |
| Highest educational attainment | Did not complete GCSEs or equivalent | - |
| | GCSEs or equivalent | 2 |
| | AS/A-Levels or equivalent | - |
| | Vocational Qualification BTEC/GNVQ | 1 |
| | Undergraduate Degree | 2 |
| | Masters/PhD | 1 |
| Ethnicity | Ethnic minority | 2 |
| | Mixed ethnicity | - |
| | White British | 4 |
| Location | Greater London | 6 |
| Confidence using devices (such as smartphones, tablets, laptops, and computers) | Very confident | - |
| | Fairly confident | - |
| | Fairly unconfident | 6 |
| | Very unconfident | - |
| Preferred survey method | Paper | 5 |
| | Online | 1 |
2.5.3 Key findings from the usability testing
The paper questionnaires were generally well received. Key findings are summarised below, with further detail provided in Annex B.
Finding 1: Visuals on the front page were helpful and noticeable
Participants found the visuals and bold text on the front page useful for highlighting important information and improving understanding. Most participants read and followed the instructions provided.
Verian recommended keeping the front page as streamlined and concise as possible, emphasising important text in bold, adding an example image of the bar routing, and using the same front page for both the Community and Engagement pathway questionnaires.
Finding 2: Bar filters were the hardest routing/filtering method to follow
Of the three routing methods used (arrows, boxes, and bars), bars caused the most confusion. Participants noted inconsistency in bar instructions: sometimes they directed respondents where to go and sometimes they did not.
Verian recommended keeping the number of filters (or filtered questions) to a minimum, adding clear and complete instructions to all bar filters where some were felt to be missing, and, more generally, keeping routing instructions positioned towards the top of the page where possible.
Finding 3: Inconsistent response strategies
Respondents varied in how they approached similar filters throughout the questionnaire. Routing instructions (arrows, boxes, and bars) were followed correctly in some places but ignored or misinterpreted in others. Similarly, open text boxes were used inconsistently: sometimes respondents wrote an answer; other times they only crossed the box without writing.
Finding 4: Open text boxes within grids were often missed
Participants frequently overlooked the open text box for the “Other” option in grid-style questions. When probed, respondents said they either thought they had reached the end of the grid or did not realise the box was part of the grid.
Verian recommended updating the formatting for grid open text boxes to ensure they are obvious to users.
Finding 5: The Community pathway through the questionnaire felt long and demanding
Respondents reported noticeable fatigue during the Community pathway through the questionnaire and commented spontaneously on the number of questions. Most required the full hour to complete, and one participant was unable to finish, skipping questions to ensure all layout types were covered.
Verian recommended reducing text at specific questions where possible and prioritising question text over explanatory or instruction text. In addition, several questions were removed.
2.6 Usability testing for online surveys
Whilst no specific usability testing was carried out on the online survey script for the Community and Engagement Survey, findings from two rounds of usability testing previously conducted by Verian informed the questionnaire design. This work was carried out in September 2023 as part of a wider review and redesign of Verian’s online survey template involving both usability and accessibility testing.
For the usability testing, participants used a range of device types with varying screen sizes and were asked to complete a range of question types to assess the way the questions appeared on screen, how easy the layouts were to navigate, any sticking points in being able to provide answers, and anything else that participants might have had difficulty with or that delayed their progress. After analysis of the findings from the usability testing, updates were made to the template, and a list of recommendations was drawn up on which question types to use in different scenarios. Verian employed these recommendations for the Community and Engagement Survey pilot.
An accessibility audit of a selection of Forsta+ layouts was carried out by the Digital Accessibility Centre (DAC) user/technical team in April 2024. The layouts and features were assessed against the Web Content Accessibility Guidelines (WCAG) 2.2, concentrating on barriers to accessibility, giving examples of any assistive technology barriers encountered and solutions for how to resolve them. Following implementation of the recommendations made by the DAC, the updated questionnaire was re-tested in June 2025, gaining accreditation for compliance with WCAG 2.2 AA. The Community and Engagement Survey pilot online survey employed the accredited accessibility template.
2.7 Creating the finalised surveys (web and paper)
Following the analysis, review and implementation of changes from the cognitive and usability testing, the online version of the questionnaire was finalised and signed off by DCMS and MHCLG in August 2025, and the paper questionnaires were signed off by DCMS and MHCLG in October 2025.
3. Pilot Survey
3.1 Overview
The principal purpose of the pilot was to carry out a final ‘live’ check of the login procedures, the script routing, and the print and mailing logistics, including the management and monitoring systems. It was not used to test response levels because (i) there was insufficient time to run a full-length fieldwork trial (which would take about 8 weeks to complete) without compromising the quality of the development work, and (ii) granular legacy survey response data from 2024/25 was available to use for this purpose as a starting point.
Addresses were sampled and contacted using the same protocol as the main stage. However, due to time constraints:
- It was not possible to include paper questionnaires (which for the main stage are available on request and targeted in some first and second reminder mailings).
- Only one reminder letter was sent during the pilot.
3.2 Sample Design
For the pilot, Verian selected an implicitly stratified random sample of 1,000 addresses from a larger master sample previously drawn from the residential Postcode Address File (PAF) in England.
This master sample consisted of addresses originally sampled for the 2025 Participation Survey but not issued. Before selection, the master sample was coded with CACI household structure data[footnote 1] sorted by:
- ITL2 area
- IMD quintile group
- Presence of any resident aged 65+ (based on CACI coding)
- Local authority
- ONS super output area geography
This sorting (‘implicit stratification’) of the master sample limited the variance between potential samples with respect to geodemographic distribution. Although the master sample was not itself an equal probability sample from the PAF, that was taken into account when calculating Community and Engagement Survey pilot selection probabilities such that any drawn pilot sample would be an equal probability sample from the residential PAF.
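As a rough illustration of the mechanics, the sketch below shows an implicitly stratified (sorted, systematic) selection from an address frame. It is a simplified, hypothetical version: the field names are invented, and it assumes an equal-probability frame, so it omits the selection-probability adjustment described above.

```python
import random

# Hypothetical field names standing in for the sort keys listed above.
STRATA_KEYS = ["itl2", "imd_quintile", "any_resident_65_plus",
               "local_authority", "super_output_area"]

def implicitly_stratified_sample(frame, n, strata_keys=STRATA_KEYS):
    """Sort the frame by the stratifiers, then take a systematic sample
    with a random start, giving every address an equal selection probability."""
    ordered = sorted(frame, key=lambda addr: tuple(addr[k] for k in strata_keys))
    interval = len(ordered) / n             # sampling interval
    start = random.uniform(0, interval)     # random start in the first interval
    return [ordered[int(start + i * interval)] for i in range(n)]
```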
3.3 Methodology
The pilot survey used the following methodology:
- Selected households were sent an invitation letter in the post explaining how to access and complete the survey. All members of the household aged 16 or over were eligible to participate in the survey.
- The survey invitation letter contained the following:
  - The HM Government logo at the top
  - An explanation of the purpose of the DCMS Community and Engagement Survey
  - Who is eligible to participate and instructions on how to log into the survey
  - Information on the £10 voucher available on completion of the survey
  - Who is conducting the survey and how to get in touch with any questions or concerns
  - A second page of FAQs summarising further information and indicating where more detailed information can be accessed
- Due to time constraints on conducting the pilot, only one reminder letter was sent to non-responding households.
- Paper surveys were not included in the pilot survey, again due to time constraints.
- On completion of the survey, respondents were able to redeem their £10 voucher via a portal provided by Verian’s incentives partner Merit.
3.4 Fieldwork
Pilot survey invitation letters were dispatched on the 18th August 2025. Interviews were completed between the 21st August and 14th September, with the pilot survey closing on the 15th September 2025.
After the invitation, one reminder was sent to households where not all expected members aged 16 or over had completed the survey. The letter was tailored depending on whether any members of the household had completed it. Posting dates and quantities for these letters are shown in Table 3.1.
Table 3.1: Posting dates of survey invitation letters and reminders
| Mailing type | Date | Households sent to |
|---|---|---|
| Survey invitation | 18th August 2025 | 1,000 |
| Reminder | 1st September 2025 | 965 |
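A minimal sketch of the kind of reminder tailoring described above is shown below. The variant names and the use of an expected adult count (which would come from the CACI household coding) are illustrative assumptions, not the actual mailing rules.

```python
def reminder_variant(expected_adults: int, completed_adults: int):
    """Choose a reminder letter variant for a household (hypothetical logic)."""
    if completed_adults == 0:
        return "full_reminder"       # nobody in the household has responded yet
    if completed_adults < expected_adults:
        return "partial_reminder"    # thank respondents, prompt remaining adults
    return None                      # household complete: no reminder needed
```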
To test both pathways through the survey, respondents were equally likely to be allocated to the Community and the Engagement pathways (100 and 99 completes respectively) when entering the survey. In the main survey, the allocation is weighted to the Community pathway in line with the sample targets. A random number generator was embedded within the questionnaire script to allocate individual web respondents to the appropriate sub-module within each pathway.
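The sketch below illustrates this allocation step. The 50/50 pilot split is taken from the text above; the main-stage weighting and the sub-module lists are placeholders, as the actual allocation probabilities are not stated here.

```python
import random

PATHWAY_MODULES = {                 # placeholder sub-module labels
    "Community": ["C1", "C2", "C3"],
    "Engagement": ["E1", "E2"],
}

def allocate_respondent(pilot: bool = True):
    """Randomly allocate a web respondent to a pathway, then a sub-module."""
    # Pilot: equal split. Main stage: weighted towards the Community pathway
    # in line with sample targets (0.9 is illustrative only).
    p_community = 0.5 if pilot else 0.9
    pathway = "Community" if random.random() < p_community else "Engagement"
    return pathway, random.choice(PATHWAY_MODULES[pathway])
```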
3.5 Response rates
From the 1,000 households that were issued for the pilot survey, 232 respondents interacted with the survey (from 171 households), and 199 respondents completed the survey (from 139 households).
This constitutes a conversion rate of 0.199 (responses/sampled addresses), a household-level response rate of 15.1%, and an individual-level response rate of 11.4%[footnote 2].
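For transparency, the sketch below reproduces these calculations using the definitions in footnote 2 (an estimated 8% of deadwood addresses and an average of 1.89 adults per household).

```python
issued = 1_000          # sampled addresses
deadwood = 0.08         # estimated share of ineligible (deadwood) addresses
adults_per_hh = 1.89    # average number of adults per household
responses = 199         # completed individual surveys
responding_hh = 139     # households with at least one completed survey

conversion = responses / issued                                        # 0.199
household_rr = responding_hh / (issued * (1 - deadwood))               # 15.1%
individual_rr = responses / (issued * (1 - deadwood) * adults_per_hh)  # 11.4%
print(f"{conversion:.3f} | {household_rr:.1%} | {individual_rr:.1%}")
```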
Figure 3.1: Pilot survey completion by day across the fieldwork period.
3.6 Interview length
The average survey length was calculated using data from all respondents who completed the pilot, defined as those who completed all questions up to the end of the ‘wellbeing’ module. To improve accuracy and reduce the impact of outliers, we excluded very short interview lengths and capped the maximum timings at the 97th percentile for each module. This ensures that unusually short or excessively long durations, likely due to erroneous entries, do not distort the overall average.
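A compact sketch of this trimming rule is below, assuming per-module timings in minutes; the lower exclusion threshold is an assumption, as the report does not state the cut-off used for very short interviews.

```python
import numpy as np

def trimmed_mean(timings_minutes, min_minutes=1.0, cap_percentile=97):
    """Mean module timing after dropping implausibly short interviews and
    capping long ones at the 97th percentile, as described above."""
    t = np.asarray(timings_minutes, dtype=float)
    t = t[t >= min_minutes]                 # exclude very short timings
    cap = np.percentile(t, cap_percentile)  # cap at the 97th percentile
    return float(np.clip(t, None, cap).mean())
```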
The average survey length for the two main routes through the questionnaire is shown below.
Community pathway
Mean = 42 minutes
Median = 39 minutes
Engagement pathway
Mean = 29 minutes
Median = 26 minutes
As the Community pathway exceeded 30 minutes, our key recommendation was to reduce the length of this pathway to minimise the risk of high dropout rates or of respondent fatigue leading to poor quality data at the end of the interview.
3.7 Device type
The pilot was predominantly completed on smartphones, with nearly two-thirds of respondents using touch devices. The remaining third completed the survey on a computer.
Table 3.2: Breakdown of device type by respondent
| Device type | Count | Percentage |
|---|---|---|
| PC/Laptop/Netbook | 81 | 34.9% |
| Tablet/Smartphone | 151 | 65.1% |
3.8 Data Quality Assessment
We also explored key data quality indicators from the pilot, including item non-response, questionnaire length and drop-out rates. In addition, we analysed feedback provided through the open-ended text box at the end of the survey, where respondents were invited to share comments and suggestions.
3.8.1 Item non-response
Respondents are not required to answer every question in the survey and may select “Prefer not to say” or “Don’t know” where these options are provided. This type of data is referred to as item non-response. Generally, a lower rate of item non-response indicates higher data quality. However, it’s important to note that for certain questions, “Don’t know” can be a valid and meaningful response.
To assess item non-response, we reviewed all survey questions that included a “Prefer not to say” option and a “Don’t know” option to explore question-specific item non-response. We found that certain questions were more likely to elicit a “Don’t know” response, which suggests that these questions may have been problematic for respondents to answer. Higher levels of item non-response to specific questions tend to be driven by the following:
- Cognitive complexity: the question may be difficult to understand or interpret.
- Hypothetical framing: the question asks respondents to consider a scenario they haven’t previously thought about or requires additional context to answer meaningfully.
- Technical content: the question requires specific knowledge or information that respondents may not have readily available.
- Extended recall period: the question refers to an activity that may have occurred a long time ago and may not be salient or memorable to the respondent.
For many survey questions, “Don’t know” is a reasonable, meaningful answer and is useful to gauge, but in other cases it can indicate a poorly constructed question that respondents struggle to understand. We highlighted questions with a relatively high proportion (>=7%) of “Prefer not to say” and “Don’t know” responses below.
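A minimal sketch of how such a screen might be run is shown below, assuming a response-level table with one column per question; the column values and codes are hypothetical.

```python
import pandas as pd

NONRESPONSE_CODES = {"Don't know", "Prefer not to say"}

def flag_item_nonresponse(df: pd.DataFrame, min_base=20, threshold=0.07):
    """List questions whose item non-response rate is at least 7%, excluding
    questions with a base below 20 (NaN = respondent not routed to the item)."""
    rows = []
    for question in df.columns:
        answered = df[question].dropna()    # only respondents asked the item
        base = len(answered)
        if base < min_base:
            continue
        rate = answered.isin(NONRESPONSE_CODES).mean()
        if rate >= threshold:
            rows.append({"variable": question, "%": round(100 * rate, 1),
                         "base": base})
    return (pd.DataFrame(rows, columns=["variable", "%", "base"])
            .sort_values("%", ascending=False))
```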
Several of the questions listed below (Table 3.3) share characteristics that may present challenges for respondents. These include:
- Requests for detailed or numerical data that may be cognitively demanding to estimate (e.g., number of hours spent on an activity).
- Frequency-based questions that can be difficult for respondents to answer accurately, especially when their participation hasn’t been consistent over the past 12 months.
- Questions requiring knowledge of the local area, which some respondents may not possess (e.g., availability of certain facilities).
- Requests for personal information.
Since most of these are established survey questions with an existing time series, we are not proposing major changes. This exercise is primarily to flag that we can expect relatively high levels of item non-response for certain questions.
Table 3.3: Levels of item non-response by variable
| Variable description | % | Base[footnote 3] | Response option |
|---|---|---|---|
| Age | 6.6 | 228 | Prefer not to say |
| Age 2: Banded age | 26.7 | 15 | Prefer not to say |
| Cohab - whether cohabiting | 7.4 | 68 | Prefer not to say |
| Sfaith - What proportion of your friends are of the same religious group as you? | 20.5 | 112 | Don’t know |
| Seduc - What proportion of your friends have a similar level of education to you? | 11.6 | 112 | Don’t know |
| RelMix - In the last 12 months, have you mixed socially with people from different religious groups in any of the following places? By ‘mixed socially’, we mean interacting with someone more than just to say hello. | 21.0 | 38 | Don’t know |
| Ethmix - In the last 12 months, have you mixed socially with people from different ethnic groups in any of the following places? | 15.8 | 38 | Don’t know |
| Strust - Thinking about the people who live in this neighbourhood, to what extent do you believe they can be trusted? | 12.3 | 114 | Don’t know |
| Asset2 - For each of the following, please indicate whether there is at least one within a 15-20 minute walk from your home, further away but still in your local area, or there is not one in your local area at all. Community centre or hall. | 9.7 | 113 | Don’t know |
| Givamt - Approximately how much have you given to charity in the last 4 weeks? | 25.0 | 24 | Don’t know and prefer not to say |
| Causln - type of donation (local or international) | 8.0 | 24 | Don’t know |
| Arts - Are there opportunities to take part in arts and cultural activities, groups and events in your local area? | 31.7 | 104 | Don’t know |
| Artfreq - Over the last 12 months, how often have you taken part in arts and cultural activities, groups and events in your local area? | 8.2 | 61 | Don’t know |
| Heritage - Are there opportunities to visit heritage sites and places of historic interest in your local area? | 24.0 | 104 | Don’t know |
| Sportstp - Are there opportunities to take part in sport teams, clubs or classes or exercise at sports facilities in your local area? | 23.1 | 104 | Don’t know |
| Museum - Are there opportunities to visit museums or galleries in your local area? | 21.1 | 104 | Don’t know |
| Sportstpfreq - How often do you take part in sport teams, clubs or classes or exercise at sports facilities in your local area? | 10.7 | 75 | Don’t know |
| Sportswatch - Are there opportunities to watch people participate in sports activities or events in your local area? | 24.0 | 104 | Don’t know |
| sportswatchfreq - How often do you watch people participate in sports activities or events in your local area? | 9.0 | 67 | Don’t know |
| Assets2_child - Which of these do you have in your local area? Youth centre or club / Healthcare service for children such as vaccination centres, child and adolescent mental health services. | 24.0/16.0 | 25 | Don’t know |
| lifeqchild_1 - Thinking about your local area, how much do you agree or disagree with the following statements? A. My local area is a good place to bring up children B. There are activities for children in my local area C. Children that live in my local area have a good quality of life D. Children that live in my local area have opportunities available to them E. It would be better to bring up children in a different area G. It is easy for children to get involved in crime or join gangs in my local area | Ranging from 9.7 for the first statement to 20.4 for the final statement | 103 | Don’t know |
| Empreppull - To what extent would you agree or disagree that people in your neighbourhood would pull together in an emergency or disaster? | 16.5 | 103 | Don’t know |
| carts2a_4 - How often in the last 12 months have you: Done painting, drawing, printmaking, calligraphy, colouring. | 6.9 | 29 | Don’t know |
| CARTS3 - How often in the last 12 months have you: Listened to downloaded music. | 7.9 | 38 | Don’t know |
| SSFEEHER - Thinking about the last time you visited a heritage site or place of historic interest in England, did you pay an entrance fee? | 7.5 | 40 | Don’t know |
| Voldon - Still thinking about the last time you visited a heritage site or place of historic interest in England, did you give a voluntary donation? | 7.5 | 40 | Don’t know |
| CMAJE12FUTNEW - Which of the following events would you be interested in participating in? | 10.0 | 40 | Don’t know |
| Cultsatis - To what extent do you agree or disagree with the following statement: I am satisfied with the quality of cultural activities near to where I live. | 9.8 | 51 | Don’t know |
| SSTEAMFUT - Still thinking about the subjects below (Science, Technology, Engineering, Arts and Maths). In the next 12 months, do you plan to take part in any activity or event connected with any of these subjects or sectors in your free time, outside your job or structured academic activities? | 22.0 | 50 | Don’t know |
3.8.2 Dropout rates
For the Engagement pathway through the questionnaire, 88% of respondents completed the survey up to and including the loneliness question, chosen as the cut-off point for a survey response to qualify as a ‘usable partial’ interview. The remaining 12% dropped out before reaching this question.
Dropout rates were higher than expected (compared with 6% in early 2024/25). This may have been due to a range of factors, but the main stage of fieldwork will be closely monitored to see if this trend continues, with further investigation to be undertaken if rates remain high.
For the Community pathway, 83% of respondents completed the survey up to and including the anti-social behaviour question, which is the usable partial cut-off point; the remaining 17% dropped out.
Typically, similar surveys have a drop-out rate of 6-12%, so 17% is higher than expected and higher than the rate seen for the Engagement pathway through the questionnaire.
Again, the reasons behind the higher-than-expected dropout rates are not clear. However, the difference between the Engagement and Community pathways is likely driven by the longer interview length associated with the Community pathway. There is no consistent pattern in terms of the specific question at which respondents exit the survey.
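A small sketch of the usable-partial calculation, assuming each respondent’s record lists the question IDs they answered (the identifiers here are hypothetical):

```python
def dropout_rate(answered_by_respondent, cutoff_question_id):
    """Share of respondents who exited before reaching the usable-partial
    cut-off question (e.g. the loneliness item on the Engagement pathway)."""
    reached = sum(cutoff_question_id in answered
                  for answered in answered_by_respondent)
    return 1 - reached / len(answered_by_respondent)

# Example: 12% dropout if 88 of 100 respondents reached the cut-off item.
records = [{"Q_LONELY"}] * 88 + [set()] * 12
print(f"{dropout_rate(records, 'Q_LONELY'):.0%}")   # -> 12%
```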
3.8.3 Participant feedback
Feedback collected from participants of the pilot survey
At the end of the pilot respondents were given the opportunity to provide feedback on the survey. However, only 14 respondents chose to provide feedback, and 6 of these mentioned that the interview was too long. The key themes raised were as follows:
1. Survey Length and Completion Time
Six responses specifically mentioned that the survey was too long, with some describing it as tedious or disengaging toward the end.
Action: Reduce the length of the Community pathway through the questionnaire. In future it is also worth considering whether the length could be reduced further, given potential shifts in people’s engagement online.
2. Question Clarity and Relevance
Two comments highlighted confusion or lack of relevance in questions, particularly for those not in work. This seemed to relate to LIFESAT, option B.
Action: Option B in the question below was updated so that it is only asked of those who are in work.
Overall, how satisfied or dissatisfied are you with…
A. your income?
B. your job?
C. your health?
3. Emotional Sensitivity
One response flagged that some questions were triggering. The open-ended question appears before the closing screen, which includes information on further support.
Action: Review the necessity of including sensitive questions and remove any that are not essential. In addition, at the end of the survey we also provide a link to relevant support organisations to assist respondents who may need help. This is also available on the survey website.
Feedback collected from participants of the main stage of quarter 1 fieldwork
Participants in the main stage of quarter 1 fieldwork who were recruited to take part in stage 2 of the cognitive testing (section 2.3) were asked, time permitting, to provide some feedback on their experience of taking the survey. This was in the form of anything positive or negative that stood out to them, and their thoughts on the survey materials. The key points from this feedback were as follows:
- Overall experience and impressions of the survey were positive. However, whilst this wasn’t necessarily a barrier to them participating, some pointed out that it was a long survey to complete.
- There was some recall of the ‘On His Majesty’s Service’ line printed on the survey envelope. Although this could initially be alarming, it did provide the necessary intrigue for people to open the envelope to see what was inside.
- The letter contained a lot of information on the front page.
- The Government logo on the letterhead added a layer of legitimacy and validity to the survey and the request to participate. Some felt it was a privilege to have been selected (even if the selection was random).
- The QR code to access the survey website login page appealed to some people, but others preferred to type in the full web address to further check the legitimacy.
- The ‘How to take part’ section of the letter was useful for most people. There was a comment that a single-adult household might find it confusing to receive multiple login details.
- The incentive value of £10 felt about right to people. Taking part was seen as important, and was more about the social value of the survey and what impact it might have on the local area. This view might not be reflective of those who chose not to take part.
- Not much attention was paid to the ‘Further information’ on the reverse side of the letter. One participant had not even noticed it. If it had been spotted, people tended to scan it rather than read it fully.
- Hosting the website on a dcms.gov.uk domain added a more formal or official slant to the survey and helped to confirm legitimacy.
- Whilst the Community Life Survey website was more attractive due to its ‘town’ graphic, the Community and Engagement Survey website gave the impression that responses to the survey would be taken more seriously.
- Aside from the front page and clicking the green login button, participants paid little or no attention to anything else on the survey website.
- No issues were uncovered with the process of logging into the online survey.
1. CACI is a data supplier used to add the expected number of resident adults in each ten-year age band to the master sample. ↩
2. Individual response rates are calculated as the number of individual responses divided by the number of issued addresses, adjusted for an estimated 8% of deadwood addresses and multiplied by the average number of adults per household (1.89). Household response rates are calculated as the number of households from which at least one response is received, divided by the number of issued addresses adjusted for an estimated 8% of deadwood addresses. ↩
3. Please note that the data includes partial cases, and in some instances, filtering has resulted in relatively small base sizes. We have excluded any items with a base size below 20, except for age band, which is included for reference. ↩