Official Statistics

Community Life Survey 2024/25: Technical report

Updated 10 December 2025

Applies to England

November 2025

© Verian Group 2025

1. Introduction

1.1 Background to the survey

The Community Life Survey has been conducted by Verian on behalf of the Department for Culture, Media and Sport (DCMS) since 2012. The Community Life Survey is a nationally representative annual survey of adults (16+) in England that aims to track the latest trends and developments across areas that are key to encouraging social action and empowering communities. More information, historical data and reports can be found on gov.uk [footnote 1].

Verian was commissioned to deliver the Community Life Survey for 2023/24 and 2024/25. These survey years have been commissioned by DCMS in partnership with the Ministry of Housing, Communities and Local Government (MHCLG), and employ an annual boosted sample approach (n=175,000) to enable production of reliable statistics at lower-tier local authority level and expand the questionnaire to incorporate new themes (e.g. perceptions of community in respondents’ local area). This aims to inform cross-government work on these issues, including MHCLG’s evaluation of the UK Shared Prosperity Fund. More information and supporting documents on this can be found on gov.uk [footnote 2], with findings to be published in due course.

The scope of the survey was to deliver a nationally representative sample of adults (aged 16 years and over) in England. The data collection model for the Community Life Survey is based on Address-Based Online Surveying (ABOS), a type of ‘push-to-web’ survey method. Respondents take part either online or by completing a paper questionnaire. 

This technical report covers the 2024/25 Community Life Survey fieldwork, which ran from 4th October 2024 to 7th April 2025. The annual technical report for 2023/24, as well as the annual report and reference tables, have also been published on gov.uk [footnote 3].

1.2 Note on survey timings and seasonality effect

As with 2023/24, the sample in 2024/25 consisted of approximately 175,000 interviews across two quarters of fieldwork (October-December 2024 and January-April 2025). Condensing the survey into two quarters was necessary due to the overarching needs of the Community Life Survey and connected research. There is precedent for this approach, with previous waves of both the Community Life Survey and the Participation Survey (DCMS’s two social surveys) run across fewer than four quarters. In addition, questions in these surveys ask the respondent for their recall over “the last 12 months”, which helps to mitigate any seasonality effect.

During the 2023/24 survey year, Verian ran an analysis of the effect of conducting the Community Life Survey across fewer than four quarters. The analysis compared the results of 22 key variables in the survey across the pre-pandemic time periods (2013-2020), specifically focusing on the differences between the results from surveys in Spring/Summer and those in Autumn/Winter. Please see section 1.2 of the 2023/24 technical report for a full summary of this analysis [footnote 4].

1.3 Survey objectives

The Community Life Survey provides Official Statistics on issues that are key to encouraging social action and empowering communities, including volunteering, charitable giving, community engagement and loneliness. The key objectives of the survey are to:

  • Provide robust, nationally representative data on behaviours and attitudes within communities that can be used to inform and direct policy and research in these areas.

  • Provide a key evidence source for policy makers in government, public bodies, voluntary, community and social enterprise (VCSE) sector organisations and other external stakeholders.

1.4 Survey design

The basic ABOS design used in previous years of the survey was unchanged for 2024/25: a stratified random sample of addresses was drawn from the Royal Mail’s postcode address file (PAF) and an invitation letter was sent to each one, containing username(s) and password(s) plus the URL and QR code of the survey website. Sampled individuals were able to log on using this information and complete the survey as they might any other web survey. Once the questionnaire was complete, the specific username and password could not be used again, ensuring data confidentiality from others with access to this information. 

The survey design included an alternative mode of completion in the form of a paper questionnaire. The invitation letter offered this mode on request, and up to two copies were included in the first or second reminder letter for a proportion of the sampled addresses. More details on this can be found in the Contact Procedures section.

Paper questionnaires ensure coverage of the offline population and are especially effective with sub-populations that respond to online surveys at lower-than-average levels. However, there are limitations in the use of paper questionnaires: 

  • The physical space available on paper for questions

  • The level of complexity that can be used in routing from one question or section to another

  • The length of time a person is willing to spend completing a paper questionnaire

  • The cost of administering a paper questionnaire compared to an online one 

  • The difficulty of incorporating a modular system of questions within a paper questionnaire design

For these reasons, the Community Life Survey used paper questionnaires in a limited and targeted way, to optimise rather than maximise response. More details on the differences between the online and paper questionnaires can be found in the Questionnaire section below.

2. Sampling 

2.1 Sample design: addresses

The address sample design was intrinsically linked to the data collection design (see ‘Details of the data collection model’ below) and was designed to yield a respondent sample that was as representative as possible of the adult population within each of the 296 lower tier or unitary local authorities in England.

The design sought a minimum two-quarter respondent sample size of 500 within each local authority (excepting the Isles of Scilly where the target was 250) and 2,710 within each ITL2 region [footnote 5]. The actual targets varied between local authorities (from 500 to 2,695) and between ITL2 regions (from 2,710 to 12,160). This variation maximised the statistical efficiency of the national sample while also accommodating the local and regional sample size requirements. Although there were no specific targets per fieldwork quarter, the sample selection process was designed to ensure that the respondent sample size per local authority and ITL2 region was approximately the same per quarter.

As a first step, a stratified master sample of just over 878,000 addresses in England was drawn from the PAF ‘small user’ subframe. Before sampling, the PAF was disproportionately stratified by local authority (296 strata) and, within local authority, the PAF was sorted by (i) neighbourhood deprivation level (5 groups), (ii) super output area, and finally (iii) by postcode. This ensured that the master sample of addresses was geodemographically representative within each local authority.
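
The representativeness comes from drawing at a fixed interval through the sorted frame. Below is a minimal Python sketch of this implicit stratification; the field names and per-authority sample size are illustrative, not Verian’s production code:

```python
import random

def systematic_sample(sorted_frame, n):
    """Draw an equal-interval ('systematic') random sample of size n.

    Because the frame is pre-sorted by deprivation group, super output
    area and postcode within the local authority, the draw is implicitly
    stratified by those characteristics: every stretch of the sort order
    is represented in proportion to its size.
    """
    interval = len(sorted_frame) / n
    start = random.uniform(0, interval)
    return [sorted_frame[int(start + i * interval)] for i in range(n)]

# Illustrative use for one local authority stratum:
# rows = sorted(paf_rows, key=lambda r: (r.deprivation_group, r.soa, r.postcode))
# master_sample = systematic_sample(rows, n=3000)
```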

This master sample of addresses was then augmented by data supplier CACI. For each address in the master sample, CACI added the expected number of resident adults in each ten-year age band. Although this auxiliary data will have been imperfect, Verian’s investigations have shown that it is highly effective at identifying households that contain people aged 65 or older. Once this data was attached, the master sample was additionally coded with expected household age structure based on the CACI data: (i) all aged under 65; (ii) at least one aged 65 or older.

The second stage sampling probability of each address in the master sample was determined by the expected number of completed questionnaires from that address [footnote 6] given the selected data collection design: where this was lower than the average, the sampling probability was higher than the average, and vice versa. By doing this, Verian compensated for any (expected) variation in response rate that could not be fully ‘designed out’, given the constraints of budget and timescale. The underlying response assumptions were derived from empirical evidence obtained from the 2023-24 Community Life Survey.
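
In effect, each address’s selection probability was scaled in inverse proportion to its modelled yield. A sketch of that scaling, assuming a modelled expected yield per address (the report does not publish the exact formula):

```python
def second_stage_probabilities(expected_yields, target_n):
    """Selection probabilities inversely proportional to each address's
    expected number of completed questionnaires, scaled so the
    probabilities sum to the target issued-sample size (capped at 1)."""
    inverse = [1.0 / y for y in expected_yields]
    scale = target_n / sum(inverse)
    return [min(1.0, scale * v) for v in inverse]

# e.g. an address expected to yield half as many completes as another
# receives twice the selection probability.
```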

Verian drew a stratified second stage random sample of 653,915 addresses from the master sample of 878,119 and then systematically subdivided this second stage sample into twenty-seven equal-size/equal-profile ‘replicates’, each comprising a little over 24,200 addresses. This process of sample subdivision into replicates was intended to help manage fieldwork. The expectation was that only the first eighteen replicates would be issued (that is, approximately 508,601 addresses), with the last nine replicates kept back in reserve. In each quarter, the plan was to stagger the sample issue by issuing three replicates on each of three occasions (so, nine per quarter, eighteen across both quarters).

Verian’s plan was to review fieldwork outcomes at a local authority level (i) before the third issue in quarter 1, (ii) before the first issue in quarter 2, and finally (iii) before the third issue in quarter 2. At each review point, Verian intended to recalculate the number of replicates to issue per local authority per subsequent issue. This review process was designed to adjust the sample issue to account for the latest response rates and cumulative sample sizes per local authority, and to thereby maximise the probability of achieving each of the local authority targets.

The number of replicates issued per local authority varied from 14 to 27, averaging 21 compared with the planned 18. An additional set of 2,521 addresses was added to the third issue of quarter 2. These addresses were drawn from the pool of master sample addresses remaining after the initial sample of 653,915 had been drawn (essentially a reserve sample). These extra addresses were drawn in twelve local authorities where response rates had been particularly low against expectation. In total, 229,252 addresses were issued for quarter 1, and 285,505 for quarter 2.

2.2 Sample design: individuals within sampled addresses

All resident adults aged 16 or over were invited to complete the survey. In this way, the Community Life Survey avoided the complexity and risk of selection error associated with remote random sampling within households. 

However, for practical reasons, the number of logins provided in the invitation letter was limited. The number of logins varied between two and four, with this total adjusted in reminder letters to reflect household data provided by prior respondent(s). Addresses that CACI data predicted contained only one adult were allocated two logins; addresses predicted to contain two adults were allocated three logins; and all other addresses were allocated four logins. The mean number of logins per address was 2.8. Paper questionnaires were available to those who were offline, not confident online, or unwilling to complete the survey online.
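
A minimal sketch of this allocation rule; the input is the CACI-predicted number of resident adults at the address:

```python
def logins_for_address(predicted_adults):
    """Web logins printed in the invitation letter, based on the
    CACI-predicted number of resident adults at the address."""
    if predicted_adults <= 1:
        return 2   # predicted single-adult households
    if predicted_adults == 2:
        return 3   # predicted two-adult households
    return 4       # all other households
```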

2.3 Details of the data collection model

Four different data collection designs were used for the Community Life Survey in 2024-25. Each had a code reference that showed the number of mailings and type of each mailing: push-to-web (W) or mailing with paper questionnaires included (P). For example, ‘WWP’ meant two push-to-web mailings and a third mailing with up to two paper questionnaires included alongside the web survey login information. In general, there was a two-week gap between mailings. The allocations across the two quarters can be found in Figure 2.1.

Figure 2.1: Allocation of data collection designs across the Community Life Survey 2024/25

| Data collection design | Q1 allocation (%) | Q1 allocation (n=) | Q2 allocation (%) | Q2 allocation (n=) | Total allocation (%) | Total allocation (n=) |
|---|---|---|---|---|---|---|
| Total | 100% | 229,252 | 100% | 285,505 | 100% | 514,757 |
| WW | 35.2% | 80,722 | 34.1% | 97,306 | 34.6% | 178,028 |
| WP | 14.3% | 32,819 | 14.0% | 40,052 | 14.2% | 72,871 |
| WWW | 47.1% | 107,887 | 48.3% | 137,886 | 47.7% | 245,773 |
| WWP | 3.4% | 7,824 | 3.6% | 10,261 | 3.5% | 18,085 |

Only addresses coded by CACI as containing somebody aged 65 or over could be allocated to the WP or WWP designs (i.e., receive a mailing with paper questionnaires included). Each of these ‘older household’ addresses had a 75% probability of being allocated to one of these designs. This targeted approach was based on historical data Verian has collected through other studies, which suggests that provision of paper questionnaires to all addresses simply displaces online responses in some strata. 

Otherwise, addresses were allocated to whichever data collection design would be expected to yield a mean of at least 0.35 completed questionnaires. If the expected yield under a two-mailing design was under 0.35, the address was allocated to a three-mailing design instead. The regression model used to make this choice incorporated response rates from earlier waves of the Community Life Survey, as well as various measures from census small area data, CACI data, and the index of multiple deprivation.
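
Taken together with the ‘older household’ rule above, the allocation logic might be sketched as follows. The 0.35 threshold and the 75% probability are from the text; using the same threshold to choose between the WP and WWP paper designs is an assumption:

```python
import random

def allocate_design(expected_yield, caci_65_plus):
    """Assign a data collection design code: W = push-to-web mailing,
    P = mailing with up to two paper questionnaires included."""
    if caci_65_plus and random.random() < 0.75:
        # 'Older household' addresses: paper included in a reminder.
        return "WP" if expected_yield >= 0.35 else "WWP"
    # All other addresses: add a third mailing if the modelled yield of
    # a two-mailing design falls below 0.35 completes per address.
    return "WW" if expected_yield >= 0.35 else "WWW"
```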

3. Questionnaire

3.1 Questionnaire development

The online questionnaire was designed to take an average of 30 minutes to complete. A modular design was used with approximately three-quarters of the questionnaire made up of a core set of questions asked of the full sample, and the remaining questions split into three separate modules randomly allocated to a subset of the sample. Applying a modular design to the survey created space for the expanded questionnaire.

The postal version of the questionnaire was designed, as far as possible, to be equivalent to the online version. However, there were some limitations to this, namely:

  • Space – the number of questions that could reasonably fit into a paper version of the questionnaire within printing limits

  • Time – the need to avoid overburdening respondents and to keep completion time short enough to encourage response

  • Budget constraints – it was not possible to produce multiple versions of the paper questionnaire, so the modular content was removed

  • Complexity – the survey already contained variants in design related to contact methods and volumes, online and paper versions, and question modules (online only); introducing different versions of the paper questionnaire was felt to be too challenging

The paper questionnaire included all questions published as part of the annual statistical release, questions that were of critical importance to policy clients, and demographic questions used in analysis or weighting. The question wording used in both the online and paper versions was as closely matched as possible, and in total the paper questionnaire covered around 50% of the material in the online questionnaire. 

Copies of the online and paper questionnaires can be found in Appendices A and B respectively.

3.2 Questionnaire changes for 2024/25

Major changes were made to the questionnaire prior to the launch of the 2023/24 Community Life Survey. Full details can be found in Appendix B of the 2024/25 quarter 1 technical report [footnote 7].

For the start of the 2024/25 Community Life Survey, changes were kept to a minimum, focusing on addressing issues identified from the 2023/24 survey and inserting a new question measuring engagement with local news.

A summary of the changes can be found in Figure 3.1 below.

Figure 3.1: Summary of amendments made to the questionnaire ahead of 2024/25 Community Life Survey fieldwork

| Online questionnaire variable name | Paper questionnaire variable name (if applicable) | Brief variable description | Amendment |
|---|---|---|---|
| NUMADULTS | Q1 | Number of adults in household | Wording |
| NAMADULT | N/A | Information on household makeup | Deleted |
| NCHIL2 | N/A | Number of children under 16 in household who are the respondent’s | Wording |
| FrndRel2 | Part of Q9 | Frequency of social interaction by phone, internet etc. | Wording |
| P_Oth | N/A | Involvement in groups, clubs & organisations in last 12 months - other | Wording correction |
| LocalNews | Q56b | Frequency of engagement with local news | New question |
| ConfLS | Q88 | Confidence re. life skills | Online questionnaire: renamed from Q1a to ConfLS |
| QFraud/QfraudName | Q100 | Acknowledgement of honest answers | Changed to collect a cross-box rather than respondent’s name |

No changes were made to the questionnaire between the quarters of fieldwork in the 2024/25 Community Life Survey.

4. Fieldwork

4.1. Quarterly fieldwork dates

Fieldwork for the Community Life Survey 2024/25 was conducted over two quarters: the first in the final three months of 2024 and the second from January to early April 2025. Full details of the dates per batch can be found in Figure 4.1 below.

Figure 4.1: Fieldwork start and close dates for quarters 1 and 2 of the Community Life Survey 2024/25

|  | Start date | Close date (online survey) | Close date (paper survey) |
|---|---|---|---|
| Quarter 1 |  |  |  |
| Batch 1 | 4th October 2024 | 6th January 2025 | 13th January 2025 |
| Batch 2 | 24th October 2024 | 6th January 2025 | 13th January 2025 |
| Batch 3 | 6th November 2024 | 6th January 2025 | 13th January 2025 |
| Quarter 2 |  |  |  |
| Batch 1 | 7th January 2025 | 2nd April 2025 | 7th April 2025 |
| Batch 2 | 29th January 2025 | 2nd April 2025 | 7th April 2025 |
| Batch 3 | 12th February 2025 | 2nd April 2025 | 7th April 2025 |

4.2. Contact procedures

All selected addresses were sent an initial invitation letter containing the following information: 

  • A brief description of the survey

  • The URL of the survey website (used to access the online script)

  • A QR code that could be scanned to access the online survey

  • Login details for the required number of household members

  • An explanation that participants would receive a £10 shopping voucher for completing the survey

  • Information about how to contact Verian in case of any queries or to request a paper questionnaire

  • Frequently asked questions, and responses to these (including how to access the survey privacy notice)      

All partially or non-responding households were sent one reminder letter at the end of the second week of fieldwork. A further targeted second reminder letter was sent to households for which, based on Verian’s ABOS field data from previous studies, this was deemed likely to have the most significant impact (mainly deprived areas and addresses with a younger household structure). Figure 4.2 shows the anticipated drop dates of the invite and reminder letters that were sent out in 2024/25.

Figure 4.2: Anticipated drop dates of invite and reminder mailings for the Community Life Survey 2024/25

|  | Invite letter | Reminder 1 letters | Reminder 2 letters |
|---|---|---|---|
| Quarter 1 |  |  |  |
| Batch 1 | 4th October 2024 | 17th October 2024 | 31st October 2024 |
| Batch 2 | 24th October 2024 | 7th November 2024 | 22nd November 2024 |
| Batch 3 | 6th November 2024 | 21st November 2024 | 4th December 2024 |
| Quarter 2 |  |  |  |
| Batch 1 | 9th January 2025 | 23rd January 2025 | 6th February 2025 |
| Batch 2 | 31st January 2025 | 13th February 2025 | 28th February 2025 |
| Batch 3 | 14th February 2025 | 27th February 2025 | 13th March 2025 |

The information contained in the reminder letters was similar to the invitation letters, with slightly modified messaging to reflect each reminder stage. Copies of the quarter 2 invite and reminder letters can be found in Appendix C.

In total, just under 1.25 million mailings were sent out for the Community Life Survey 2024/25. This covered invites, reminders, and ad hoc requests for paper questionnaires. The breakdown of these mailings can be found in Figure 4.3.

Figure 4.3: Volumes of invite, reminder, and ad hoc requests for paper questionnaire mailings despatched to sampled households for the Community Life Survey 2024/25

|  | Quarter 1 | Quarter 2 | Total |
|---|---|---|---|
| Invite letter | 229,252 | 285,505 | 514,757 |
| Reminder 1 letter - web only (one-reminder strategy) | 76,253 | 91,823 | 168,076 |
| Reminder 1 letter - one paper survey included (one-reminder strategy) | 1,206 | 1,497 | 2,703 |
| Reminder 1 letter - two paper surveys included (one-reminder strategy) | 29,756 | 36,385 | 66,141 |
| Reminder 1 letter - web only (two-reminder strategy) | 110,451 | 141,919 | 252,370 |
| Reminder 2 letter - web only | 97,932 | 124,821 | 222,753 |
| Reminder 2 letter - one paper survey included | 263 | 389 | 652 |
| Reminder 2 letter - two paper surveys included | 6,711 | 8,797 | 15,508 |
| Ad hoc paper survey mailing - one paper survey | 1,634 | 1,631 | 3,265 |
| Ad hoc paper survey mailing - two paper surveys | 344 | 355 | 699 |
| Ad hoc paper survey mailing - four paper surveys | 31 | 41 | 72 |
| Ad hoc paper survey mailing - one large-print paper survey | 5 | 1 | 6 |
| Ad hoc paper survey mailing - two large-print paper surveys | 1 | 0 | 1 |
| Ad hoc paper survey mailing - three large-print paper surveys | 2 | 0 | 2 |

4.3. Accessibility

The survey website was designed to be accessible to as many people as possible, taking into account best practice web accessibility guidance. The survey interface has been audited by the Bureau of Internet Accessibility.

Accessibility features allowed respondents to:

  • change contrast levels, colours and fonts in some browsers

  • zoom up to 300% with text staying visible on screen

  • navigate most of the survey using just a keyboard

  • listen to most of the survey using a screen reader

For respondents who proactively requested a paper survey - or received one as part of the contact method - there was the option of a large-print version to assist those with impaired vision. In total across the two quarters of fieldwork, 14 large-print questionnaires were dispatched to the nine households that requested them.

4.4. Confidentiality

Each of the letters assured the respondent of confidentiality, by answering the question “Is this survey confidential?” with the following:

“Yes. The information that is collected will only be used for research and statistical purposes. Your contact details are kept separate from your answers and will not be passed on to any other organisation outside of Verian or supplier organisations who assist in running the survey. 

Data from the survey will be shared with DCMS and MHCLG, and their appointed research contractors, for the purpose of producing and publishing statistics, and policy evaluation. The data shared with these parties won’t contain your name or contact details, and no individual or household will be identifiable from the results unless we have asked, and you have consented, for an alternative arrangement. Your answers will be combined with others participating in the survey. You will not receive any ‘junk mail’ after taking part. For more information about how we keep your data safe, you can visit www.commlife.co.uk.”

4.5. Fieldwork performance

This section outlines the fieldwork figures and response rates achieved on the 2024/25 survey. Figure 4.4 shows the number of completed surveys by mode and quarter prior to data quality checks being carried out.

Figure 4.4: Total completed online and paper surveys prior to data quality checks - Community Life Survey 2024/25

|  | Quarter 1 | Quarter 2 | Total |
|---|---|---|---|
| Online surveys | 73,176 | 90,901 | 164,077 |
| Paper surveys | 9,303 | 12,056 | 21,359 |
| Total surveys | 82,479 | 102,957 | 185,436 |

Following data quality checks (see section 5.3 for details), 7,687 surveys were removed, leaving 177,749 respondents in the final dataset: 156,679 online surveys and 21,070 paper surveys. This constituted a conversion rate of 34.53% (completed surveys as a proportion of all addresses issued).

When discussing fieldwork figures in this section, response rates are referred to in two different ways:

Household response rate – This is the percentage of households contacted as part of the survey in which at least one questionnaire was completed.

Individual response rate – This is the estimated response rate amongst all adults that were eligible to complete the survey.

In a survey of this nature, no information is known about the reason for non-response in each individual household. However, it can be assumed that 8% of the addresses in the sample were not residential and were therefore ineligible to complete the survey. 

The respondents in the final dataset were drawn from 119,772 households, which represented a household-level response rate of 25.29%.

The expected number of eligible individuals per residential address was 1.89, giving a total of 895,059 eligible adults sampled. The calculated individual-level response rate was 19.86% (online response rate: 17.50%; paper response rate: 2.35%) [footnote 8].
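
These rates can be reproduced directly from the figures quoted above; a worked sketch:

```python
issued_addresses = 514_757                   # invitation letters (Figure 4.3)
residential = issued_addresses * 0.92        # assume 8% ineligible -> ~473,576
eligible_adults = residential * 1.89         # ~895,059 eligible adults

household_rr  = 119_772 / residential        # 0.2529 -> 25.29%
individual_rr = 177_749 / eligible_adults    # 0.1986 -> 19.86%
online_rr     = 156_679 / eligible_adults    # 0.1750 -> 17.50%
paper_rr      = 21_070  / eligible_adults    # 0.0235 ->  2.35%
conversion    = 177_749 / issued_addresses   # 0.3453 -> 34.53%
```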

For the online survey, after the data quality check removals, the average and median completion times are shown in Figure 4.5.

Figure 4.5: Average and median completion times (minutes:seconds) of the online Community Life Survey 2024/25

|  | Quarter 1 | Quarter 2 |
|---|---|---|
| Average completion time | 26:54 | 26:47 |
| Median completion time | 24:49 | 24:41 |

4.6. Fieldwork error

During the quarter 2 fieldwork, an error was identified that had resulted in 10,941 respondents (in the final dataset after completion of quality-check removals) not being allocated to any of the three sub-sample questionnaire modules whilst completing the survey. Full details can be found in the quarter 2 technical report [footnote 8]. Figure 4.6 below summarises the number of cases impacted by this error.

Figure 4.6: Details of sub-sample module allocation and impact of fieldwork error on Community Life Survey 2024/25

Variables covered (listed across the three modules): RelMix, EthMix, FrndSat1, FrndSat2SR, Counton1, NBarr, NComfort1, NComfort2, NComfort3, STrustGen2, SatAsset, FUnHrs, FIndGpA, VolBen, VolUnPd, BVLon, VYFreq, VYStop, BVHelp, VBarr, IHlpHrs, QScreenA-E, IntUsage1, IntUsage2, PIfHow, PIfEas, GIntro1, GGroup, GivAmt, CausLN, LocPeopNew, LocAct, LocHow, LocMot2, LocWant, LocBarr1

|  | Sub-sample module 1 | Sub-sample module 2 | Sub-sample module 3 | Total |
|---|---|---|---|---|
| Respondents in cleaned data allocated to sub-sample module (n=) | 52,353 | 52,404 | 51,922 | 156,679 |
| Respondents completing sub-sample module (n=) | 48,840 | 48,756 | 48,142 | 145,738 |
| Respondents missing due to error (n=) | 3,513 | 3,648 | 3,780 | 10,941 |
| Proportion missing due to error (%) | 6.7% | 7.0% | 7.3% | 7.0% |

As a consequence, different weighting is required to interpret and analyse some of the modularised questions in the annual data. Please see the Weighting section below which covers this.

4.7. Incentive System

As a thank you for taking part, all respondents who completed the Community Life Survey received an incentive voucher worth £10.

Online incentives

Participants completing the survey online were provided with details of how to claim their e-voucher at the end of the survey and were directed to a website where they could select from a range of e-vouchers (including vouchers sent via email) or opt for a charitable donation.

In a small number of instances, respondents completing the online survey requested a physical voucher as a substitute for their e-voucher, typically due to a lack of confidence in using digital systems to select and redeem an e-voucher.

Paper incentives

Respondents who returned the paper questionnaire were also provided with a £10 shopping voucher. This voucher was sent in the post and could be used at a variety of high street stores.

5. Data processing

5.1. Data management

Due to the different structures of the online and paper questionnaires, data management was handled separately for each mode. Online questionnaire data was collected via the web script and, as such, was much more easily accessible. By contrast, paper questionnaires were scanned and converted into an accessible format.

For the final outputs, both sets of interview data were converted into IBM SPSS Statistics, with the online questionnaire structure as a base. The paper questionnaire data was converted to the same structure as the online data so that data from both sources could be combined into a single SPSS file.

5.2 Partial completes

Online respondents were able to exit the survey at any time, and while they could return to complete the survey at a later date, some chose not to do so. Equally, respondents completing on paper occasionally left part of the questionnaire blank, for example if they did not wish to answer a particular question or section of the questionnaire.

Partial data can still be useful, providing respondents have answered the substantive questions in the survey. These cases are referred to as usable partial interviews.

Survey responses were checked at several stages to ensure that only usable partial interviews were included. Upon receiving returned paper questionnaires, the booking-in team removed obviously blank ones. Following this, during data processing, rules were set for the paper and online surveys to ensure that respondents had provided sufficient data.

For the online survey, respondents had to reach the questions relating to their qualifications for their data to count as valid: either answering yes at the question ‘Degree’ or giving any answer at ‘Quals’. Paper data was judged complete if the respondent answered at least 50% of the questions, or reached and answered Q46, the last question of Section 11: ‘Local area involvement’.
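
A sketch of these completeness rules (the question names Degree, Quals and Q46 are from the text; the data structures are illustrative):

```python
def is_usable_partial(mode, answers, total_questions):
    """Return True if a partial response meets the usable-partial rules.
    `answers` maps question names to the responses given."""
    if mode == "online":
        # Online: reached the qualification questions, i.e. answered
        # 'yes' at Degree or gave any answer at Quals.
        return answers.get("Degree") == "yes" or "Quals" in answers
    # Paper: at least half of all questions answered, or the respondent
    # reached and answered Q46 (last question of Section 11).
    return len(answers) / total_questions >= 0.5 or "Q46" in answers
```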

5.3. Data quality checks

Initial checks were carried out to ensure that paper questionnaire data had been correctly scanned and converted to the online questionnaire data structure. For questions common to both questionnaires, the SPSS output was compared to check for any notable differences in distribution and data setup.

Once any structural issues had been corrected, further quality checks were carried out on the online and paper responses to identify and remove any invalid interviews. The specific checks were as follows:

  1. Selecting complete interviews: Any test serials in the dataset (used by researchers prior to survey launch) were removed. Cases were also removed if the respondent reached - but did not answer - the fraud declaration statement (online: QFraud; paper: Q99).

  2. Duplicate serials check: If any individual serial had been returned in the data multiple times, responses were examined to determine whether this was due to the same person completing multiple times or due to a processing error. If they were found to be valid interviews, a new unique serial number was created, and the data was included in the data file. If the interview was deemed to be a ‘true’ duplicate, the more complete or earlier interview was retained.

  3. Duplicate emails check: If multiple interviews used the same contact email address, responses were examined to determine if they were the same person or multiple people using the same email. If the interviews were found to be from the same person, only the most recent interview was retained. In these cases, online completes were prioritised over paper completes due to the higher data quality.

  4. Interview quality checks: A set of checks was undertaken on the data to confirm that the questionnaire had been completed in good faith and to a reasonable quality. Several parameters were used:

  • Interview length (online check only)

  • Number of people in household reported in interview(s) vs the total number of interviews from the household.

  • Whether key questions had valid answers.

  • Whether respondents had habitually selected the same response to all items in a grid question (commonly known as ‘flatlining’) where selecting the same responses would not make sense (see the sketch after this list).

  • How many multi-response questions were answered with only one option ticked.
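
A minimal sketch of the ‘flatlining’ check referenced above; the item threshold and data shapes are illustrative assumptions:

```python
def is_flatlining(grid_responses, min_items=4):
    """Flag a grid question where the respondent gave the identical
    response to every item; only applied to grids where uniform
    answers would not make sense."""
    return len(grid_responses) >= min_items and len(set(grid_responses)) == 1

# e.g. is_flatlining([3, 3, 3, 3, 3]) -> True; is_flatlining([3, 2, 3, 4, 3]) -> False
```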

Across the two quarters of fieldwork, 185,436 surveys were returned (see Figure 5.1 for a breakdown). At the conclusion of the respective quarterly quality-checking processes, 7,687 (4.1%) respondents were removed, leaving a total of 177,749 respondents in the final dataset. The breakdown of removals by mode is shown in Figure 5.1.

Figure 5.1: Breakdown of data quality check removals by mode - Community Life Survey 2024/25

|  | Quarter 1 | Quarter 2 | Total |
|---|---|---|---|
| Online surveys: received | 73,176 | 90,901 | 164,077 |
| Online surveys: removed | 3,242 | 4,156 | 7,398 |
| Online surveys: percentage removed | 4.4% | 4.6% | 4.5% |
| Paper surveys: received | 9,303 | 12,056 | 21,359 |
| Paper surveys: removed | 127 | 162 | 289 |
| Paper surveys: percentage removed | 1.4% | 1.4% | 1.4% |
| Total surveys: received | 82,479 | 102,957 | 185,436 |
| Total surveys: removed | 3,369 | 4,318 | 7,687 |
| Total surveys: percentage removed | 4.1% | 4.2% | 4.1% |
| Final dataset: online | 69,934 | 86,745 | 156,679 |
| Final dataset: paper | 9,176 | 11,894 | 21,070 |
| Final dataset: total | 79,110 | 98,639 | 177,749 |

5.4. Data checks and edits

Upon completion of the general quality checks described above, more detailed data checks were carried out to ensure that the correct questions had been answered according to questionnaire routing. This was generally correct for online completes, as routing is programmed into the scripting software, but for paper completes data edits were required.

There were two main types of data edit, both affecting the paper questionnaire data; a code sketch of these rules follows the list:

  1. Single-response question edits: If a paper questionnaire respondent had mistakenly answered a question that they weren’t supposed to, their response in the data was changed to “-1: Item not applicable”. If a paper questionnaire respondent had neglected to answer a question that they should have, they were assigned a response in the data of “-5: Not answered but should have (paper)”. Where the respondent had selected multiple answers, their response was changed in the data to “-6: Multi-selected for single response (paper)”.

  2. Multiple response question edits: If a paper questionnaire respondent had mistakenly answered a question that they weren’t supposed to, their response was set to “-1: Item not applicable”. If a paper questionnaire respondent had neglected to answer a question that they should have, they were assigned a response in the data of “-5: Not answered but should have (paper)” [footnote 9]. Where the respondent had selected both valid answers and an exclusive code such as “None of these”, any valid codes were retained, and the exclusive code response was set to “0”.
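
A sketch of these edit rules as they might be applied to scanned paper data; the codes -1, -5 and -6 are from the text, while the function shapes are illustrative:

```python
ITEM_NOT_APPLICABLE = -1  # answered a question they were routed past
NOT_ANSWERED_PAPER  = -5  # routed to the question but left it blank
MULTI_FOR_SINGLE    = -6  # several boxes ticked on a single-response item

def edit_single_response(routed_to_question, ticked_codes):
    """Return the edited value for one single-response paper item."""
    if not routed_to_question:
        return ITEM_NOT_APPLICABLE
    if not ticked_codes:
        return NOT_ANSWERED_PAPER
    if len(ticked_codes) > 1:
        return MULTI_FOR_SINGLE
    return ticked_codes[0]

def edit_exclusive_code(ticked_codes, exclusive={"None of these"}):
    """For multi-response items: if valid answers were ticked alongside an
    exclusive code, keep the valid answers and drop the exclusive code
    (i.e. set it to 0 in the data)."""
    valid = [c for c in ticked_codes if c not in exclusive]
    return valid if valid else list(ticked_codes)
```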

5.5. Coding

Post-interview coding was undertaken by members of Verian’s coding department, who coded the verbatim responses recorded for ‘other specify’ questions.

For all ‘other specify’ questions, data edits were made to move responses coded to “Other” to the correct response code where the answer could be back-coded to an existing response code.

As an example, please see the following question:

RETAILSAT

What are the reasons you are [very/fairly] satisfied with the shops and retailers available in your local area? 

  1. Easy to get to

  2. They have all the basic essentials I need

  3. There are a wide range of goods and services to choose from

  4. Reasonably priced

  5. Independent or locally run 

  6. Some other reason (please type in)

Don’t know

If a respondent selected “Some other reason” at this question and wrote text saying they were satisfied with the shops in their area because they are nearby, their response would be back-coded to “Easy to get to”.

Where “Other” responses could not be back-coded to an existing code, and where a particular response was mentioned by at least 2% of those answering the question, new codes were opened to reflect these responses where appropriate, and the relevant responses were coded to these new codes accordingly.

5.6. Data outputs

Once the checks were complete, a final SPSS data file was created that contained only valid interviews and edited data. From this dataset, a set of data tables and local authority maps were produced. The coverage and format of the data tables were agreed by DCMS and MHCLG, to cover key questions of interest. Quarterly tables cover national data only; local authority level data tables and maps have been published alongside the annual release.

5.7. Derived variables

A list of the main derived variables in the Community Life Survey 2024/25 dataset can be found in Appendix D.

The following geo-demographic variables were added to the dataset:

  • Region (formerly Government Office Region)

  • Urban/rural indicator

  • Percentage of households in the Ward headed by someone from a non-white ethnic minority group

  • Inner city PSU indicator

  • Police Force Area

  • ACORN classification

  • ONS ward classification

  • Health board

  • Primary Care Organisation

  • LSOA area

  • ONS district level classification

  • Output area classification

  • Indices of multiple deprivation quintile

  • Minority Ethnic Density

  • Index of Multiple Deprivation

  • Income deprivation for England

  • Employment deprivation for England

  • Health deprivation for England

  • Education, Skills and Training deprivation for England

  • Barriers to housing and services deprivation for England

  • Crime and disorder deprivation for England

  • Living and environment deprivation for England

6. Weighting

6.1. Weighting process

A three-step weighting process was used with each quarterly dataset to compensate for variation within the respondent sample with respect to both sampling probability and response probability:

  1. An address design weight was created equal to one divided by the sampling probability; this also served as the individual-level design weight because all resident adults could respond.

  2. The expected number of responses per address was modelled as a function of data available at the neighbourhood and address levels. The step two weight was equal to one divided by the predicted number of responses.

  3. The product of the first two steps was used as the input for the final step: calibration. The responding sample was calibrated to the 2024 Annual Population Survey (APS) with respect to (i) sex by age, (ii) educational level by age, (iii) ethnic group, (iv) housing tenure, (v) ITL1 region, (vi) employment status by age, and (vii) household size. A simplified sketch of this calibration step follows the list.
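
A simplified sketch of the calibration (raking) step, not Verian’s production implementation: the input weights are the product of steps 1 and 2, and each calibration dimension is raked in turn until the weighted margins converge.

```python
import numpy as np

def rake(input_weights, sample_categories, population_margins,
         max_iters=100, tol=1e-8):
    """Iterative proportional fitting over several calibration dimensions
    (e.g. sex by age, tenure, ITL1 region). `sample_categories[dim]` is a
    NumPy array of category codes per respondent; `population_margins[dim]`
    maps each code to its population total."""
    w = np.asarray(input_weights, dtype=float).copy()
    for _ in range(max_iters):
        max_adj = 0.0
        for dim, targets in population_margins.items():
            codes = sample_categories[dim]
            for code, pop_total in targets.items():
                mask = codes == code
                current = w[mask].sum()
                if current > 0:
                    factor = pop_total / current
                    w[mask] *= factor              # scale this margin to target
                    max_adj = max(max_adj, abs(factor - 1.0))
        if max_adj < tol:                          # all margins now match
            break
    return w
```

As noted below, raking only guarantees that the sample margins match the population margins; it does not correct for bias in the relationships between variables.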

The combined (annual) 2024-25 dataset was further calibrated to ensure that the sex/age distribution within each local authority matched that of the most recent (2023) mid-year population estimates published by ONS, with respect to: men aged 16-34, men aged 35-64, men aged 65+, women aged 16-34, women aged 35-64, and women aged 65+. The published mid-2023 population estimates were adjusted very slightly to ensure no conflict with the national sex/age population totals – based on the 2024 Annual Population Survey – that were also included in the calibration matrix.

An equivalent weight was also produced for the (majority) subset of respondents who completed the survey by web. This weight was needed because some items were included in the web questionnaire but not the paper questionnaire. 

For 2024-25 only, an additional weight was produced for the subset of respondents who completed the survey by web, including a sub-sample module. As noted in section 4.6, due to a temporary programming error, a little over 10,000 web respondents in quarter 2 were not allocated a sub-sample module. This weight should be used for sub-sample module analysis.

It should be noted that the weighting only corrects for observed bias (for the set of variables included in the weighting matrix) and there is a risk of unobserved bias. Furthermore, the raking algorithm used for the calibration only ensures that the sample margins match the population margins. There is no guarantee that the weights will correct for bias in the relationships between the variables.

The final weight variables in the dataset are:

  • ‘Annual_finalweight’ – to be used when analysing data available from both the web and paper questionnaires.

  • ‘Annual_finalweightweb’ – to be used when analysing data available only from the web questionnaire.

  • ‘Annual_finalweightwebfull’ – to be used when analysing sub-sample module data.

Grossing weight equivalents are also available (weights that sum to the population rather than the sample size).
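
The grossing weights are a simple rescaling of the analysis weights; a minimal sketch with an illustrative population total:

```python
import numpy as np

final_weight = np.array([1.2, 0.8, 1.0])   # illustrative calibrated weights
population_16_plus = 45_000_000            # illustrative England 16+ total

# Rescale so the weights sum to the population rather than the sample size.
grossing_weight = final_weight * (population_16_plus / final_weight.sum())
```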

  1. https://www.gov.uk/government/collections/community-life-survey--2

  2. https://www.gov.uk/government/publications/uk-shared-prosperity-fund-prospectus 

  3. https://www.gov.uk/government/statistics/community-life-survey-202324-annual-publication 

  4. https://www.gov.uk/government/statistics/community-life-survey-202324-annual-publication/community-life-survey-202324-technical-report#introduction 

  5. International Territorial Level (ITL) is a geocode standard for referencing the subdivisions of the United Kingdom for statistical purposes, used by the Office for National Statistics (ONS). Since 1 January 2021, the ONS has encouraged the use of ITL as a replacement to Nomenclature of Territorial Units for Statistics (NUTS), with lookups between NUTS and ITL maintained and published until 2023. 

  6. Using data from 2023-24, extensive modelling was carried out to determine the likely response level under various different potential data collection designs and as a function of data that can be attached to all sampled addresses: effectively Census and Census-derived data plus the CACI (modelled) household age structure data. The Census data was compacted into six ‘factor component’ scores that, between them, cover the majority of the between-neighbourhood (output area) variation in Census data. 

  7. https://www.gov.uk/government/statistics/community-life-survey-october-to-december-2024-quarterly-release/community-life-survey-october-to-december-2024-technical-note-appendix-b-questionnaire-changes 

  8. https://gov.uk/government/statistics/community-life-survey-january-to-march-2025-quarterly-publication/community-life-survey-january-to-march-2025-technical-note#fieldwork

  9. There was one exception to this rule. For the question Assets2 (“For each of the following, please indicate whether there is at least one within a 15–20-minute walk from your home, further away but still in your local area, or there is not one in your local area at all”), option K (“Place of worship for my faith or religion, such as a church, mosque, temple”) was treated differently. In the paper questionnaire, if a respondent did not provide an answer where they should have, rather than being coded to -5 they were instead coded to answer code 5: “Not applicable I do not have a religion/faith” (an answer code which was only available for this facility/amenity).