Official Statistics

Community Life Survey October - December 2023: Technical report

Published 8 May 2024

Applies to England

1. Introduction

1.1 Background to the survey

The Community Life Survey has been conducted by Verian (formerly Kantar Public) [footnote 1] on behalf of the Department for Culture, Media and Sport (DCMS) since 2012. The Community Life Survey is a nationally representative annual survey of adults (16+) in England that aims to track the latest trends and developments across areas that are key to encouraging social action and empowering communities. More information, historical data and reports can be found on gov.uk [footnote 2].

Verian has been commissioned to deliver the Community Life Survey for 2023/24 and 2024/25. These survey years have been commissioned by DCMS in partnership with the Department for Levelling Up, Housing and Communities (DLUHC), and employ a boosted sample approach (n=175,000) to enable production of reliable statistics at lower-tier local authority level, incorporating questions relating to pride in place and life chances. This aims to inform cross-government work on these issues, including DLUHC’s evaluation of the UK Shared Prosperity Fund. More information and supporting documents can be found on gov.uk [footnote 3], with findings to be published in due course.

The scope of the survey is to deliver a nationally representative sample of adults (aged 16 years and over) in England. The data collection model for the Community Life Survey is based on Address-Based Online Surveying (ABOS), a type of ‘push-to-web’ survey method. Respondents take part either online or by completing a paper questionnaire. 

In 2023/24 the sample consists of approximately 175,000 interviews across two quarters of fieldwork (October-December 2023, and January-March 2024). Condensing the survey into two quarters was necessary due to delays in the commissioning of the research, and the overarching needs of the Community Life Survey and connected research. Previous waves of both the Community Life Survey and the Participation Survey have been run across fewer than four quarters with no discernible impact on the time series data, so it was expected the same would apply here. Most of the time series questions ask the respondent for their recall over “the last 12 months”, which should have mitigated seasonality effects in responses.

This technical note relates to the Quarter 1 fieldwork, conducted between 30th October 2023 and 15th January 2024 [footnote 4].

1.2 Survey objectives

  • To provide robust, nationally representative data on behaviours and attitudes within communities that can be used to inform and direct policy and action in these areas.

  • To provide a key evidence source for policy makers in government, public bodies, voluntary, community and social enterprise (VCSE) sector organisations and other external stakeholders.

  • To underpin further research and debate on building stronger communities.

  • To support government understanding of pride in place and life chances, and to inform the evaluation of place-based interventions and assessment of levelling up missions [footnote 5].

In preparation for the main survey launching in October 2023, Verian undertook questionnaire development work on various elements of the new design, more details of which can be found in Chapter 3.

1.3 Survey design

The basic ABOS design is unchanged for 2023/24: a stratified random sample of addresses is drawn from the Royal Mail’s postcode address file (PAF) and an invitation letter is sent to each one, containing username(s) and password(s) plus the URL of the survey website. Sampled individuals can log on using this information and complete the survey as they might any other web survey. Once the questionnaire is complete, the specific username and password cannot be used again, ensuring data confidentiality from others with access to this information. 

The survey design included an alternative mode of completion in the form of a paper questionnaire. Initially this was available only on request, but up to two copies were included in the first or second reminder letter for a proportion of the sampled addresses. More details on this can be found in the Contact Procedures section.

Paper questionnaires ensure coverage of the offline population and are especially effective with sub-populations that respond to online surveys at lower-than-average levels. However, there are limitations in the use of paper questionnaires: 

  • The physical space available on paper for questions

  • The level of complexity that can be used in routing from one question or section to another

  • The length of time a person is willing to spend completing a paper questionnaire

  • The cost of administering a paper questionnaire compared to an online one 

  • The difficulty of incorporating a modular system of questions within a paper questionnaire design

For these reasons, the Community Life Survey uses paper questionnaires in a limited and targeted way, to optimise rather than maximise response.

2. Sampling

2.1 Sample design: addresses

The address sample design is intrinsically linked to the data collection design (see ‘Details of the data collection model’ below) and was designed to yield a respondent sample that is as representative as possible of the adult population within each of the 309 lower tier or unitary local authorities in England.

The design sought a minimum two-quarter respondent sample size of 500 within each local authority and 2,720 within each ITL2 region [footnote 6]. The actual targets varied between local authorities (from 500 to 2,675) and between ITL2 regions (from 2,720 to 12,090). This variation maximised the statistical efficiency of the national sample while also accommodating the local and regional sample size requirements. Although there were no specific targets per fieldwork quarter, the sample selection process was designed to ensure that the respondent sample size per local authority and ITL2 region was approximately the same per quarter.

As a first step, a stratified master sample of just over 906,000 addresses in England was drawn from the PAF ‘small user’ subframe. Before sampling, the PAF was disproportionately stratified by local authority (309 strata) and, within local authority, the PAF was sorted by (i) neighbourhood deprivation level (5 groups), (ii) super output area, and finally (iii) by postcode. This ensured that the master sample of addresses was geodemographically representative within each local authority.
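
As an illustration of this approach, the sketch below shows how sorting a frame and then selecting at a fixed interval yields an implicitly stratified sample. The field names and frame contents are invented for illustration and are not the project’s own variables.

```python
import random

# Toy address frame; field names and values are illustrative only.
frame = [{"la": random.choice(["Adur", "Barnet", "Calderdale"]),
          "imd_quintile": random.randint(1, 5),
          "soa": f"E0100{random.randint(100, 999)}",
          "postcode": f"AB{random.randint(1, 99)} {random.randint(1, 9)}CD"}
         for _ in range(10_000)]

# Sorting by local authority, deprivation group, super output area and
# postcode means a fixed-interval selection is implicitly stratified by
# all four, mirroring the master sample design described above.
frame.sort(key=lambda a: (a["la"], a["imd_quintile"], a["soa"], a["postcode"]))

def systematic_sample(sorted_frame, n):
    """Select every (len/n)-th record from a random start point."""
    interval = len(sorted_frame) / n
    start = random.uniform(0, interval)
    return [sorted_frame[int(start + i * interval)] for i in range(n)]

master_sample = systematic_sample(frame, 1_000)
```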

This master sample of addresses was then augmented by data supplier Consolidated Analysis Centers, Inc. (CACI). For each address in the master sample, CACI added the expected number of resident adults in each ten-year age band. Although this auxiliary data is inevitably imperfect, Verian’s investigations have shown that it is highly effective at identifying households that contain older people. Once this data was attached, the master sample was additionally coded with expected household age structure based on the CACI data: (i) all aged under 65; (ii) at least one aged 65 or older.

Verian drew a stratified random sample of 646,926 addresses from the master sample of >906,000 and systematically allocated them with equal probability to quarters 1 and 2 (323,463 addresses per quarter). 

The sampling probability of each address in the master sample was determined by the expected number of completed questionnaires from that address [footnote 7] given the selected data collection design: where this was lower than the average, the sampling probability was higher than the average, and vice versa. By doing this, Verian compensated for any (expected) variation in response rate that could not be fully ‘designed out’, given the constraints of budget and timescale. The underlying response assumptions were derived from empirical evidence obtained from the 2020-21 and 2021–22 Community Life Surveys and the 2022-23 Participation Survey (a survey with a similar design also commissioned by DCMS and carried out by Verian).
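
The sketch below illustrates this inverse-yield principle under stated assumptions: each address carries a modelled expected number of completes (the figures here are invented), and selection probabilities are scaled so that every frame address contributes roughly the same expected number of respondents, which keeps respondent-level selection probabilities approximately equal.

```python
# Sampling probabilities set inversely proportional to modelled yield.
# The yield figures below are invented for illustration.

def sampling_probabilities(expected_yields, target_addresses):
    inverse = [1.0 / y for y in expected_yields]
    scale = target_addresses / sum(inverse)
    return [min(1.0, scale * v) for v in inverse]

yields = [0.25, 0.35, 0.50, 0.30]   # modelled completes per address
probs = sampling_probabilities(yields, target_addresses=2)

# Each address now has (roughly) the same expected contribution of
# completed questionnaires: probability x yield is constant, except
# where the probability is capped at 1.
```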

After allocating the sample of 646,926 addresses to quarters 1 and 2, Verian then systematically distributed the quarter-specific samples to three equal-sized ‘replicates’, each with the same profile. The replicates were expected to be issued several weeks apart, to ensure that data collection was spread throughout the three-month period allocated to each quarter. 

These replicates were further subdivided into twenty-five equal-sized ‘batches’, each comprising a little over 4,300 addresses. This process of sample subdivision into small batches was intended to help manage fieldwork. The expectation was that only the first twenty batches within each replicate would be issued (that is, approximately 86,250 addresses), with the last five batches kept back in reserve. 

Verian’s plan was to review fieldwork outcomes at a local authority level (i) before the third replicate in quarter 1 was issued, (ii) before the first replicate in quarter 2 was issued, and finally (iii) before the third replicate in quarter 2 was issued. At each review point, Verian intended to recalculate the number of small batches to issue per local authority per subsequent replicate. This review process was designed to maximise the probability of achieving each of the local authority targets.

However, as a result of the delay in the commencement of fieldwork for quarter 1 until late October, review (i) was carried out after the third replicate was issued. This was because the contraction of fieldwork in quarter 1 did not allow enough time for accurate assessment of the impact of reminder mailings to the first and second replicates before the third was issued. Therefore, no revisions were made to the quarter 1 sample issue. In total, 258,770 addresses were issued for quarter 1.

2.2 Sample design: individuals within sampled addresses

All resident adults aged 16 or over were invited to complete the survey. In this way, the Community Life Survey avoided the complexity and risk of selection error associated with remote random sampling within households. 

However, for practical reasons, the number of logins provided in the invitation letter was limited. The number of logins varied between two and four, with this total adjusted in reminder letters to reflect household data provided by prior respondent(s). Addresses that CACI data predicted contained only one adult were allocated two logins; addresses predicted to contain two adults were allocated three logins; and other addresses were allocated four logins. The mean number of logins per address was 2.8. Paper questionnaires were available to those who were offline, not confident online, or unwilling to complete the survey online.
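
A minimal sketch of this login allocation rule, assuming the CACI-predicted adult count is available for each sampled address:

```python
def logins_for(predicted_adults: int) -> int:
    """Number of logins printed on the invitation letter, following the
    rule described above (predicted counts come from the CACI data)."""
    if predicted_adults <= 1:
        return 2
    if predicted_adults == 2:
        return 3
    return 4

assert [logins_for(n) for n in (1, 2, 3, 5)] == [2, 3, 4, 4]
```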

2.3 Details of the data collection model

Four different data collection designs were used for the Community Life Survey in 2023-24. Each has a code reference that shows the number of mailings and type of each mailing: push-to-web (W) or mailing with paper questionnaires (P). For example, ‘WWP’ means two push-to-web mailings and a third mailing with up to two paper questionnaires included alongside the web survey login information. In general, there was a two-week gap between mailings. For quarter 1, the allocation into the four data collection designs was as follows:

Data collection design   Allocation (%)   Allocation (n)
Quarter 1 total          100%             258,770
WW                       13%              34,234
WP                       3%               8,526
WWW                      67%              173,078
WWP                      17%              42,932

Only addresses coded by CACI as containing somebody aged 65+ could be allocated to the WP or WWP designs (i.e., receive a mailing with paper questionnaires included). Each of these ‘older household’ addresses had a 75% probability of being allocated to one of these designs. This targeted approach was based on historical data Verian has collected through other studies, which suggests that provision of paper questionnaires to all addresses can actually displace online responses in some areas. 

Otherwise, addresses were allocated to whichever data collection design was expected to yield a mean of at least 0.35 completed questionnaires: if the expected yield under a two-mailing design was under 0.35, the address was allocated to a three-mailing design instead.
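
The sketch below combines the two allocation rules above. It assumes the same 0.35 threshold governs the choice between two- and three-mailing variants within both the web-only and paper-inclusive tracks; `expected_yield` is a placeholder for the project’s response model.

```python
import random

def allocate_design(has_65_plus: bool, expected_yield) -> str:
    """Assign an address to WW/WWW/WP/WWP, per the rules in section 2.3.
    `expected_yield(design)` stands in for the modelled mean number of
    completed questionnaires under a given design."""
    if has_65_plus and random.random() < 0.75:
        # 'Older' households: 75% chance of a paper-inclusive design.
        return "WP" if expected_yield("WP") >= 0.35 else "WWP"
    return "WW" if expected_yield("WW") >= 0.35 else "WWW"
```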

3. Questionnaire

3.1 Questionnaire development

The online questionnaire was designed to take an average of 30 minutes to complete. A modular design was used with approximately three-quarters of the questionnaire made up of a core set of questions asked of the full sample, and the remaining questions split into three separate modules randomly allocated to a subset of the sample. Applying a modular design to the survey created space for the Pride in Place and Life Chances questions.
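
As a sketch of how such a modular allocation might work (the module names are invented; the real module contents are described in Appendix A):

```python
import random

MODULES = ["module_A", "module_B", "module_C"]

def assign_module(respondent_id: int) -> str:
    """Randomly allocate one of three modules; seeding on the respondent
    ID keeps the assignment reproducible across sessions."""
    return random.Random(respondent_id).choice(MODULES)

# Every respondent answers the core set plus one randomly allocated module.
sections = ["core", assign_module(12345)]
```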

The postal version of the questionnaire was designed, as far as possible, to be equivalent to the online version. However, there are some limitations to this, namely:

  • Space – How many questions can reasonably fit into a paper version of the questionnaire within printing limits

  • Time – Avoid overly burdening respondents and keep within a time limit to encourage response

  • Budget constraints – It was not possible to produce multiple versions of the paper questionnaire, so the modular content was removed

  • Complexity – The survey already contained variants in design related to contact methods and volumes, online and paper versions, and question modules (online only). Introducing different versions of the paper questionnaire was felt to be too challenging

The paper questionnaire includes all questions published as part of the annual statistical release, questions that are of critical importance to policy clients, the Pride in Place and Life Chances questions, and demographic questions used in analysis or weighting. The question wording used in both the online and paper versions is as closely matched as possible, and in total the paper questionnaire covers around 50% of the material in the online questionnaire. 

Copies of both the online and paper questionnaires can be found in Appendix B and C respectively.

3.2 Questionnaire changes

Substantial changes were made to the questionnaire for the 2023/24 Community Life Survey in order to incorporate the new content relating to Pride in Place and Life Chances, whilst remaining within the 30-minute target questionnaire length of the online survey. In addition, there were some amendments to existing CLS content, as required. DCMS and DLUHC consulted with internal colleagues and stakeholders to agree the additions and amendments required. Questions on the following topics of interest were added:

  • Proximity, frequency of use, and levels of satisfaction with local amenities.

  • Pride in Place, which included questions on respondents’ sense of belonging and pride in their local area, the role culture plays in choosing where to live, and the current arts and culture scene in their local area.

  • Anti-social behaviour – does it occur in a local area, what types of anti-social behaviour are being experienced, and whether these incidents are witnessed

  • Additional job-related questions including hours worked, proximity to work location(s), confidence in seeking work

  • General questions on life skills, life satisfaction, and financial situation     

  • Internet use overall and for particular activities; general screen time

The following question topics were removed from the questionnaire:

  • Importance of mixing with people from other backgrounds

  • Depth of information on charitable giving

  • Detailed follow-up on employment, business ownership, and reasons for not actively searching for employment

The paper questionnaire for 2023/24 differed from the online version by not including the following content (for reasons listed above):

  • Depth of information on members of household

  • Demographic profile of friendship groups

  • Availability of support networks

  • Detail on involvement in groups, clubs, or organisations

  • Detail on voluntary work and charitable giving

  • Detail on local area involvement

  • Availability of children’s and library services in local area

In some instances, it was necessary to explore what impact removing some measures from the paper questionnaire would have on the time series. An analysis was conducted to examine how estimates from 2021/22 differed when web-only weights were applied to them. For the majority of these, the differences were negligible (<0.5 percentage points). For three questions, FrndRel3, FrndRel4 and VolBen2, the differences were 1-3 percentage points. As a result, FrndRel1-4 were retained in the paper questionnaire to ensure continuity of the time series data. It was decided not to include VolBen2 in the paper survey as it only differed on one answer code and it is not a core question.

Additionally, consideration was given to the GSS harmonised social capital measures and how they differed from the existing equivalent CLS measures. Analysis of the two sets of measures found that there were some differences [footnote 8]. Accordingly, decisions were made as to which measures should retain the existing question and which should be updated to the GSS harmonised version. In all cases bar one, the existing question was retained as the analysis suggested changing it would impact the time series data. The exception was the question assessing the extent to which people can be trusted (STrustGen2), which was updated to the GSS harmonised version as it was felt this would not impact the time series.

Full details of the content of the 2023/24 online and paper questionnaires, including additions, amendments and removals, can be found in Appendix A.  

3.3 Cognitive testing

Given the extent of questionnaire changes in the 2023/24 Community Life Survey, and mindful of the constraints of the project timetable, Verian undertook two stages of cognitive testing in July and August 2023. Cognitive testing explores how participants understand, process, and respond to survey questions. This testing was primarily focused on the Pride in Place and Life Chances questions, and their alignment with existing CLS content, in addition to further testing of the understanding of ‘local area’.

A full report of the cognitive testing will be published as part of the 2023/24 annual technical report.

4. Fieldwork

4.1 Contact procedures

All selected addresses were sent an initial invitation letter containing the following information: 

  • A brief description of the survey

  • The URL of the survey website (used to access the online script)

  • A QR code that can be scanned to access the online survey

  • Login details for the required number of household members

  • An explanation that participants will receive a £10 shopping voucher for completing the survey

  • Information about how to contact Verian in case of any queries or to request a paper questionnaire

  • Frequently asked questions, and responses to these (including how to access the survey privacy notice)      

All partially or non-responding households were sent one reminder letter at the end of the second week of fieldwork. A further targeted second reminder letter was sent to households for which, based on Verian’s ABOS field data from previous studies, this was deemed likely to have the most significant impact (mainly deprived areas and addresses with a younger household structure). The information contained in the reminder letters was similar to the invitation letters, with slightly modified messaging to reflect each reminder stage.

Details of the provision of paper questionnaires, including their content and allocation, can be found above.

4.2 Fieldwork performance

In total, 103,307 respondents completed the survey during Quarter 1: 92,788 via the online survey and 10,519 by returning a paper questionnaire. Following data quality checks (see Chapter 5 for details), 5,863 (5.7%) respondents were removed (within the expected range of removals of 5-6%), leaving 97,444 respondents in the final dataset.

This constitutes a 39.9% conversion rate, a 28.9% household-level response rate, and an individual-level response rate of 22.9% (CAWI response rate = 19.4%; PAPI response rate = 2.24%) [footnote 9].
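
A minimal sketch applying the response-rate formulas set out in footnote 9; the deadwood (8%) and adults-per-household (1.89) assumptions come from that footnote. Results may differ from the published figures by a decimal point or so depending on rounding and the exact counts used.

```python
ISSUED = 258_770            # addresses issued in quarter 1
RESPONSES = 103_307         # completes before quality-check removals
RESIDENTIAL = 0.92          # 8% of PAF 'small user' addresses assumed non-residential
ADULTS_PER_HH = 1.89        # Labour Force Survey estimate

conversion_rate = RESPONSES / ISSUED                                # ~39.9%
individual_rr = RESPONSES / (ISSUED * RESIDENTIAL * ADULTS_PER_HH)  # ~23%

def household_rr(responding_households: int) -> float:
    """Household-level response rate; the responding-household count is
    not published here, so it is left as an input."""
    return responding_households / (ISSUED * RESIDENTIAL)
```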

For the online survey, after the data quality check removals, the average completion time was 27:52 and the median completion time was 25:44.

4.3 Incentive System

As a thank you for taking part, all respondents that completed the Community Life Survey were eligible to receive an incentive voucher worth £10. 

Online incentives

Participants completing the survey online were provided with details of how to claim their voucher at the end of the survey and were directed to the voucher website, where they could select from a range of different vouchers, including electronic shopping vouchers sent via email, credit with a payments service, or a charitable donation.

Paper incentives

Respondents who returned the paper questionnaire were also provided with a £10 shopping voucher. This voucher was sent in the post and could be used at a variety of high street stores.

5. Data processing

5.1 Data management

Due to the different structures of the online and paper questionnaires, data management was handled separately for each mode. Online questionnaire data was collected via the web script and, as such, was much more easily accessible. By contrast, paper questionnaires were scanned and converted into an accessible format.

For the final outputs, both sets of interview data were converted into IBM SPSS Statistics, with the online questionnaire structure as a base. The paper questionnaire data was converted to the same structure as the online data so that data from both sources could be combined into a single SPSS file.

5.2 Partial completes

Online respondents can exit the survey at any time and, while they can return to complete the survey at a later date, some chose not to do so. Equally, respondents completing on paper will occasionally leave part of the questionnaire blank, for example if they do not wish to answer a particular question or section of the questionnaire.

Partial data can still be useful, providing respondents have answered the substantive questions in the survey. These cases are referred to as usable partial interviews.

Survey responses were checked at several stages to ensure that only usable partial interviews were included. Upon receipt of returned paper questionnaires, the booking-in team removed obviously blank questionnaires. Following this, during data processing, rules were set for the paper and online surveys to ensure that respondents had provided sufficient data.

For the online survey, respondents had to reach a certain point in the questionnaire for their data to count as valid (questions relating to their qualifications). Paper data was judged complete if the respondent answered at least 50% of the questions or reached and answered Q46.
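
A sketch of these usable-partial rules, with an invented data layout; the marker questions (the qualifications questions online, Q46 on paper) are as described above.

```python
def online_usable(reached_qualifications: bool) -> bool:
    # Online data counts as a usable partial once the respondent has
    # reached the questions relating to their qualifications.
    return reached_qualifications

def paper_usable(answers: dict, routed_questions: list, q46_answered: bool) -> bool:
    # Paper data counts if Q46 was reached and answered, or if at least
    # half of the routed questions carry an answer.
    answered = sum(1 for q in routed_questions if answers.get(q) is not None)
    return q46_answered or answered >= 0.5 * len(routed_questions)
```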

5.3 Quality checking

Initial checks were carried out to ensure that paper questionnaire data had been correctly scanned and converted to the online questionnaire data structure. For questions common to both questionnaires, the SPSS output was compared to check for any notable differences in distribution and data setup.

Once any structural issues had been corrected, further quality checks were carried out to identify and remove any invalid interviews. The specific checks were as follows:

  1. Selecting complete interviews: Any test serials in the dataset (used by researchers prior to survey launch) were removed. Cases were also removed if the respondent reached - but did not answer - the fraud declaration statement (online: QFraud; paper: Q99).

  2. Duplicate serials check: If any individual serial had been returned in the data multiple times, responses were examined to determine whether this was due to the same person completing multiple times or due to a processing error. If they were found to be valid interviews, a new unique serial number was created, and the data was included in the data file. If the interview was deemed to be a ‘true’ duplicate, the more complete or earlier interview was retained.

  3. Duplicate emails check: If multiple interviews used the same contact email address, responses were examined to determine if they were the same person or multiple people using the same email. If the interviews were found to be from the same person, only the most recent interview was retained. In these cases, online completes were prioritised over paper completes due to the higher data quality.

  4. Interview quality checks: A set of checks was undertaken to confirm that the questionnaire was completed in good faith and to a reasonable quality. Several parameters were used:

  • Interview length (online check only)

  • Number of people in household reported in interview(s) vs number of total interviews from household.

  • Whether key questions have valid answers.

  • Whether respondents have habitually selected the same response to all items in a grid question (commonly known as ‘flatlining’) where selecting the same responses would not make sense; a simple version of this check is sketched after this list.

  • How many multi-response questions were answered with only one option ticked.
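
A simple version of the flatlining check referenced above, assuming grid answers are held as a list with None for unanswered items:

```python
def is_flatlined(grid_answers: list) -> bool:
    """Flag a grid where every answered item carries the same response."""
    given = [a for a in grid_answers if a is not None]
    return len(given) > 1 and len(set(given)) == 1

assert is_flatlined([3, 3, 3, 3])
assert not is_flatlined([3, 2, 3, 3])
```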

5.4 Data checks and edits

Upon completion of the general quality checks described above, more detailed data checks were carried out to ensure that the correct questions had been answered according to questionnaire routing. This was generally correct for online completes, as routing is programmed into the scripting software, but for paper completes, data edits were required.

There were two main types of data edit, both affecting the paper questionnaire data (the single-response rules are sketched after this list):

  1. Single-response question edits: If a paper questionnaire respondent had mistakenly answered a question that they weren’t supposed to, their response in the data was changed to “-1: Item not applicable”. If a paper questionnaire respondent had neglected to answer a question that they should have, they were assigned a response in the data of “-5: Not answered but should have (paper)”. Where the respondent had selected multiple answers, their response was changed in the data to “-6: Multi-selected for single response (paper)”.

  2. Multiple response question edits: If a paper questionnaire respondent had mistakenly answered a question that they weren’t supposed to, their response was set to “-1: Item not applicable”. If a paper questionnaire respondent had neglected to answer a question that they should have, they were assigned a response in the data of “-5[footnote 10]: Not answered but should have (paper)”. Where the respondent had selected both valid answers and an exclusive code such as “None of these”, any valid codes were retained, and the exclusive code response was set to “0”.
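
A sketch of the single-response edit rules; the sentinel codes are those described above, while the function signature and data layout are invented for illustration.

```python
NOT_APPLICABLE = -1    # answered in error where routing skipped the question
SHOULD_HAVE = -5       # routed to the question but left it blank
MULTI_SELECTED = -6    # ticked several boxes on a single-response question

def edit_single_response(routed: bool, ticks: list):
    """Apply the paper-questionnaire edit rules for a single-response item."""
    if not routed:
        return NOT_APPLICABLE
    if not ticks:
        return SHOULD_HAVE
    if len(ticks) > 1:
        return MULTI_SELECTED
    return ticks[0]
```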

5.5 Coding

Post-interview coding was undertaken by members of Verian’s coding department, who coded verbatim responses recorded for ‘other specify’ questions.

For all ‘other specify’ questions, data edits were made to move responses coded to “Other” to the correct response code if the answer could be back coded to an existing response code.

As an example, please see the following question:

RETAILSAT

What are the reasons you are [very/fairly] satisfied with the shops and retailers available in your local area? 

  1. Easy to get to

  2. They have all the basic essentials I need

  3. There are a wide range of goods and services to choose from

  4. Reasonably priced

  5. Independent or locally run 

  6. Some other reason (please type in)

Don’t know

If a respondent selected “Some other reason” at this question and wrote text that said they were satisfied with the shops in their area because they are accessible, in the data they would be back coded to the code “Easy to get to”.

Where “Other” responses could not be back coded to an existing code, and where a particular response was mentioned by at least 2% of those answering the question, new codes were opened to reflect these responses where appropriate, and the relevant responses were coded to these new codes accordingly.
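
A sketch of the back-coding step using the RETAILSAT example above; the keyword map is an invented stand-in for the coding team’s manual judgement.

```python
# Invented keyword -> existing-code map for illustration (code 1 is
# 'Easy to get to', code 6 is 'Some other reason').
BACKCODE_KEYWORDS = {"accessible": 1, "easy to reach": 1}

def back_code(verbatim: str, other_code: int = 6) -> int:
    text = verbatim.lower()
    for keyword, code in BACKCODE_KEYWORDS.items():
        if keyword in text:
            return code        # move the response to the existing code
    return other_code          # otherwise it stays at 'Some other reason'

assert back_code("The shops are very accessible") == 1
```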

5.6 Data outputs

Once the checks were complete, a final SPSS data file was created that only contained valid interviews and edited data. From this dataset, a set of data tables was produced. The data tables were agreed by DCMS and DLUHC, to cover key questions of interest. Quarterly tables cover national data only; local authority level data tables will be published as part of the annual 2023/24 reporting (due in autumn 2024).

5.7 Weighting

A three-step weighting process is used with each quarterly dataset, to compensate for variation within the respondent sample with respect to both sampling probability and response probability:

  1. An address design weight was created equal to one divided by the sampling probability; this also served as the individual-level design weight because all resident adults could respond.

  2. The expected number of responses per address was modelled as a function of data available at the neighbourhood and address levels. The step two weight was equal to one divided by the predicted number of responses.

  3. The product of the first two steps was used as the input for the final step: calibration. The responding sample was calibrated to the 2023 Annual Population Survey (APS) with respect to (i) sex by age, (ii) educational level by age, (iii) ethnic group, (iv) housing tenure, (v) ITL1 region, (vi) employment status by age, and (vii) household size. A minimal sketch of this step follows below.
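
The sketch below implements raking (iterative proportional fitting), using two invented margins in place of the seven APS dimensions listed above.

```python
import numpy as np

def rake(weights, groups, targets, iterations=50):
    """groups: one array of category labels per margin;
    targets: one dict of category -> population share per margin."""
    w = np.asarray(weights, dtype=float)
    for _ in range(iterations):
        for labels, target in zip(groups, targets):
            labels = np.asarray(labels)
            total = w.sum()
            for cat, share in target.items():
                mask = labels == cat
                cat_sum = w[mask].sum()
                if cat_sum > 0:
                    # Scale this category so its weighted share hits the target.
                    w[mask] *= (share * total) / cat_sum
    return w

# Toy example: calibrate four respondents to sex and age-band margins.
sex = ["F", "F", "M", "M"]
age = ["16-34", "35+", "16-34", "35+"]
w = rake([1.0, 1.0, 1.0, 1.0],
         groups=[sex, age],
         targets=[{"F": 0.51, "M": 0.49}, {"16-34": 0.30, "35+": 0.70}])
```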

The combined (two-quarter) 2023-24 dataset will be further calibrated to ensure that the sex/age distribution within each local authority broadly matches that of the most recent mid-year population estimates published by ONS.

An equivalent weight was also produced for the (majority) subset of respondents who completed the survey by web. This weight was needed because some items were included in the web questionnaire but not the paper questionnaire.

It should be noted that the weighting only corrects for observed bias (for the set of variables included in the weighting matrix) and there is a risk of unobserved bias. Furthermore, the raking algorithm used for the calibration only ensures that the sample margins match the population margins. There is no guarantee that the weights will correct for bias in the relationships between the variables.

The final weight variables in the dataset are:

  • ‘Finalweight’ – to be used when analysing data available from both the web and paper questionnaires.

  • ‘Finalweightweb’ – to be used when analysing data available only from the web questionnaire.

  1. Please note that, following a change in ownership in 2022, Kantar Public has traded as Verian since November 2023. As fieldwork had already commenced, all materials related to Quarter 1 of the Community Life Survey refer to ‘Kantar Public’. From Quarter 2, the materials will use ‘Verian’. For the purposes of this document, the organisation conducting the research is referred to as ‘Verian’.

  2. Community Life Survey: all releases 

  3. UK Shared Prosperity Fund: prospectus 

  4. As a result of delays in finalising the survey content, the start of Quarter 1 fieldwork was delayed until 30th October. A technical error with the online survey script allowed people to continue completing the survey until 15th January (planned closing date 2nd January). It was agreed to retain these completed surveys as incentive costs had been paid to the respondents. 

  5. Statement of Levelling Up Missions 

  6. International Territorial Level (ITL) is a geocode standard for referencing the subdivisions of the United Kingdom for statistical purposes, used by the Office for National Statistics (ONS). Since 1 January 2021, the ONS has encouraged the use of ITL as a replacement to Nomenclature of Territorial Units for Statistics (NUTS), with lookups between NUTS and ITL maintained and published until 2023. 

  7. Using data from 2020-22, extensive modelling was carried out to determine the likely response level under various different potential data collection designs and as a function of data that can be attached to all sampled addresses: effectively Census and Census-derived data plus the CACI (modelled) household age structure data. The Census data was compacted into six ‘factor component’ scores that, between them, cover the majority of the between-neighbourhood (output area) variation in Census data. 

  8. This analysis is written up in Appendix E of the CLS 2021/22 Technical Report appendices document (from page 100), which can be found on the main 2021/22 page.

  9. Response rates (RR) were calculated via the standard ABOS method. An estimated 8% of ‘small user’ PAF addresses in England are assumed to be non-residential (derived from interviewer administered surveys). The average number of adults aged 16 or over per residential household, based on the Labour Force Survey, is 1.89. Thus, the response rate formula: Household RR = number of responding households / (number of issued addresses×0.92); Individual RR = number of responses / (number of issued addresses×0.92×1.89). The conversion rate is the ratio of the number of responses to the number of issued addresses.   

  10. There was one exception to this rule. For the question Assets2: “For each of the following, please indicate whether there is at least one within a 15–20-minute walk from your home, further away but still in your local area, or there is not one in your local area at all”, option K: “Place of worship for my faith or religion, such as a church, mosque, temple” was treated differently. In the Paper questionnaire, if a respondent didn’t provide an answer where they should have, rather than being coded to a -5, they were instead coded into answer code 5: “Not applicable I do not have a religion/faith” (an answer code which was only available for this facility).