DESNZ Public Attitudes Tracker: technical report, Winter 2024 to Summer 2025 (accessible webpage)
Published 15 December 2025
Summary
The DESNZ Public Attitudes Tracker (PAT) survey measures public awareness, attitudes and behaviours relating to topics such as climate change, net zero, energy sources, energy infrastructure, and bills.
Due to departmental restructuring in February 2023, responsibility for the survey switched from the Department for Business, Energy and Industrial Strategy (BEIS) to the Department for Energy Security and Net Zero (DESNZ). From Summer 2023, the survey moved from a quarterly to a triannual design with waves conducted every Spring, Summer and Winter.
This technical report covers methodological information about the three triannual PAT survey waves completed in Winter 2024, Spring 2025 and Summer 2025.
The report includes information on the:
- Data collection model
- Sampling approach
- Questionnaire structure and development process
- Fieldwork method and performance
- Data processing approach
- Weighting design
- Reporting outputs
- Changes between waves
Survey objectives
The DESNZ Public Attitudes Tracker helps build a better understanding of public awareness, attitudes and behaviours relating to DESNZ policies in order to provide robust and reliable evidence for policy development, and to see how these measures shift over time. Data is collected from a representative sample of the UK population, so that the results fairly represent the views of the wider population.
The main objectives of the PAT are:
- To provide departments with attitudinal data on DESNZ priorities and policies, such as Net Zero
- To capture public attitudes, behaviours, and awareness and to monitor whether and how these change over time
- To understand how energy and climate policies affect the UK population and different demographic groups
- To provide robust evidence for early policy development and emerging energy concepts
Understanding public attitudes and awareness is essential in developing effective and targeted policies. Findings from this work help DESNZ stay abreast of where the public are in relation to the Department’s priorities and can serve a high-level evaluative and communication purpose. Owning public attitudes data also allows the Department to respond effectively to research published by external stakeholders.
Background
This technical report covers three waves of the Public Attitudes Tracker: Winter 2024 (7 November to 12 December 2024), Spring 2025 (17 March to 22 April 2025) and Summer 2025 (8 July to 13 August 2025). These three waves build upon the previous eleven waves conducted in the new PAT series delivered by the research agency Verian[footnote 1]. A brief explanation of the differences between the previous PAT series and the new series is provided below.
Previous tracker survey series
The PAT began in March 2012 within the Department of Energy and Climate Change (DECC), which became part of the Department for Business, Energy and Industrial Strategy (BEIS) in 2016. The survey, which was conducted quarterly, ran for 37 waves from 2012 to 2021. Until March 2020, data was gathered through in-home interviews on Kantar Public UK’s face-to-face omnibus, using random location quota sampling.
With the onset of Covid-19, the methodology shifted to Kantar Public’s online omnibus from March 2020 to March 2021, making direct comparisons between the face-to-face and online data infeasible[footnote 2]. The online panel was adopted as an interim methodology, given its limitations in terms of sample representativeness and potential panel conditioning. More detail can be found in the Autumn 2022 to Summer 2023 technical report.
New tracker survey series
In Summer 2021, BEIS recommissioned the survey with the aim of creating a new time series based on a methodology that would allow more robust tracking of measures over the longer term. This was in the context of continued uncertainty about the feasibility of face-to-face data collection.
The new survey series, beginning in Autumn 2021, uses Address Based Online Surveying (ABOS), a cost-effective method of surveying the general population using random sampling techniques. ABOS is a ‘push to web’ methodology where the primary method of data collection is online, but respondents are also able to complete a paper version of the questionnaire which enables participation among the offline population. Full details of the ABOS methodology are covered in the section ‘Details of the data collection model’.
Comparisons with previous tracker series
It should be noted that changes in methodology can lead to both selection effects (that is, differences due to the different sampling methods employed) and measurement effects (that is, differences due to the different survey modes). Although attempts have been made to reduce the selection effects between surveys, the results from the new time series spanning the fourteen waves from Autumn 2021 to Summer 2025 should not be directly compared with previous waves, where data was collected either face-to-face (waves 1 to 33) or via an online panel (waves 34 to 37)[footnote 3].
When it comes to measurement effects, differences in results could be caused by a number of factors[footnote 4]. Measurement effects cannot be ameliorated by weighting, although it is sometimes possible to estimate their direction and scale and (at least partially) account for them in analysis.
Results from the Autumn 2021 to Summer 2025 surveys are comparable with one another. While it is possible that the switch from BEIS to DESNZ branding from the Spring 2023 wave onwards could have had some impact on the trends recorded by the survey series, it appears that any such effects are minimal. The ‘Questions added and removed’ section outlines minor differences in the survey in each of the three most recent PAT survey waves, but these are not of a magnitude which would undermine the new time series.
Interpretation of findings
In the published reports for the new PAT series, differences between groups are reported at the 95% confidence level (that is, the difference is statistically significant at the 0.05 level). Further information about significance testing is provided later in this technical report. For definitions of key research and statistical terms, refer to ‘Appendix C – Research and statistical term definitions’.
It should be noted that there may be volatility in regional results due to a low base size for each region in isolation. This also applies to other variables that have low base sizes.
Details of the data collection model
Address Based Online Surveying (ABOS) is a type of ‘push-to-web’ survey method.
The basic ABOS design is simple: a stratified random sample of addresses is drawn from the Royal Mail’s Postcode Address File and an invitation letter is sent to each one, containing username(s) and password(s) plus the URL of the survey website. Up to four individuals per household can log on using this information and complete the survey as they might any other web survey. Once the questionnaire is complete, the specific username and password cannot be used again, ensuring that completed responses remain confidential from others with access to the login information.
It is usual for at least one reminder to be sent to each sampled address and it is also usual for an alternative mode (usually a paper questionnaire) to be offered to those who need it or would prefer it. It is typical for this alternative mode to be available only ‘on request’ at first. However, after nonresponse to the web survey invitation, this alternative mode may be given more prominence.
Paper questionnaires ensure coverage of the offline population and are especially effective with sub-populations that respond to online surveys at lower-than-average levels. However, paper questionnaires have measurement limitations that constrain the design of the online questionnaire and also add considerably to overall cost. For the DESNZ PAT, paper questionnaires are used in a limited and targeted way, to optimise rather than maximise response.
Sampling
Sample design: addresses
The address sample design was intrinsically linked to the data collection design (see the section ‘Contact procedures’) and was designed to yield a respondent sample that is representative with respect to geography, neighbourhood deprivation level, and age group. This approach limits the role of weights in the production of unbiased survey estimates, narrowing confidence intervals compared with other designs.
Multiple master samples were drawn to cover different combinations of waves. The first master sample file covered the Winter 2024 wave, while the second covered the Spring 2025 wave. The third sample file covered the Summer 2025 and the Winter 2025 waves. Response expectations for the Spring 2025 and Summer 2025 waves were revised using response data from the Winter 2024 wave.
The principles underpinning the sample design remained the same throughout and are described below.
First, a stratified master sample of addresses in the UK was drawn from the Postcode Address File (PAF) ‘small user’ subframe. Before sampling, the PAF was stratified by International Territorial Level 1 (ITL1) region (12 strata) and, within region, by neighbourhood deprivation level (5 strata). A total of 60 strata were constructed in this way. Furthermore, within each of the 60 strata, the PAF was sorted by (i) local authority, (ii) super output area, and finally (iii) by postcode. This ensured that the master sample of addresses was geographically representative within each stratum.
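To illustrate how the explicit strata and within-stratum geographic sorting combine, the sketch below pairs proportionate allocation with a systematic draw from the sorted frame. This is an illustrative reconstruction, not Verian’s production sampling code; the field names (`region`, `deprivation_quintile`, `local_authority`, `soa`, `postcode`) are assumptions.

```python
import random

def draw_master_sample(paf, sample_size):
    # Explicit stratification: ITL1 region (12) x deprivation quintile (5)
    strata = {}
    for addr in paf:
        key = (addr["region"], addr["deprivation_quintile"])
        strata.setdefault(key, []).append(addr)

    sample = []
    for addresses in strata.values():
        # Implicit stratification: sorting by local authority, super output
        # area and postcode spreads a systematic draw evenly across geography
        addresses.sort(key=lambda a: (a["local_authority"], a["soa"], a["postcode"]))
        n = round(sample_size * len(addresses) / len(paf))  # proportionate share
        if n == 0:
            continue
        interval = len(addresses) / n
        start = random.uniform(0, interval)  # random start, fixed skip interval
        sample.extend(addresses[int(start + i * interval)] for i in range(n))
    return sample
```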
Each master sample of addresses was augmented by data supplier CACI[footnote 5]. For each address in the master sample, CACI added the expected number of resident adults in each ten-year age band. Although this auxiliary data will have been imperfect, Verian’s investigations have shown that it is reasonably effective at identifying households that are mostly young or mostly old. Once this data was attached, the master sample was additionally stratified by expected household age structure based on the CACI data: (i) all aged 35 or younger (14% of the total); (ii) all aged 65 or older (22% of the total); (iii) all other addresses (64% of the total).
From each master sample, Verian drew stratified random sub-samples to cover the PAT wave(s) allocated to it. One in five of these addresses was allocated to a reserve pool, while the remaining addresses in each master sample that were not assigned to any wave formed a wider reserve pool. The conditional sampling probability in each stratum was varied to compensate for (expected) residual variation in response rate that could not be ‘designed out’, given the constraints of budget and timescale. The underlying assumptions for this procedure were updated wave by wave as evidence accumulated.
In total, 66,689 addresses were issued across the three waves covered by this report: 17,604 in Winter 2024 (with an additional reserve sample of 4,399), 22,048 in Spring 2025, and 22,638 in Summer 2025. For more information on fieldwork see ‘Fieldwork numbers and response rates’.
Table 1 shows the issued sample structure with respect to the major strata, combining all three surveys together[footnote 6].
Table 1: Address issue by area deprivation and household age structure: Winter 2024, Spring and Summer 2025 surveys
| Expected household age structure | Most deprived | 2nd | 3rd | 4th | Least deprived |
|---|---|---|---|---|---|
| All <=35 | 4,013 | 3,620 | 2,719 | 1,880 | 1,420 |
| Other | 10,120 | 9,434 | 7,750 | 8,409 | 7,578 |
| All >=65 | 1,851 | 1,924 | 2,088 | 1,974 | 1,909 |
Sample design: individuals within sampled addresses
All resident adults aged 16 and over were invited to complete the survey. In this way, the PAT avoided the complexity and risk of selection error associated with remote random sampling within households.
However, for practical reasons, the number of logins provided in the invitation letter was limited. The number of logins varied between two and four, with this total adjusted in reminder letters to reflect household data provided by prior respondent(s). Addresses that CACI data predicted contained only one adult were allocated two logins; addresses predicted to contain two adults were allocated three logins; and other addresses were allocated four logins. The majority of addresses were given either two or three logins. Paper questionnaires were available on request to those who are offline, not confident online, or unwilling to complete the survey this way. Furthermore, some addresses were sent a paper questionnaire at the initial point of contact – further details are provided in the ‘Contact procedures’ section.
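As a minimal sketch, the login allocation rule above can be written as a small function; `predicted_adults` is a hypothetical name for the CACI-derived estimate of resident adults, not a production variable.

```python
def logins_for_address(predicted_adults: int) -> int:
    # Predicted single-adult households receive a spare login;
    # larger households are capped at four logins
    if predicted_adults == 1:
        return 2
    if predicted_adults == 2:
        return 3
    return 4  # all other addresses
```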
Questionnaire
Questionnaire design
The starting point for developing questions at each wave was to review previous questions from the new PAT survey series which ran from Autumn 2021 to Summer 2024, maintaining consistency unless issues with the previous questions had been identified or the policy context had changed. New questions were also developed to address any new policy priorities.
In the current PAT survey series, all questions need to be designed for mixed-mode surveying, with questions suitable for both online and paper-based presentation. The main considerations when designing questions are set out below.
Mixed mode adaptation
The aim is to ensure that questions are presented consistently across modes to avoid mode effects and to ensure that data collected from the two modes can be merged. The starting principle is to design questions to be ‘unimodal’, that is to use a standard question format across modes. However, in some cases, and especially where the question format was non-standard, we took an ‘optimode’ approach to designing questions. This refers to a more flexible approach where the design is optimised to suit mode, but also ensuring consistency in outputs. The documented questionnaires indicate where question formatting or routing differs by mode.
The main mode-based considerations were as follows:
- Question order and routing were aligned between modes by ordering questions in a way which provided simple navigation on paper. Routing instructions were added explicitly on the paper version. However, in some cases, the filter was widened for postal respondents (for example widened to ‘ask all’ with an added ‘not applicable’ option) which avoided the need for complex visual routing. Where this occurred, data editing was later applied to ensure equivalence across modes.
- Grid-style questions often required different presentation by mode, with paper-based questions set up as more traditional static grids, while online these were presented more dynamically to better suit navigation on laptop, tablet and smartphone screens.
- Where a question required a long list (for example more than 12 items), this was retained as a long list on paper, but for online this was split into two or more lists to better suit web-based presentation.
- All response lists were presented in a fixed order (as opposed to randomised or rotated) to ensure mode equivalence for online and paper.
Use of scales
Where scales are used across different items in the questionnaires (for example, 5-point scales for knowledge/awareness, agree/disagree and support/oppose) these are standardised to ensure consistent presentation throughout.
Demographic questions
Wherever possible, these were based on ONS harmonised[footnote 7] versions of questions at the time of setting up the first wave of the new survey series.
Cognitive testing
Cognitive interviewing helps to identify problems in question wording and any words or phrases that are open to misunderstanding or misinterpretation. It does this through assessing the thought processes that respondents go through when trying to answer a question.
Cognitive testing was used to test and refine proposed new questions before adding them to the tracker. In some cases, cognitive testing was used to retest existing questions to check they still made sense or if they required updated answer codes. Cognitive testing was conducted in advance of each wave.
The cognitive testing ahead of the Winter 2024 wave included questions on Great British Energy, community-owned local energy projects, the Clean Power 2030 plan and electricity network infrastructure.
The cognitive testing ahead of the Spring 2025 wave included questions on the UK’s Net Zero strategy, low-carbon heating systems, the Clean Power 2030 plan, flexible energy reward schemes, and energy bills.
The cognitive testing in advance of the Summer 2025 wave included questions on Great British Energy, renewable energy infrastructure and community benefits, main heating systems in the home, energy security and changes made to the home to use energy more efficiently.
Each round of cognitive testing involved 10 interviews with adults aged 16+ spread across relevant demographics such as age, gender, region, education level, and tenure. An additional demographic quota was added for the cognitive testing in advance of the Summer 2025 wave to include participants from urban and rural areas. Interviews were carried out by members of the project team and other researchers trained in cognitive testing techniques.
Questionnaire structure
As far as possible at each wave, repeated questions are included with a similar placement, and with a similar preceding context, to minimise context effects. Triannual (formerly quarterly) questions are always asked at the beginning of the survey (after the opening demographics) to ensure that these are not impacted by other questions which may affect knowledge or attitudes towards these key topics.
A list of survey topics, and the waves in which they were included, is provided in ‘Summary of questions included in the DESNZ/BEIS Public Attitudes Tracker since Autumn 2021’. Questions are broken down by theme, in line with the coverage in each topic-based report.
The full questionnaires from each survey wave are published alongside the survey results for each wave.
Fieldwork
Contact procedures
All sampled addresses are initially sent a letter inviting them to take part in the survey. Letters are sent by 2nd class franked mail in a white C5 window envelope. The envelope has an ‘On His Majesty’s Service’ logo printed on it. As discussed further below, some envelopes also include paper questionnaires, giving respondents the option of completing the survey either online or by filling in the paper questionnaire.
The number of online survey logins provided to each address varied between two and four. Each advance letter included a description of the survey content, which varied between waves as follows:
Each wave covers a range of topics that will inform key decisions made by the government and other public sector organisations:
Winter 2024: Great British Energy, nuclear energy, heating in the home and low carbon heating systems, energy tariffs, electricity networks and use, and climate change.
Spring 2025: Climate change, energy sources, renewables, nuclear energy, energy bills, Small Modular Reactors and new technologies.
Summer 2025: Climate change, net zero, energy bills, use of appliances at home, energy infrastructure.
The letter also contained the following information:
- The URL of the survey website (https://www.patsurvey.co.uk) and details of how to log in to the survey
- A QR code that can be scanned to access the online survey
- Log-in details for the required number of household members (up to four)
- An explanation that participants will receive a £5 gift voucher
- Information about how to contact Verian in case of any queries
- The reverse of the letter featured responses to a series of Frequently Asked Questions[footnote 8]
- Those whose envelopes do not include paper questionnaire(s) are told that they may request paper questionnaires
- Those whose envelopes do include postal questionnaire(s) are told that they may either complete the survey online or by filling in the enclosed paper questionnaire
A privacy notice is also provided for those whose envelopes include a postal questionnaire. For those completing the survey online, the privacy notice can be found by navigating to https://www.patsurvey.co.uk/, clicking ‘Click here to complete the survey’ and viewing the ‘Privacy policy’ hyperlink at the bottom of the page.
For details on past changes to the invitation letters, refer to the Technical Report for Autumn 2022 to Summer 2023 (Waves 5 to 8), under the section ‘Refinements to the invitation letters’ in the Appendix.
An example of an invitation letter sent out in the Summer 2025 wave can be found published alongside this document.
Table 2 summarises the contact design within each stratum, showing the number of mailings and type of each mailing: push-to-web (W) or mailing with paper questionnaires included alongside the web survey login information (P). For example, ‘WP’ means an initial push-to-web mailing without any paper questionnaires followed by a second mailing with paper questionnaires included alongside the web survey login information.
The five-week timescale of each wave of the PAT – as well as the available budget – limits the maximum number of mailings to each address to two, a fortnight apart. There was also a limit on the number of mailings that included a paper questionnaire alternative. Paper questionnaires were included in one of the mailings to sampled addresses where the CACI data indicated that every resident would be aged 65 or older. These addresses comprised 13% of the sampled total.
Table 2: Data collection design by stratum (Area deprivation quintile group and Expected household age structure)
| Expected household age structure | Most deprived | 2nd | 3rd | 4th | Least deprived |
|---|---|---|---|---|---|
| All <=35 | WW | WW | WW | WW | WW |
| Other | WW | WW | WW | WW | WW |
| All >=65 | WP | P | P | P | P |
A weaker than expected response in the Winter 2024 wave necessitated some revision to the release of a small reserve sample. While the initially issued (‘main’) sample and the first reserve sample followed the original plan (see Table 2), the second reserve sample required slight adjustments, as outlined in Table 2b. A small number of households scheduled to receive paper questionnaires under the contact design (n = 117) were excluded from this reserve because there was insufficient time to post, complete and return the questionnaires before the survey deadline.
Table 2b: Data collection design by stratum (Winter 2024, second reserve sample issue, n = 4,399)
| Expected household age structure | Most deprived | 2nd | 3rd | 4th | Least deprived |
|---|---|---|---|---|---|
| All <=35 | W | W | W | W | W |
| Other | W | W | W | W | W |
| All >=65 | W | W | W | W | W |
Fieldwork performance
Fieldwork dates
Fieldwork for each wave of the survey is run over a period of approximately five weeks. Table 3 summarises the specific fieldwork dates for each survey wave.
Table 3: Fieldwork dates
| Wave | Fieldwork dates |
|---|---|
| Winter 2024 | 7 November to 12 December 2024 |
| Spring 2025 | 17 March to 22 April 2025 |
| Summer 2025 | 8 July to 13 August 2025 |
Fieldwork numbers and response rates
A target of 3,250 completed surveys is set for each wave of the PAT. This is sufficient for all whole-population analyses and for most single-dimension subpopulation analyses, although there remains some variability in response rate from wave to wave. Table 4 shows the maximum confidence intervals for direct estimates based on a sample of 3,250, assuming a design factor (deft) of 1.2, the average for waves 1 to 8 of the series.
Table 4: Maximum confidence intervals for % estimates based on a sample of 3,250
| Subgroup (share of total sample) | 3,250 per wave (deft = 1.2) |
|---|---|
| 100% (top level) | +/-2.1% pts |
| 50% (e.g., by sex) | +/-2.9% pts |
| 20% (e.g., by age group) | +/-4.6% pts |
| 10% (e.g., by sex/age group) | +/-6.5% pts |
| 5% (rare subpopulations) | +/-9.2% pts |
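The Table 4 figures can be reproduced with the standard maximum confidence interval for a proportion (p = 0.5), inflated by the design factor: CI = deft × 1.96 × √(0.25 / n). The short sketch below is a worked check, not part of the published methodology.

```python
from math import sqrt

def max_ci(subgroup_share, total_n=3_250, deft=1.2):
    n = total_n * subgroup_share          # subgroup base size
    return deft * 1.96 * sqrt(0.25 / n)   # half-width of the 95% interval

for share in (1.0, 0.5, 0.2, 0.1, 0.05):
    print(f"{share:.0%} of sample: +/-{max_ci(share):.1%} pts")
# 100%: +/-2.1% pts, 50%: +/-2.9, 20%: +/-4.6, 10%: +/-6.5, 5%: +/-9.2,
# matching the Table 4 values
```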
Both the Spring and Summer 2025 waves exceeded their fieldwork targets, with Spring surpassing the target by 5% and Summer by 9%. In contrast, the Winter 2024 wave achieved 99% of its target sample size.
Table 5 summarises the sample sizes by data collection method at each wave.
Table 5: Unweighted sample sizes by data collection method
| Wave | Total unweighted sample sizes (adults aged 16+) | CAWI completes | CAWI completes as percentage of total wave completes | Paper completes | Paper completes as percentage of total wave completes |
|---|---|---|---|---|---|
| Winter 2024 | 3,214 | 2,742 | 85% | 472 | 15% |
| Spring 2025 | 3,414 | 2,882 | 84% | 532 | 16% |
| Summer 2025 | 3,531 | 2,654 | 75% | 877 | 25%[footnote 9] |
| Total | 10,159 | 8,278 | 81% | 1,881 | 19% |
In total, there were 10,159 respondents across all three waves, a conversion rate (responses/issued addresses) of 15%.
This can be converted into an individual-level standardised response rate of 8.7% if it is assumed that (i) 92% of sampled addresses are residential, and (ii) an average of 1.89 adults live in the residential addresses. These assumptions are well-evidenced in general but not known with certainty for the particular sample that was drawn.
Using the same assumptions, the standardised household response rate (at least one response) was 12.6%. On average, 1.41 responses were received from each responding household.
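As a worked check, the sketch below reconstructs the conversion rate and the individual-level standardised response rate from the two stated assumptions; small differences from the published 8.7% reflect rounding at intermediate steps.

```python
issued_addresses = 66_689
responses = 10_159

residential = issued_addresses * 0.92      # assumption (i)
eligible_adults = residential * 1.89       # assumption (ii): ~115,960 adults

print(f"Conversion rate: {responses / issued_addresses:.1%}")          # 15.2%
print(f"Individual response rate: {responses / eligible_adults:.2%}")  # 8.76%
```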
Incentives
Each respondent who completes the survey receives a £5 gift voucher incentive. Those who complete a postal questionnaire are mailed a £5 Love2shop voucher. Respondents who complete the survey online are able to claim their voucher via the online incentives platform, serviced by Merit (a third party e-voucher provider), which allows respondents to choose from a range of vouchers.
Survey length
Table 6 shows the average (median) time taken to complete the survey in each wave, based on those completing the survey online. Timings for those completing the paper version of the survey are not available – however, the questionnaire content for both data collection methods is largely mirrored and completion lengths are likely to be broadly similar.
Table 6: Median length of each survey wave for those completing online
| Wave | Median survey length |
|---|---|
| Winter 2024 | 15 minutes and 21 seconds |
| Spring 2025 | 15 minutes and 17 seconds |
| Summer 2025 | 16 minutes and 18 seconds |
Response burden
The Government Statistical Service (GSS) has a policy of monitoring and reducing the statistical survey burden on participants where possible; the burden imposed should be proportionate to the benefits arising from the use of the statistics.
Response burden is measured by multiplying the number of responses to the survey in each wave by the median time spent completing the survey in that wave (Table 6).
The response burden of the latest three waves, based on responses which fulfilled the quality check standards, was estimated at 2,651 hours. This assumes that the median survey completion time for postal questionnaires was the same as the median completion time for online questionnaires. This is a reduction compared with the three waves covered in the previous technical report (Winter 2023, Spring and Summer 2024), when the total response burden was estimated at 2,824 hours.
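A worked reconstruction of this estimate, multiplying each wave’s valid completes (Table 5) by its median completion time (Table 6) and assuming paper and online completion times are equal, as stated above:

```python
waves = {
    "Winter 2024": (3_214, 15 + 21 / 60),  # valid completes, median minutes
    "Spring 2025": (3_414, 15 + 17 / 60),
    "Summer 2025": (3_531, 16 + 18 / 60),
}
total_minutes = sum(n * minutes for n, minutes in waves.values())
print(f"Estimated burden: {total_minutes / 60:,.0f} hours")  # 2,651 hours
```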
Data processing
Data management
Due to the different structures of the online and paper questionnaires, data management was handled separately for each mode. Online questionnaire data were collected via the web script and, as such, were much more easily accessible. By contrast, paper questionnaires were scanned and converted into an accessible format.
For the final outputs, both sets of survey data were converted into IBM SPSS Statistics, with the online questionnaire structure as a base. The paper questionnaire data was converted to the same structure as the online data so that data from both sources could be combined into a single SPSS file.
Quality checking
Initial checks were carried out to ensure that paper questionnaire data had been correctly scanned and converted to the online questionnaire data structure.
Once any structural issues had been corrected, further quality checks were carried out to identify and remove any invalid surveys. To do this, a range of ‘potential invalid survey’ flags were created and applied to the data.
Any cases that were allocated a duplicate flag, a super-speeding flag, an extreme straightlining flag, or a flag indicating that the ‘confirmation of accuracy’ was missing were immediately removed.
Any cases allocated three or more of the other flags were also removed. So, for example, a case which had a minimum survey completes flag, plus a missing demographic flag, plus a moderate straightlining flag, would also be removed.
The quality checks are summarised in Table 7; a sketch of the resulting removal logic follows the table.
Table 7: Potential invalid survey checks
| Type | Process |
|---|---|
| Duplicate on Individual Serial | Check for duplicate serials. Manually review the flagged cases and decide whether it is a duplicate based on demographics, email addresses used to claim Merit incentives and respondent name. If so, flag for removal. Otherwise attach a new, unique serial. |
| Minimum survey completes | Flag households where there are more survey completes than the minimum reported number of people in that household (different respondents from the same household may report a different number of household members). |
| Maximum survey completes | Flag households where there are more survey completes than the maximum reported number of people in that household (different respondents from the same household may report a different number of household members). Manually review these flagged households and decide whether there are any duplicates based on demographics, email addresses used to claim Merit incentives and respondent name. Flag any duplicates for removal. |
| Super speeding | Allocate a super speeding flag to any survey completes with a length of less than 30% of median time to complete and remove from dataset. |
| Moderate Speeding | Allocate a moderate speeding flag to survey completes which took longer than 30% of median time to complete but were still in the lowest 10th percentile of survey length. |
| Missing demographic information | Only for PAPI questionnaires. Attach a missing demographic flag if more than one variable is missing from: ageband; gender; numadults; ethnic; and tenure. |
| Moderate straightlining of grids | Apply a moderate straightlining flag if more than half of the answered grids have been straightlined (i.e., the same response code is given for each item in the grid). |
| Extreme straightlining of grids | Apply an extreme straightlining flag if all answered grids were straightlined and remove from dataset. |
| Have not ticked the “confirmation of accuracy” box | Flag for removal if a CAWI respondent has not typed in their name to verify that ‘I confirm that all of my answers were given honestly and represent my own views’. Flag for removal if a PAPI respondent has not signed to verify that ‘I confirm that I answered the questions as accurately as possible and that the answers reflect my own personal views’. |
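A minimal sketch of the removal logic described above, assuming one boolean flag per Table 7 check; the flag names are illustrative labels, not the production variable names.

```python
IMMEDIATE_REMOVAL = {
    "duplicate", "super_speeding", "extreme_straightlining",
    "missing_confirmation_of_accuracy",
}
OTHER_FLAGS = {
    "min_survey_completes", "max_survey_completes", "moderate_speeding",
    "missing_demographics", "moderate_straightlining",
}

def is_invalid(flags: set[str]) -> bool:
    # One 'immediate' flag removes a case outright; otherwise a case is
    # removed only when it carries three or more of the other flags
    return bool(flags & IMMEDIATE_REMOVAL) or len(flags & OTHER_FLAGS) >= 3
```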
The following numbers of invalid cases were identified in each survey wave:
- Winter 2024: 243 invalid cases (7.0% of all cases)
- Spring 2025: 230 invalid cases (6.3% of all cases)
- Summer 2025: 214 invalid cases (6.0% of all cases)
Data checks and edits
Upon completion of the general quality checks described above, more detailed data checks were carried out to ensure that the right questions had been answered according to questionnaire routing. Unless a programming error has been made, this is correct for all online completes, as routing is programmed into the scripting software. However, data edits were required for paper completes. Data was also checked against the raw topline outputs, and further checks verified that any weighting had been correctly applied.
There were three main types of data edits for paper questionnaire data:
- If a paper questionnaire respondent had mistakenly answered a question that they weren’t supposed to, their response in the data was allocated a ‘SYSMIS’ value.
- If a paper questionnaire respondent had neglected to answer a question that they should have, they were assigned a response in the data of “-4: Not answered (Paper)”.
- If a paper questionnaire respondent selected multiple responses to a single-coded question, their answers to that question were excluded from the data and they were instead allocated a response in the data of “-5: Multiple options chosen (Paper)”.
Other minor edits were made on a question-specific basis, to ensure that there were no mutually exclusive combinations of responses for paper completes (for example, ‘none of these’ being recorded alongside a specific response code).
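As an illustration, the three edit rules can be expressed as a single function applied to each paper answer; the function and value names here are hypothetical, and the routing and multi-coding checks are assumed to be supplied by the processing pipeline.

```python
SYSMIS = None          # system-missing: question was not on the route
NOT_ANSWERED = -4      # "Not answered (Paper)"
MULTI_CODED = -5       # "Multiple options chosen (Paper)"

def edit_paper_answer(value, was_routed_to: bool, multi_coded: bool):
    if not was_routed_to:
        return SYSMIS        # answered (or blank) despite not being routed here
    if multi_coded:
        return MULTI_CODED   # several ticks given to a single-coded question
    if value is None:
        return NOT_ANSWERED  # routed to the question but left it blank
    return value             # valid answer, kept as recorded
```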
Coding
Coding was undertaken after survey completion by the Verian coding department, which coded any verbatim responses recorded in ‘other specify’ questions. If an open-ended response corresponded to one of the pre-coded categories for a given question, the coding team reallocated it to the relevant pre-coded category and removed it from the ‘other’ category.
A new response code is added to the reported data when at least 1% of respondents in a wave provide open-ended answers that can be meaningfully grouped together (approximately 33 for a target wave of 3,250). Three new codes were introduced based on the open text responses in the Winter 2024 wave, while two new codes were introduced based on the open text responses in the Spring 2025 wave (see the new additional codes in ‘Winter 2024’ and ‘Spring 2025’). However, this threshold was not reached in the Summer 2025 wave, so no new codes were added.
Data outputs
Once the checks were complete, a final SPSS data file was created that contained only valid survey completes and edited data. Individual SPSS data files were created for each of the three PAT waves from Winter 2024 to Summer 2025.
Based on these SPSS datasets, data tables in an Excel format were produced for each PAT wave. There are no combined wave databases for the current PAT series.
Key sub-group reporting variables
The variables which are the main focus of sub-group reporting in the PAT survey series cover a range of demographic and profiling measures. These are created using a consistent specification in each wave, as outlined in ‘Appendix B – Sub-group reporting variable specification’.
Weighting
PAT data was weighted separately for each survey wave.
The PAT is largely used to collect data at the person level, but there are a small number of questions where the respondent is asked about the household as a whole or is asked to give an opinion on a household-level matter. Separate individual and household weights were therefore produced; the details of each are provided below.
Individual weight
A three-step weighting process was used to compensate for differences in both sampling probability and response probability:
Step 1: An address design weight was created equal to one divided by the sampling probability; this also served as the individual-level design weight because all resident adults could respond.
Step 2: The expected number of responses per address was modelled as a function of data available at the neighbourhood and address levels. The step two weight was equal to one divided by the predicted number of responses.
Step 3: The product of the first two steps was used as the input for the final step to calibrate the sample. The responding sample was calibrated to the contemporary Labour Force Survey (LFS)[footnote 10] with respect to (i) sex by age, (ii) educational level by age, (iii) ethnic group, (iv) housing tenure, (v) region, (vi) employment status by age, (vii) the number of co-resident adults, and (viii) internet use by age.[footnote 11]
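A minimal sketch of the raking (iterative proportional fitting) used in step 3, assuming a pandas DataFrame `df` with one column per calibration dimension, and `targets` mapping each dimension to a Series of LFS population totals indexed by category. The names and fixed iteration count are illustrative, not Verian’s production code.

```python
import pandas as pd

def rake(df: pd.DataFrame, input_weights: pd.Series, targets: dict,
         n_iter: int = 50) -> pd.Series:
    w = input_weights.copy()  # product of the step 1 and step 2 weights
    for _ in range(n_iter):
        for dim, totals in targets.items():
            # Scale weights so the weighted margin matches the LFS margin
            current = w.groupby(df[dim]).sum()
            w = w * df[dim].map(totals / current)
    return w
```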
The statistical efficiency of the individual-level weights was 60% (Winter 2024), 56% (Spring 2025) and 56% (Summer 2025)[footnote 12]. This corresponds to design effects of approximately 1.67, 1.79 and 1.79 respectively. Effective sample sizes were therefore approximately 1,975, 1,980 and 1,925 for the three waves.
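These quantities follow the standard Kish formulae: efficiency = (Σw)² / (n Σw²), design effect = 1 / efficiency, and effective sample size = n × efficiency. A short sketch:

```python
def weighting_efficiency(weights) -> float:
    # Kish: (sum of weights)^2 / (n * sum of squared weights)
    n = len(weights)
    return sum(weights) ** 2 / (n * sum(w * w for w in weights))

# e.g. a 60% efficiency implies a design effect of roughly 1 / 0.60 = 1.67,
# and the effective sample size is the unweighted base times the efficiency
```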
It should be noted that the weighting only corrects for observed bias (for the set of variables included in the weighting matrix) and there is a risk of unobserved bias. Furthermore, the raking algorithm used for the weighting only ensures that the sample margins match the population margins. There is no guarantee that the weights will correct for bias in the relationship between the variables.
Finally, because the methodology employs a random sampling technique, the weighting procedure is different from those used for the face-to-face surveys (up to wave 33) and online panel surveys (waves 34 to 37) in the original PAT series. However, the objective of eliminating sample bias was the same.
Household weight
The household weight is used for questions which are best interpreted at a household level, for example factual questions such as main method of heating the home, and whether the household has a smart meter.
The full list of household-weighted variables is:
- HEATMAIN
- SMARTMET
- BILLPAY
Note that household weights were not used in the Autumn 2022 and Spring 2023 surveys. The COOLMAIN variable was household-weighted in Winter 2021 but, from Winter 2022 onwards, it was person-weighted.
To analyse household-level survey data, it makes sense to convert the weighted sample of adults aged 16+ into a weighted sample of households.
This was achieved in two steps:
Step 1: The person-level weight of each respondent was divided by the reported number of adults aged 16+ in that respondent’s household (that is, the number of survey-eligible residents). This provisional weight was used as the input weight for step 2.
Step 2: A household-level calibration procedure was carried out using the contemporary LFS household-level dataset as the benchmark. Household totals were obtained for (i) housing tenure, (ii) region, (iii) the number of adults aged 16+ in the household, and (iv) the number of children aged under 16 in the household.
This approach to constructing the household-level weight has the advantage of making use of data from all respondents. The unweighted base is therefore the same for both person-level and household-level estimates. However, multiple respondents reporting about the same household are likely to provide very similar answers. The practical consequence is that the statistically effective sample size for household-level estimates will be smaller than for person-level estimates, even if the unweighted base is the same.
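A minimal sketch of the two steps, reusing the `rake` function sketched earlier with household-level LFS targets; the column names are assumptions for illustration.

```python
def household_weights(df, hh_targets):
    # Step 1: divide each person weight by the household's number of
    # survey-eligible adults, converting person weights to household scale
    input_w = df["person_weight"] / df["num_adults_16plus"]
    # Step 2: calibrate to LFS household totals (tenure, region, number of
    # adults aged 16+, number of children under 16)
    return rake(df, input_w, hh_targets)
```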
Reporting and data
Data delivery
Any respondent-level data is transmitted using Kiteworks software, which provides a highly secure and trackable means of transferring sensitive data. Kiteworks employs AES-256 encryption at rest and TLS 1.2 encryption when the data is in transit.
Reporting outputs
The following reporting outputs were published for the PAT waves from Winter 2024 to Summer 2025:
- Individual topic reports covering results grouped thematically for each wave, for example, ‘Net Zero and Climate Change’ or ‘Energy Infrastructure and Energy Sources’, are available in PDF format. These reports are also available in HTML format, in line with DESNZ’s commitment to improving accessibility.
- A technical overview of the methodology for each wave is available in PDF format, with an HTML version.
- A single version of the questionnaire that outlines the content for both online and paper questionnaires for each wave (available in PDF format).
- Tabulations showing time series for questions asked triannually (formerly quarterly), biannually and annually, where these questions have been included more than once in the survey (available in XLSX format).
- Tabulations of key questions from each wave, cross-tabulated by gender, age, highest qualification, and region (available in XLSX format).
Anonymised datasets are in the process of being deposited with the UK Data Service.
Significance testing
Significance testing is a statistical process which shows whether differences between sub-samples, or over time, are likely to be real (as opposed to being an artefact of the sampling variation inherent in any sample survey).
Significance tests were applied throughout the reporting of the current PAT series and the commentary in the published reports focused on those differences which were statistically significant at the 95% confidence level.
The significance tests for any comparisons (for example, comparing response data for men with response data for women) were automatically conducted within the Wincross software package which Verian uses to produce data tabulations.
The software uses a column proportions test which looks at the rows of a table independently and compares pairs of columns, testing whether the proportion of respondents in one column is significantly different from the proportion in the other column.
The column proportions test is performed separately for each relevant pair of columns within each relevant row. The tests were conducted at the 95% confidence level and the weighted base size and unrounded percentages were used as the input values for within-wave significance testing.
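An illustrative version of such a test is sketched below as a standard pooled two-sample z-test taking unrounded proportions and weighted bases as inputs; this is not the Wincross implementation itself.

```python
from math import sqrt

def columns_differ(p1: float, n1: float, p2: float, n2: float,
                   z_crit: float = 1.96) -> bool:
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)            # pooled proportion
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return abs(p1 - p2) / se > z_crit                    # significant at 95%?

# e.g. columns_differ(0.46, 1580.3, 0.51, 1630.9) compares two columns
# (hypothetical proportions and weighted bases for men vs women)
```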
Summary of changes
Questions added and removed
Below is a list of changes to the questionnaire compared to previous versions. These include the removal of certain questions, the addition of new topics, and modifications to existing questions to better align with the evolving needs of DESNZ.
Winter 2024
New questions:
- Two new questions were added about awareness and knowledge of Great British Energy:
- GBEKNOW, ‘The UK government has set up a publicly owned, clean energy company, called Great British Energy. Great British Energy will operate in all four nations of the UK. Before today, how much, if anything, did you know about Great British Energy?’.
- GBEFUNCTIONA-C, ‘Now a few statements about what people think might be the purpose of Great British Energy. For each one, please state whether you think the statement is true, false or if you are unsure either way. This is not a test so don’t worry if you don’t know. We want to find out the level of understanding across the country as a whole.’
- A. Great British Energy will supply electricity and gas to your home.
- B. Great British Energy will own, manage and operate clean power projects (for example, wind farms).
- C. Great British Energy will conduct research into new renewable technologies.
- A new question was added about energy infrastructure:
- INFRA2IMP, ‘Please now imagine that new electricity network infrastructure such as pylons, overhead power lines, and substations is planned for your local area. What would be the most important information that you would like to know about during the planning and consultation stage? Please select up to three responses.’
- Two new questions were added about the UK’s role as a global leader in climate policy:
- ICFKNOW, ‘How much do you agree or disagree that the UK is a global leader in tackling climate change?’
- ICFIMPORT, ‘How important or unimportant do you feel it is that the UK is a global leader on tackling climate change?’
Adapted questions:
- Minor wording and/or formatting changes:
  - HEATMAIN
    - Code 13 ‘None’ was removed.
  - INFRAKNOW
    - Code 2 was updated to ‘Hardly anything but I’ve heard of this’.
  - CCHEARD
    - Code 3 was updated to ‘Social media (for example, Facebook, Tik Tok, Instagram, YouTube, X (formerly Twitter), Reddit)’.
- Addition of new codes:
  - LCNOWHY
    - A new code, ‘Concerns for the environment’, was added from the open text data collected in ‘Other’.
  - TRUSTHEAT
    - A new code, ‘Conducting my own research (for example, online searches, reading reviews)’, was added from the open text data collected in ‘Other’.
  - WHYNOSMARTT
    - A new code, ‘It’s not suitable for me to use energy off-peak’, was added from the open text data collected in ‘Other reason’.
Spring 2025
New questions:
- A new question was introduced about the government’s goal to generate at least 95% of electricity in Great Britain from clean sources by 2030.
- CLEANPOWER2030, ‘One of the government’s goals is to achieve Clean Power by 2030. This means generating at least 95% of electricity in Great Britain from clean sources, like wind and solar, by 2030. Before today, how much, if anything, did you know about this?’
- Two new questions were introduced about awareness and likelihood of using flexible energy reward schemes.
- FLEXENERGYAWARE, ‘Some energy suppliers offer customers an opportunity to change their electricity use to certain times to receive rewards, such as money off their energy bills, free electricity or vouchers. These are referred to as flexible energy reward schemes. Suppliers use different names for these schemes, including Saving Sessions, PeakSave and Demand Flexibility Service. Before today, how much, if anything, did you know about flexible energy reward schemes?’
- FLEXENERGYUSE, ‘How likely are you to use a flexible energy reward scheme?’.
Adapted questions:
- Minor wording and/or formatting changes:
  - GENDER
    - Code 3 was updated to ‘Identify in another way’.
  - NUMADULTS
    - Codes were condensed and renamed: ‘1’; ‘2’; ‘3 or more’.
  - CHILDHH
    - Codes were condensed and renamed: ‘0’; ‘1’; ‘2’; ‘3 or more’.
  - INCOMEBAND
    - Codes were condensed and renamed: ‘£0-£14,999’; ‘£15,000-£29,999’; ‘£30,000-£44,999’; ‘£45,000 or more’; ‘98. Don’t know’; ‘99. Prefer not to say’.
    - The variable was renamed as INCOMEBAND2.
  - NEWTECHTRUST
    - Formatting changed from multiple option to grid.
    - Code 3 was updated to ‘Social media (for example, YouTube, Facebook, Tik Tok, Instagram, X (formerly Twitter), Reddit)’.
  - NUCWHYNO
    - The wording ‘and decommissioning nuclear power stations’ was removed from code 10 as the question refers specifically to the construction of a nuclear power station.
  - SOLARWHYNO was renamed as SOLPANWHYNO.
- Addition of new codes:
  - SOLPANWHYNO
    - A new code, ‘The payback period would be too long’, was added from the open text data collected in ‘Other reason’.
  - WINDWHYNO
    - A new code, ‘I’m concerned about the noise’, was added from the open text data collected in ‘Other reason’.
Annual questions moved to triannual frequency:
Questions relating to awareness and likelihood of installing air source and ground source heat pumps, typically asked in Winter waves, were reintroduced as regular repeat questions in Spring and Summer 2025:
- LCHEATKNOW1
- LCHEATKNOW2
- LCHEATINSTALLA
- LCHEATINSTALLB
Questions removed:
- Some questions were removed from the closing demographic section:
  - COHAB
  - JOBEVER
  - EMPSE
Summer 2025
New questions:
- A new question was added about the benefits that increase support for local renewable energy infrastructure:
- RENEWBENEFIT, ‘Which, if any, of the following might make you more likely to support the construction of renewable energy infrastructure such as solar and wind farms in your local area?’
- Two formerly annual questions on local renewable energy infrastructure were reintroduced as biannual questions. This was to allow more frequent tracking and to provide context before asking the RENEWBENEFIT question:
- WINDFARM, ‘Now imagine that there are plans for an onshore wind farm to be constructed in your local area. How happy or unhappy would you be about this? If you already have this in your local area, answer on the basis of how you feel about this now.’
- SOLARFARM, ‘Now imagine that there are plans for a solar panel farm to be constructed in your local area. How happy or unhappy would you be about this? If you already have this in your local area, answer on the basis of how you feel about this now.’
Adapted questions:
- Minor wording and/or formatting changes:
  - ENSECCONCERN: The formatting changed to group statements 1-4 under the same question.
Removed questions:
- Some questions were removed either because they were no longer relevant to the policy context or because they included outdated definitions:
  - GBEFUNCTIONA_C
  - LOWCARBKNOW
  - ENSUFFIC2Y, ENCHANGE2Y, ENCHREASON, GOVSUPPORTEN, ENSECCONCERN statement 5
Appendix A – Definition of terms used in questionnaires
The table below sets out the key terms used within the questionnaires and gives a brief definition for each term.
| Term | Definition |
|---|---|
| Carbon capture and storage (CCS) | Carbon capture and storage is a technology that stops greenhouse gases entering the atmosphere. It typically involves capturing carbon dioxide (CO2) emissions from power stations or industrial facilities where emissions are high. The CO2 is then piped to offshore underground storage sites, where it can be safely and permanently stored. |
| Clean Power 2030 | Clean Power 2030 is a UK government initiative aiming to ensure that at least 95% of electricity in Great Britain is generated from clean sources (such as wind, solar, and other low-carbon technologies) by the year 2030. The initiative was formalised in the Clean Power 2030 Action Plan, published by the Department for Energy Security and Net Zero (DESNZ) in December 2024. |
| Climate change / Global warming | Long-term shift in the planet’s weather patterns and rising average global temperatures. |
| Energy infrastructure | A term used to capture a range of different energy sources that are covered by the survey and the interconnections between them. This includes a range of renewable sources (on-shore and off-shore wind, solar, wave and tidal, and biomass), nuclear and carbon capture and storage. |
| Energy Performance Certificate (EPC) | An Energy Performance Certificate (EPC) measures the energy efficiency of a property and is needed whenever a property is built, sold or rented. The certificate includes recommendations on ways to improve the home’s energy efficiency. |
| Energy tariffs | The pricing plan for energy used (e.g., for electricity and gas). |
| Energy security | Energy security relates to the uninterrupted availability of energy sources at an affordable price and the associated impacts of these factors on national security. |
| Flexible energy reward schemes | Flexible energy reward schemes are offered by some energy suppliers and provide customers with an opportunity to receive rewards (such as money off their energy bills, free electricity or vouchers) by changing their electricity use to certain times. |
| Fusion Energy | Fusion energy is an experimental technology that works by fusing together atoms in order to release energy. The UK is exploring whether this technology could be used to generate zero carbon electricity. |
| Great British Energy | Great British Energy is a publicly owned clean energy company established by the UK government in 2025. It operates in all four nations of the UK. |
| Greenhouse Gas Removal (GGR) | These are methods that remove greenhouse gases such as carbon dioxide from the atmosphere to help tackle climate change. The purpose of GGRs is to help achieve net zero in the UK by 2050, balancing out emissions from industries such as air travel and farming, where eliminating greenhouse gas emissions will be more challenging. GGRs can be based on natural approaches. However, they can also be based on engineered approaches. Engineered approaches use technology to remove greenhouse gases from the environment and store them permanently, for example offshore in underground storage sites. |
| Hydrogen | Hydrogen is used as a fuel in some industrial processes. It is not naturally available, which means it needs to be produced from other sources, such as natural gas, nuclear power, or renewable power like solar and wind, to be used as a fuel. When produced in an environmentally friendly way, hydrogen can help reduce the carbon emissions in industries, power generation, heavy transport (such as buses, lorries, shipping and aircraft) and potentially home heating. |
| Low carbon heating systems | Heating systems that use energy from low-carbon alternatives such as hydrogen, the sun, or heat pumps which draw heat from the ground, air or water to heat homes. |
| Net Zero | Net Zero means that the UK’s total greenhouse gas (GHG) emissions would be equal to or less than the emissions the UK removed from the environment. This can be achieved by a combination of emission reduction and emission removal. A new Net Zero target was announced by the Government in June 2019, which requires the UK to bring all greenhouse gas emissions to Net Zero by 2050. |
| Nuclear Energy | Nuclear power is the use of nuclear reactions to produce electricity. This source of energy can be produced in two ways: fission – when nuclei of atoms split into several parts; or fusion – when nuclei fuse together. Fission is the process which occurs in nuclear power stations across the UK. Fusion is an experimental technology which the UK is exploring as a possibility to produce zero carbon electricity. |
| Renewable energy | Renewable energy technologies use natural energy resources that are constantly replaced and never run out to make electricity. Fuel sources include wind, wave, biomass and solar. |
| Small Modular Reactors | These are a type of nuclear fission reactor, similar to existing nuclear power stations, but on a smaller scale. They can be used for electricity generation, to provide industry with heat and power, or to provide energy to UK communities not connected to the national gas grid. |
| Smart appliances | Smart appliances are normal household appliances that have built in features enabling them to connect to the internet. This allows them to be controlled and monitored remotely using a smart phone or tablet. Smart appliances can be scheduled to come on at certain times. They can also be linked to smart meters to come on during periods of low electricity prices. This can help to lower customer bills and also manage demand on the electricity grid. Examples of smart appliances include: smart kitchen appliances, smart thermostats to control heating, and other appliances such as smart electric vehicle chargers. |
| Smart meters | Smart meters are a type of gas and/or electricity meter which automatically send meter readings to your energy supplier and usually come with a monitor or screen (digital in-home display), that provides information about your energy usage. Smart meters also allow prepayment customers to top up their credit online and over the phone. |
| Time of use tariffs | Time-of-use tariffs are energy pricing structures offered by some suppliers that charge lower “off-peak” rates during times of lower demand (typically at night or certain times of the day) and higher “peak” rates during periods of high demand. These tariffs can help consumers reduce their electricity bills if they are able to adjust their energy usage to align with the cheaper off-peak periods. |
Appendix B – Sub-group reporting variable specification
| Top level grouping | Detailed grouping | Definition |
|---|---|---|
| Gender | Male | GENDER=1 |
| Gender | Female | GENDER=2 |
| Gender | Prefer to self-describe | GENDER=3 |
| Age | 16 to 24 | AGE>=16 AND <=24 OR AGEBAND=1,2 |
| Age | 25 to 34 | AGE>=25 AND <=34 OR AGEBAND=3 |
| Age | 35 to 44 | AGE>=35 AND <=44 OR AGEBAND=4 |
| Age | 45 to 54 | AGE>=45 AND <=54 OR AGEBAND=5 |
| Age | 55 to 64 | AGE>=55 AND <=64 OR AGEBAND=6 |
| Age | 65+ | AGE>=65 OR AGEBAND=7,8 |
| Highest qualification | Degree level or above | HIGHQUAL=1 |
| Highest qualification | Another kind of qualification | HIGHQUAL=2 |
| Highest qualification | No qualifications | HIGHQUAL=3 |
| Tenure | Owner | TENURE=1,2,3 |
| Tenure | Renter | TENURE=4 |
| Rental type | Social renter | LANDLORD=1,2 |
| Rental type | Private renter | LANDLORD=7 |
| Rental type | Other type of renter | LANDLORD=3,4,5,6 |
| Property type | A house or bungalow | ACCOMTYPE_COMB = 1,2,3 |
| Property type | Flat | ACCOMTYPE_COMB = 4,5,6,7 |
| Property type | Other | ACCOMTYPE_COMB = 8,9 |
| GOR | North East | GOR=1 |
| GOR | North West | GOR=2 |
| GOR | Yorkshire & Humber | GOR=3 |
| GOR | East Midlands | GOR=4 |
| GOR | West Midlands | GOR=5 |
| GOR | East of England | GOR=6 |
| GOR | London | GOR=7 |
| GOR | South East | GOR=8 |
| GOR | South West | GOR=9 |
| GOR | England | GOR=1,2,3,4,5,6,7,8,9 |
| GOR | Wales | GOR=10 |
| GOR | Scotland | GOR=11 |
| GOR | Northern Ireland | GOR=12 |
| Number of adults in household | 1 | NUMADULTS=1 |
| Number of adults in household | 2 | NUMADULTS=2 |
| Number of adults in household | 3+ | NUMADULTS>=3 |
| Number of children in household | None | CHILDHH=1 |
| Number of children in household | 1 | CHILDHH=2 |
| Number of children in household | 2+ | CHILDHH>=3 |
| Household decision maker | Respondent | HHRESP=1 OR NUMADULTS=1 |
| Household decision maker | Joint | HHRESP=3 |
| Household decision maker | Someone else | HHRESP=2 |
| Current working status | Working full time (30+ hours a week) | WORKSTAT=1 |
| Current working status | Working part time (less than 30 hours a week) | WORKSTAT=2 |
| Current working status | Unemployed and available for work | WORKSTAT=6 |
| Current working status | Wholly retired from work | WORKSTAT=7 |
| Current working status | Full-time education at school, college or university | WORKSTAT=8 |
| Current working status | Looking after home or family | WORKSTAT=9 |
| Current working status | Permanently sick or disabled | WORKSTAT=10 |
| Current working status | Other | WORKSTAT=3,5,11 |
| Ethnicity | White | ETHNIC=1,2,3,4 |
| Ethnicity | Mixed or multiple ethnic groups | ETHNIC=5,6,7,8 |
| Ethnicity | Asian or Asian British | ETHNIC=9,10,11,12,13 |
| Ethnicity | Black or Black British | ETHNIC=14,15,16 |
| Ethnicity | Other ethnic group | ETHNIC=17,18 |
| NS-SEC* | Managerial, administrative, and professional occupations | (OCCUPATION=1,8 AND EMPSTATUS=1,2,3,4,5,6,7) OR (OCCUPATION=2 AND EMPSTATUS=1,4,5,6) OR (OCCUPATION=3,7 AND EMPSTATUS=1,4,5,6,7) OR (OCCUPATION=4 AND EMPSTATUS=1,4,5) OR (OCCUPATION=5,6 AND EMPSTATUS=1,4,5) |
| NS-SEC* | Intermediate occupations | (OCCUPATION=2 AND EMPSTATUS=7) |
| NS-SEC* | Small employers and own account workers | (OCCUPATION=2 AND EMPSTATUS=2,3) OR (OCCUPATION=3,7 AND EMPSTATUS=2,3) OR (OCCUPATION=4 AND EMPSTATUS=2,3) OR (OCCUPATION=5,6 AND EMPSTATUS=2,3) |
| NS-SEC* | Lower supervisory and technical occupations | (OCCUPATION=4 AND EMPSTATUS=6,7) OR (OCCUPATION=5,6 AND EMPSTATUS=6) |
| NS-SEC* | Semi-routine and routine occupations | (OCCUPATION=5,6 AND EMPSTATUS=7) |
| NS-SEC* | Never worked | JOBEVER=2 |
| Mode | CAWI | CAWI_PAPI=1 |
| Mode | PAPI | CAWI_PAPI=2 |
| Urban/Rural Classification | Urban | Urban_Rural_Class=1 |
| Urban/Rural Classification | Rural | Urban_Rural_Class=2 |
| Coastal/Inland Classification | Coastal | Coastal_Inland=1 |
| Coastal/Inland Classification | Coastal Adjacent | Coastal_Inland=2 |
| Coastal/Inland Classification | Inland | Coastal_Inland=3 |
| IMD Decile | 1 | IMDDecile=1 |
| IMD Decile | 2 | IMDDecile=2 |
| IMD Decile | 3 | IMDDecile=3 |
| IMD Decile | 4 | IMDDecile=4 |
| IMD Decile | 5 | IMDDecile=5 |
| IMD Decile | 6 | IMDDecile=6 |
| IMD Decile | 7 | IMDDecile=7 |
| IMD Decile | 8 | IMDDecile=8 |
| IMD Decile | 9 | IMDDecile=9 |
| IMD Decile | 10 | IMDDecile=10 |
| Annual Personal Income | £0 - £14,999 | INCOMEBAND=1 |
| Annual Personal Income | £15,000 - £29,999 | INCOMEBAND=2 |
| Annual Personal Income | £30,000 - £44,999 | INCOMEBAND=3 |
| Annual Personal Income | £45,000+ | INCOMEBAND=4 |
| Financial Hardship | Finding it fine | FINHARD=1,2 |
| Financial Hardship | Just about getting by | FINHARD=3 |
| Financial Hardship | Finding it difficult | FINHARD=4,5 |
*Note: The NS-SEC variable was discontinued from Summer 2024 onwards, as the employment-related questions in the survey were reduced to lessen the burden on respondents. They were replaced with questions on more relevant topics, including financial hardship and income.
The IMD Decile, Annual Personal Income and Financial Hardship subgroups were added in Spring 2024 following the introduction of the INCOMEBAND and FINHARD demographic variables, and the Coastal/Inland classification was added in Summer 2024.
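The logical definitions in the table above translate directly into data-processing code. As an illustration only, the sketch below shows how the age subgroup could be derived in Python with pandas from the AGE and AGEBAND variables; the function and data frame names are hypothetical and this is not the survey's own processing syntax.

```python
import numpy as np
import pandas as pd

def derive_age_group(df: pd.DataFrame) -> pd.Series:
    """Derive the age reporting subgroup from AGE (exact age) and
    AGEBAND (banded age, used where an exact age was not given)."""
    # (lower age, upper age, AGEBAND codes, label) per the table above
    bands = [
        (16, 24, (1, 2), "16 to 24"),
        (25, 34, (3,), "25 to 34"),
        (35, 44, (4,), "35 to 44"),
        (45, 54, (5,), "45 to 54"),
        (55, 64, (6,), "55 to 64"),
        (65, 120, (7, 8), "65+"),
    ]
    conditions = [
        df["AGE"].between(lo, hi) | df["AGEBAND"].isin(codes)
        for lo, hi, codes, _ in bands
    ]
    labels = [label for _, _, _, label in bands]
    # np.select picks the first matching condition; unmatched rows stay None
    return pd.Series(np.select(conditions, labels, default=None), index=df.index)

# Respondents with an exact age, a banded age only, and an exact age of 70
sample = pd.DataFrame({"AGE": [19, np.nan, 70], "AGEBAND": [np.nan, 3, np.nan]})
print(derive_age_group(sample).tolist())  # ['16 to 24', '25 to 34', '65+']
```

The same pattern, one condition per detailed grouping with a banded fallback where the exact value is missing, applies to the other subgroups in the table.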
Appendix C – Research and statistical term definitions
The table below sets out the key terms used within this report and gives a brief definition for each term.
| Term | Definition |
|---|---|
| ABOS (Address Based Online Surveying) | A ‘push to web’ survey methodology in which letters are sent to a sample of home addresses inviting household members to complete the survey online. Householders are also given the option to complete a paper version of the questionnaire, which enables participation among the offline population. |
| Base | The number of people answering a survey question. In the PAT, the base number varies slightly between questions asked of equivalent subgroups. This is because the ABOS methodology includes a mixture of online and paper responses. On paper it is possible to leave a question blank or to select multiple responses at a single-coded question; in these situations, the answers are removed from the overall base. |
| CAWI | Computer-assisted web interviewing. |
| Fieldwork | The period of time over which data are collected for a survey (whether by face-to-face interviews, online completions or paper-based questionnaire completions). |
| NS-SEC* | National Statistics Socio-Economic Classification. The PAT survey uses the self-coded method of deriving NS-SEC which classifies people into six categories: 1. Managerial, administrative and professional occupations 2. Intermediate occupations 3. Small employers and own account workers 4. Lower supervisory and technical occupations 5. Semi-routine and routine occupations 6. Never worked |
| Omnibus survey | A method of quantitative survey research in which data on a variety of subjects, submitted by a range of funders, is collected during the same interview. |
| Privacy notices | Information provided by a service provider to inform users how their personal information will be used. |
| Random location quota sampling | A hybrid form of sampling that combines elements of quota sampling and random probability sampling. The principal distinguishing characteristic of random location sampling is that interviewers are given very little choice in the selection of respondents. A random sample of geographical units is drawn (usually postcode sectors) and respondents in each interviewer assignment are then drawn from a small set of homogeneous streets within these. Quotas are set in terms of characteristics which are known to have a bearing on individuals’ probabilities of being at home and so available for interview. Rules are given which govern the distribution, spacing and timing of interviews. |
| Representativeness | Similarity of the sample profile to benchmark population statistics, such as the Office for National Statistics mid-year population estimates. |
| Sample size | The number of people included in the sample (a subset of the population). |
| Statistical significance | The result of a statistical test used to determine whether a relationship observed between two survey variables is likely to exist in the population from which the sample is drawn. We only report on findings that are statistically significant at the 95% level. |
*Note: The NS-SEC variable was discontinued from Summer 2024 onwards, as the employment-related questions in the survey were reduced to lessen the burden on respondents. They were replaced with questions on more relevant topics, including financial hardship and income.
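To illustrate the kind of test the statistical significance definition describes, the sketch below applies a simple two-proportion z-test at the 95% level. This is a minimal illustration assuming simple random sampling; the survey's own significance testing would also need to account for weighting (see the statistical efficiency footnote), and the figures used here are made up for the example.

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_z_test(p1: float, n1: int, p2: float, n2: int) -> float:
    """Return the two-sided p-value for the difference between two
    independent sample proportions (simple random sampling assumed)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)          # pooled proportion
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))  # standard error
    z = (p1 - p2) / se
    return 2 * norm.sf(abs(z))

# e.g. 78% agreement in wave A (n=2,000) vs 75% in wave B (n=2,000)
p_value = two_proportion_z_test(0.78, 2000, 0.75, 2000)
print(f"p = {p_value:.3f}; significant at the 95% level: {p_value < 0.05}")
```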
Contact information
Responsible statistician: Graeme Stephens
Email: PAT@energysecurity.gov.uk
Media enquiries: 020 7215 1000; newsdesk@energysecurity.gov.uk
Public enquiries: 020 7215 5000
Footnotes

1. Following Kantar Public’s divestment from the Kantar Group and subsequent rebranding to Verian, all references to Kantar Public, including logos, were updated to Verian in Spring 2024. This change was reflected across the online questionnaire, paper questionnaire, survey website, and invitation and reminder letters. In this technical report, the agency is referred to as Verian. ↩

2. Fieldwork in March 2020 was conducted in two stages. The survey was initially run on the Kantar Public face-to-face Omnibus but stopped early due to the outbreak of COVID-19 and the start of the lockdown. The findings, based on a truncated face-to-face sample, were published in May 2020. ↩

3. https://www.gov.uk/government/statistics/beis-public-attitudes-tracker-wave-33. The remainder of Wave 33 was conducted on the Kantar Public online omnibus to trial the online omnibus approach and to compare the results with the face-to-face survey to understand mode effects. ↩

4. For the list of measurement effects, please refer to the Autumn 2022 to Summer 2023 technical report. ↩

5. A modelled age profile based on a range of sources including Census and Electoral Roll data. ↩

6. In addition, higher sampling fractions were applied to the three least populous ITL1 regions (NE England, Wales and N Ireland) so that the expected number of completed questionnaires was at least 220 in each one. ↩

7. https://analysisfunction.civilservice.gov.uk/government-statistical-service-and-statistician-group/gss-support/gss-harmonisation-support/harmonised-standards-and-guidance/ ↩

8. The respondent FAQs, provided with each letter, remained consistent across all three waves. The only updates were changes made from Spring 2024 onwards, following Kantar Public’s divestment from the Kantar Group and rebranding to Verian. Specifically, the name was updated from ‘Kantar Public’ to ‘Verian’ under the ‘Who is conducting the survey?’ section, and the helpline email address was changed from ‘patsurvey@kantar.com’ to ‘patsurvey@veriangroup.com’ under the ‘What do you need to do?’ section. To view the website version of the FAQs, visit https://www.patsurvey.co.uk/faq.html. ↩

9. The increase in paper completes in Summer 2025 led to an increase in the unweighted share of respondents aged 65 and over compared with Spring 2025. This is addressed through weighting, which ensures that the weighted share of each age category remains comparable across waves. The weighted age distribution in Summer 2025 closely aligns with previous waves and is based on updated population benchmarks. ↩

10. July-September 2024 for the Winter 2024 survey; October-December 2024 for the Spring 2025 survey; and April-June 2025 for the Summer 2025 survey. ↩

11. Internet use by age was based on LFS data from January-March 2021, as this data is only collected in these months. This release has since been discontinued, so the 2021 data was retained for all three surveys. ↩

12. The statistical efficiency is the effective sample size as a proportion of the actual sample size, taking only weighting into account (i.e., ignoring the effects of sample stratification and clustering by household). ↩
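For reference, the statistical efficiency described in footnote 12 is conventionally calculated from the weights alone using Kish's effective sample size formula, n_eff = (sum of weights)² / (sum of squared weights). The short sketch below is illustrative, not the survey's own processing code.

```python
import numpy as np

def weighting_efficiency(weights: np.ndarray) -> float:
    """Kish effective sample size as a proportion of the actual sample size.

    n_eff = (sum of weights)**2 / (sum of squared weights)
    efficiency = n_eff / n
    """
    n_eff = weights.sum() ** 2 / (weights ** 2).sum()
    return n_eff / len(weights)

# e.g. half the sample weighted up by a factor of 2 relative to the rest
w = np.array([1.0] * 500 + [2.0] * 500)
print(f"{weighting_efficiency(w):.0%}")  # 90%
```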