Policy paper

Department for Energy Security and Net Zero: Monitoring and evaluation framework (accessible webpage)

Published 27 March 2024

Foreword

I am pleased to introduce the first Department for Energy Security and Net Zero Monitoring and Evaluation Framework.

DESNZ is the department with primary responsibility for building a green economy and an energy system that delivers for everyone. Our efforts are directed towards fulfilling the UK’s legally binding commitment to achieve net zero, whilst simultaneously promoting sustainable economic growth. To make this happen, we are speeding up the building of network infrastructure and domestic energy production through increased investment. This will generate jobs and foster growth in new green sectors.

In this time of global uncertainty, our focus also remains on helping households and businesses by ensuring energy security and promoting energy efficiency. The Department aims to implement long-term improvements to our energy markets that will ensure security of supply and benefit consumers through lower energy costs.

The Department’s policies and programmes must pioneer innovative approaches to achieve our net zero target and keep our energy secure. Timely and accurate monitoring and evaluation (M&E) are critical for these initiatives, to ensure accountability of public spending and to guide decision-making by assessing our ongoing performance. Evaluation also serves as the foundation for effective future policy and programme design through providing evidence on the impact and value for money of previous interventions.

This Framework reflects our strong commitment and ambition to deliver a comprehensive programme of monitoring and evaluation. It showcases how we will ensure there is appropriate monitoring and evaluation across key policies and programmes and embed governance processes to maintain a robust and quality assured evidence base. We will strive to facilitate a positive learning culture and explore innovative M&E approaches to provide the most timely and informative evidence.

Jeremy Pocklington CB
Permanent Secretary
Department for Energy Security and Net Zero

Executive Summary

Monitoring and evaluating the delivery and impacts of our interventions [footnote 1] is essential to the work of the Department for Energy Security and Net Zero (DESNZ): to implement our policies as intended, to spend taxpayers’ money wisely, to stand up to external scrutiny, and to build our future evidence base.

As a new organisation, DESNZ must embed a monitoring and evaluation system which informs policy, programming, and strategy across our diverse and nationally integral energy portfolio. The department’s Monitoring and Evaluation Framework has been launched to guide our activities and expectations in this space as well as to demonstrate our public commitment to forming decisions based on robust evidence.

The driving force behind the Framework is our overarching vision: to deliver effective, innovative, and impactful monitoring and evaluation outputs that set the foundation for an ever-improving evidence base for decision-making. To achieve this vision, six core aims have been developed with underpinning actions to support and measure progress.

Aim: To establish a comprehensive, appropriate, and proportionate monitoring and evaluation coverage
Key actions:
  • Requiring well-developed monitoring and evaluation plans to ensure early thinking around evidence requirements from the policy or programme design phase.
  • Sharing best practice and quality standards internally and with partner organisations.

Aim: To firmly embed monitoring and evaluation into governance processes
Key actions:
  • Reviewing and strengthening our monitoring and evaluation governance structures.
  • Central tracking and oversight of key risks and updates on monitoring and evaluation commitments.

Aim: To build capacity and capability in analytical, project delivery and policy professions
Key actions:
  • Providing a comprehensive training offer and internal guidance to support relevant professionals.
  • Facilitating peer learning through an internal Monitoring and Evaluation Network.

Aim: To facilitate a positive learning culture
Key actions:
  • Creating links across policy areas and between appraisal and evaluation.
  • Taking part in cross-government groups to learn from shared challenges and best practice.

Aim: To maintain independent and transparent quality assurance of findings
Key actions:
  • Embedding quality assurance at key points in the monitoring and evaluation lifecycle.
  • Facilitating external peer review prior to evaluation publication.

Aim: To encourage the exploration of new and innovative monitoring and evaluation approaches
Key actions:
  • Maximising the impact and use of innovative and existing data sets.
  • Seeking to use automated and reproducible approaches to monitoring and analysis.
  • Maintaining a strong connection with external experts.
  • Proactively exploiting opportunities from emerging trends.

1. Introduction

The Department for Energy Security and Net Zero (DESNZ) was established as a ministerial department in February 2023 to focus on the energy portfolio of the former Department for Business, Energy and Industrial Strategy (BEIS) with the support of 14 agencies and public bodies.

DESNZ has been tasked with the significant responsibility of securing the UK’s long-term energy supply, ensuring properly functioning energy markets, improving energy efficiency and leading on the transition to net zero. It is crucial that DESNZ can successfully deliver on these tasks despite the complexity and global uncertainty associated with climate change and energy insecurity fuelled by the war in Ukraine.

As bold and complex interventions are required to simultaneously progress towards net zero and develop a long-term supply of sustainable energy, robust evidence is needed to maximise societal benefits and minimise delivery risks and costs. Monitoring and evaluation have a vital role in providing this evidence across all policy areas and at all stages of our policy and programme lifecycles.

Steering the course of our interventions, monitoring can produce early and ongoing information to actively manage performance and, where necessary, inform ‘in-flight’ changes to help ensure that anticipated benefits are realised. Synthesising the evidence on our interventions, evaluation is equally important for accountability, learning from decisions and demonstrating the responsible use of public money.

The purpose of the Framework

This is DESNZ’s first monitoring and evaluation framework. As a new department, DESNZ plans to build upon the increasingly high-quality evidence gathered by its predecessor, BEIS, on the impact of past and existing energy interventions through monitoring and evaluation. Therefore, this Framework is largely grounded in the BEIS M&E framework that was published in 2020. [footnote 2]

It sets out the department’s long-term and strategic commitment to robust and proportionate monitoring and evaluation, covering:

  • What monitoring and evaluation look like in DESNZ, including general guidance on standards and expectations.
  • Our vision for embedding robust monitoring and evaluation evidence into decision-making across the department.
  • Specific actions that the department is undertaking to implement the vision.

2. Monitoring and evaluation in DESNZ

Monitoring and evaluation are distinct but complementary approaches which are grounded in the principles of HM Treasury’s Magenta Book [footnote 3] and Green Book. [footnote 4]

2.1 What does monitoring look like in DESNZ?

Monitoring is the systematic collection of performance data to assess the progress and achievement of policy objectives against set targets and to identify and lift implementation bottlenecks.

OECD [footnote 5]

Monitoring is fundamental to the delivery of the department’s objectives as monitoring data is used to:

  • assess whether an intervention has delivered the target outputs (such as numbers of units installed), outcomes and impacts;
  • demonstrate whether an intervention is reaching its target population;
  • make evidence-based ‘in-flight’ changes to manage performance during delivery and support the realisation of the anticipated benefits;
  • link to other public administrative, private and academic data sets to create richer data sets;
  • enable further research and evaluation by collecting contact details and characteristics of those affected by an intervention and a similar comparison group;
  • understand stakeholders’ perceptions/ attitudes towards an intervention;
  • produce statistics and other transparency publications and answer Freedom of Information requests;
  • enable work on finance, fraud/ error and auditing;
  • inform cost-benefit analysis and determine whether assumptions about an intervention were correct.

All DESNZ policies and programmes should implement proportionate and good quality monitoring to assess and improve performance and inform learning, ahead of and throughout implementation. Plans for monitoring should be developed during the design phase, so that arrangements can be in place when the intervention begins.

Annex A1 provides two monitoring case studies to illustrate the implementation and uses of monitoring.

2.2 What does evaluation look like in DESNZ?

According to the Magenta Book “evaluation is the systematic assessment of the design, implementation and outcomes of an intervention”. [footnote 6]

There are two primary drivers of policy and programme evaluation in DESNZ: learning and accountability. Learning helps to manage the risk and uncertainty associated with an intervention and its implementation; provides an understanding of what works for whom, how, why, and in what circumstances; and informs design and implementation decisions.

Accountability ensures that DESNZ is transparent with its stakeholders, for example, about how public money has been spent (such as informing Spending Reviews, National Audit Office Reviews, [footnote 7] and the requirements of the Regulatory Policy Committee) [footnote 8], how well an intervention has been targeted, and whether a regulation has an appropriate balance between burden and protections.

A common misconception about evaluation is that it is something that only happens at the end of a policy or programme. In reality, evaluations can provide crucial information for key decision points while the intervention is active. For that reason, evaluation should be used, in addition to monitoring, when a more comprehensive assessment is required at fixed points to provide detailed evidence on the design, delivery, progress and real-world effects of an intervention.

To ensure high-quality outputs, DESNZ is committed to following Magenta Book principles [footnote 9] when delivering in-house or externally commissioned evaluations across a range of types (process, impact and value for money) and approaches. All DESNZ evaluation publications are peer-reviewed by external experts and should abide by the GSR protocol. [footnote 10]

To illustrate how evaluation is carried out in practice in DESNZ, Annex A1 includes an example case study of a decarbonisation evaluation which used a mixed method approach based on evidence needs.

2.3 What is the relationship between monitoring, evaluation and benefits management?

Monitoring, evaluation and benefits management have strong, reinforcing relationships.

Benefits management is an iterative programme management approach which aims to ensure that the benefits (desired change) of an intervention have been clearly defined, are measurable and achieved, and that any disbenefits are minimised. [footnote 11] It accompanies financial and cost management and is conducted throughout the whole project lifecycle and into post-delivery.

Monitoring can provide crucial data to understand whether the anticipated benefits of an intervention are being realised (e.g. through collecting and analysing data on the number of smart meters installed against the number initially forecasted – see Case Study 1 in Annex A1 for further detail). In cases involving greater complexity or indirect causal pathways, evaluation can establish whether the anticipated benefits have been achieved (e.g., the level of energy savings being derived from the installation and use of smart meters) as well as ‘how’ and ‘why’ they have been achieved. Evaluation can also provide evidence on the additional and harder to measure impacts of an intervention, over and above the anticipated benefits and disbenefits.

In practice, monitoring, benefits management and evaluation approaches may interact and inform one another in different ways at different stages of the policy cycle. It is therefore important that they are carefully planned at an early stage and implemented in a joined-up manner to maximise opportunities for learning, as demonstrated by the Smart Metering case study in Annex A1.

3. Standards for monitoring and evaluation

Monitoring and evaluation considerations should be built into the design and implementation of our interventions in DESNZ. This is to enable both the intervention and the monitoring and evaluation to be tailored to maximise the potential for robust, useable findings that can help to improve current implementation and future decision-making.

Figure 2: Diagram showing the core expectations of DESNZ monitoring and evaluation

  • Demonstrate learning from previous M&E in the intervention design
    M&E plans produced during policy and programme development should be informed by previous evidence
  • Clarify the intervention objectives and anticipated effects in a Theory of Change
    This enables a common understanding of the proposed causal chain that leads from inputs to impacts
  • Assess what level of M&E is proportionate
    M&E activities must be prioritised to ensure efficiency and value for money
  • Assess what evidence is needed, who will use it and when it will be required
    By understanding the range of requirements, activities can be tailored to generate evidence for decision points
  • Identify the evaluation objectives and questions
    Ensuring that these are clearly defined will help to narrow the evaluation focus and ensure findings are relevant to decision makers
  • Identify the appropriate monitoring approach
    All projects and programmes should undertake monitoring of appropriate indicators and metrics
  • Identify the appropriate evaluation approach
    Approaches should be considered on a case-by-case basis and chosen to best reflect evidence needs
  • Identify the M&E data requirements
    Data requirements should be built in from the start of all interventions to ensure successful data collection
  • Secure the resources for M&E
    Early consideration must be given to financial resourcing, ownership and coordination with delivery bodies
  • Conduct or commission the M&E
    Monitoring work is delivered in-house and by delivery partners. Evaluations are commissioned or conducted in-house
  • Use and publish the M&E findings
    Guidance should be followed for publishing evaluation reports and monitoring data (e.g., as management information)

Figure 2 provides an overview of the fundamental monitoring and evaluation expectations placed on those developing policies and programmes in the department. For further detail on these expectations and the stages at which they take place, see Annex A2.

4. DESNZ’s monitoring and evaluation vision

DESNZ’s vision is to deliver effective, impactful and innovative monitoring and evaluation outputs that underpin an ever-improving evidence base for decision-making. This vision is the driving force behind our strategy for improving monitoring and evaluation across the department, setting out a clear picture of our long-term ambitions.

To achieve the vision, DESNZ aims to:

  • Establish comprehensive and proportionate monitoring and evaluation coverage across all policies and programmes in DESNZ and its partner organisations.
  • Firmly embed monitoring and evaluation into governance processes to ensure that monitoring and evaluation are delivered successfully, even in challenging circumstances.
  • Build capacity and capability in policy, project delivery and analytical professions to conduct and commission monitoring and evaluation based on best practice.
  • Facilitate a positive learning culture across DESNZ where lessons from monitoring and evaluation inform policy and programme decisions and delivery as well as future monitoring and evaluation design.
  • Maintain independent and transparent quality assurance of findings, so that stakeholders can have confidence in the findings generated from the monitoring and evaluation of DESNZ policies and programmes.
  • Encourage the exploration of new and innovative monitoring and evaluation approaches to enhance current capabilities and improve efficiency.

Further detail is provided in the sections below.

We will work towards the vision over time and undertake a range of activities to meet the supporting aims.

4.1 Establishing comprehensive monitoring and evaluation coverage

In line with the constant need to balance high quality monitoring and evaluation against other priorities such as efficiency and value for money, DESNZ is working to establish appropriate but proportionate coverage through a number of actions.

4.1.1 Requiring well-developed monitoring and evaluation plans

For every business case proposing a significant, innovative or contentious investment of taxpayer funds, DESNZ requires a robust monitoring and evaluation plan.

For every Impact Assessment proposing a high impact regulatory change involving a statutory review commitment, DESNZ requires a detailed Post Implementation Review [footnote 12] (PIR) plan.

Monitoring and evaluation plans should include detail on the design and methodological approach as well as on practical considerations such as resourcing and budget. For further information on the criteria by which DESNZ assesses monitoring and evaluation plans for investment and regulation, see Annex A3.

4.1.2 Encouraging best practice in our partner organisations

DESNZ is a ministerial department supported by 14 partner organisations (agencies and public bodies). [footnote 13] A considerable proportion of DESNZ’s expenditure is through these partner organisations (POs), for which the department is ultimately accountable to Parliament. POs are responsible for the monitoring and evaluation of the policy areas which they deliver, and DESNZ supports best practice in various ways:

  • By ensuring POs work together with policy teams in DESNZ to produce monitoring and evaluation plans before policy or programme implementation, so that evidence requirements are fully considered.
  • Through sharing best practice and quality standards with POs.

For more information on considerations for ensuring proportionate coverage, see Figure 3 in Annex A2.

4.2 Embedding monitoring and evaluation into governance processes

Governance is a key supporting mechanism for enabling us to track our performance and ensure accountability. To ensure proportionate monitoring and evaluation are delivered, even in challenging circumstances, DESNZ is striving to firmly embed thinking around monitoring and evaluation through reviewing and strengthening our new and inherited governance structures.

Governance structures in DESNZ include investment, delivery and management boards which perform an audit function to ensure proportionate and successful planning, resourcing and delivery of monitoring and evaluation. For further information on key governance structures for investments, legislation and evaluations, see Annex A3.

4.2.1 Central tracking of monitoring and evaluation commitments

The department’s Central M&E Team has a key role in tracking the progress of monitoring and evaluation across the department and POs and escalating any key issues to departmental boards.

4.3 Building monitoring and evaluation capacity and capability

Embedding expertise is essential to ensuring comprehensive and effective monitoring and evaluation coverage. DESNZ takes several measures to build the monitoring and evaluation capacity and capability amongst its analytical, policy and project delivery professionals.

4.3.1 Offering a comprehensive package of training and resources

DESNZ offers internal monitoring and evaluation training for policy officials which emphasises the importance of considering monitoring and evaluation at all stages of the policy cycle and collaborating with analysts with expertise in monitoring and evaluation during policy or programme development.

The department also increases analytical evaluation capacity and capability through a number of means including:

  • Providing in-depth monitoring and evaluation training as part of our core learning and development offer for DESNZ analysts to ensure they have the skills required to deliver effectively.
  • An internal Monitoring & Evaluation Network which meets regularly to support the delivery of good quality evaluation through facilitating peer learning and the circulation of practical resources.

4.4 Facilitating a positive learning culture

The effectiveness of our policies depends on a strong and safe culture for monitoring and evaluation, where timely and accurate feedback and analysis assess what effect the intervention has had, and this learning is fed back rapidly into policy or programme decisions.

DESNZ recognises that an important part of this learning process is to acknowledge that when policies do not deliver the desired effects – indeed, even when they produce unexpected or unwanted effects – these are still valuable opportunities to develop our knowledge and enable future policies to be adapted to secure better outcomes.

Our aim is to increase the number of evaluations that inform and influence better policy delivery and decision-making across our policy areas through enabling the flow of information on what works well and less well. Within the department, this is facilitated by an internal online database of key analytical outputs from the policy cycle.

4.4.1 Engaging with other departments to learn from what works

To learn from shared challenges and best practice on monitoring and evaluation, DESNZ works closely with other government departments. For example:

  • DESNZ sits on the Cross-Government Evaluation Group (CGEG), a cross-disciplinary group with representation from most major departments which aims to improve the supply of, stimulate demand for, and encourage the use of good quality evaluation evidence in government, in order to improve policy development, delivery and accountability.
  • Within specific policy areas, cross-government working groups are also important. For example, the cross-government International Climate Finance (ICF) [footnote 14] Monitoring, Evaluation and Learning team meets monthly with representatives from DESNZ, the Foreign Commonwealth and Development Office and the Department for Environment Food and Rural Affairs. This group enables joint working to evidence common objectives and develop a clear overall picture of the impact of ICF funding.

4.5 Maintaining independent, transparent quality assurance of findings

The department recognises that impartial and effective assessment of quality is critical to ensure the reliability of our monitoring and evaluation findings. Therefore, both internal and external quality assurance processes have been thoroughly embedded.

During an evaluation project, policy teams follow their own quality assurance procedures in line with the DESNZ Evidence Framework. They also convene steering groups at key stages during the research to quality assure and influence strategic decisions.

4.5.1 Ensuring all published evaluations are peer reviewed

The DESNZ External Peer Review Group (PRG) has been set up to provide independent scrutiny to quality assure published evaluations and increase the credibility of our findings. The group comprises independent evaluation experts, allowing a wide range of perspectives to be drawn upon.

The process is as follows:

  • At key stages of evaluation projects, especially during scoping and design, teams are encouraged to consult the PRG.
  • Prior to publication, all evaluations should be sent for peer review by two reviewers with expertise in the relevant policy areas and evaluation methodologies. Comments are then provided, and evaluation teams work to address these in the published version of the report.

4.6 Encouraging new and innovative monitoring and evaluation approaches

The department aims to be at the forefront of innovation with regards to monitoring and evaluation. What is new and innovative naturally changes over time, so a particular emphasis will be placed on:

  • Maximising the impact and use of innovative and existing data sets. This helps to improve the rigour of monitoring and evaluation, minimise research burden and ensure that we continue to provide cutting-edge evidence to inform decision-making. For example, the department uses existing published National Statistics – the Digest of UK Energy Statistics (DUKES) and Energy Trends – to measure progress towards policy objectives, such as decarbonising the power sector by 2035, through tracking the amount of electricity generated by renewables. For more information, see Annex A1.
  • Actively adopting and considering innovative approaches to monitoring and evaluation. Over the past two decades, there have been huge changes in approaches used for monitoring and evaluation – this pace of change will likely continue. For example, the department will seek to use automated and reproducible approaches to monitoring and analysis where possible – leading to improvements in efficiency of data collection and processing, and increased accuracy in data analysis.
  • Remaining proactive in maximising opportunities from emerging trends. For example, rapid developments in Artificial Intelligence offer the potential for changes in areas relevant to evaluation such as designing primary research, conducting evidence reviews and analysing data sets.
  • Maintaining strong connections to national and international experts to ensure the department has access to the best skills, expertise, and evidence base.

5. Next steps

This is the first Monitoring and Evaluation Framework for DESNZ. It sets out both our current position and a clear pathway for how we can expand best practice in monitoring and evaluation across all our ambitious energy and net zero interventions.

We are seeking to maximise opportunities for learning and improving through ensuring comprehensive monitoring and evaluation coverage, embedding effective governance processes, building policy and analytical capacity and capability, facilitating a positive learning culture and encouraging new and innovative approaches.

We recognise that it will take time and resource to implement the vision and goals within this Framework in full. It will require consistent demand and expectation for monitoring and evaluation evidence from senior managers and decision-makers, adequate resourcing and evaluation capability from our policy makers and analysts, and effective cooperation with our partner organisations. For these reasons, we will use the Framework as a platform for promoting conversations and collaboration amongst our staff, partners and other stakeholders within and outside government.

The department’s Central Monitoring and Evaluation team will regularly review progress against our aims and update the Director of Analysis and senior boards as appropriate. This will include:

  • Assessing the development of monitoring and evaluation planning across the department.
  • Mapping out the application of current governance and quality assurance processes to identify gaps and further areas for improvement.
  • Monitoring the range and take-up of learning and development opportunities to build departmental monitoring and evaluation capability.
  • Ensuring internal accessibility of learnings from evaluations and tracking the delivery of our commitments to external publication.
  • Scoping opportunities for utilising new and innovative approaches for both monitoring and evaluation.

Annex A1. DESNZ monitoring and evaluation case studies

Case Study 1: The relationship between Monitoring, Evaluation and Benefits Management in the Smart Meter Implementation Programme

The Smart Meter Programme uses a combination of monitoring and evaluation activities to track and understand whether anticipated benefits are being realised and support delivery of the programme. 

Smart meters are replacing traditional gas and electricity meters as part of a national infrastructure upgrade that will make the energy system more efficient, maximise the use of renewable energy and deliver net zero. As of March 2023, 32.4m meters had been installed in homes and small businesses (57% of all eligible meters) [footnote 15].

A key anticipated benefit of the roll-out is that consumers will use the data and feedback on energy use provided by smart meters to reduce their energy consumption. [footnote 16] While energy savings are a high priority for benefits management, the type of evaluation research required to assess their delivery does not support regular monitoring, in part because of the significant lags involved in collecting sufficient data to ensure that the analysis is robust (and not impacted by, e.g., seasonal trends or short-term effects).

To address this, the Programme carried out a comprehensive early evaluation (the Early Learning Project) [footnote 17] which combined quasi-experimental analysis of energy savings from early installations with theory-based evaluation (drawing on survey and qualitative research) to identify the conditions and enablers that would lead to savings from subsequent installations being maximised.

This work informed a series of leading indicators for energy saving benefits (e.g., the provision of tailored advice at installation), which were integrated into monitoring activities to provide regular information about the delivery of this benefit area and complement periodic full evaluations of energy savings. It also informed several follow-up projects. [footnote 18]

Case Study 2: Mixed Method Evaluation of the Reformed Renewable Heat Incentive Scheme

The Reformed Renewable Heat Incentive (RHI) scheme aimed to encourage the installation and use of renewable heat technologies (RHTs) and support the development of a sustainable market for renewable heat that was less dependent on subsidy. The scheme was originally designed to meet the requirements of the European Union (EU) Renewables Directive (2009/28/EC), but the focus shifted to decarbonisation following EU exit.

The evaluation was designed to understand the impact of reforms to the scheme on both the Domestic and Non-Domestic RHI and to provide lessons for future policy development. It adopted a theory-based evaluation approach and was informed by the principles of realist evaluation, seeking to develop, test and refine realist theories about the reformed RHI throughout its lifetime.

Evidence was collected across multiple workstreams, undertaken between 2017 and 2022. These included: analysis of RHI administrative data, a recurring applicant survey, qualitative research with applicants and with the renewable heat supply chain, quasi-experimental analysis (regression discontinuity design), a Sustainable Markets Assessment, a Subsidy Cost-Effectiveness Assessment, and a Competition and Trade Assessment. Evidence from these activities was synthesised using realist “Context, Mechanism, Outcome (CMO)” theories.
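
To illustrate the intuition behind the regression discontinuity element of the design, the sketch below uses simulated data; the running variable, eligibility threshold and effect size are hypothetical and do not reflect the actual RHI scheme rules or evaluation results.

    import numpy as np

    # Minimal regression discontinuity sketch with simulated data. The running
    # variable, eligibility threshold and effect size are hypothetical and are
    # not taken from the RHI scheme or its evaluation.
    rng = np.random.default_rng(0)
    n = 2_000
    running = rng.uniform(-1, 1, n)      # e.g. distance from an eligibility threshold
    treated = running >= 0               # units at or above the threshold are subject to the reform
    outcome = 2.0 * running + 3.0 * treated + rng.normal(0, 1, n)

    # Compare mean outcomes within a narrow bandwidth either side of the threshold;
    # the jump at the cut-off estimates the local effect of the reform.
    bandwidth = 0.1
    above = outcome[(running >= 0) & (running < bandwidth)].mean()
    below = outcome[(running < 0) & (running >= -bandwidth)].mean()
    print(f"Estimated effect at the threshold: {above - below:.2f}")  # close to the simulated 3.0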

The evaluation found that, overall, the reforms were successful in increasing the subsidy cost effectiveness for both schemes, as well as increasing confidence in investments in the supply chain and positively influencing the take-up of heat pumps. Furthermore, the evaluation reported high levels of satisfaction with renewable heat technologies: satisfaction with domestic heat pumps was 84% within six months of installation, increasing to 92% after two winters.

Two final reports were produced, [footnote 19] and learnings from the evaluation have fed into the design of several policies, including the Green Gas Support Scheme, where they supported the decision to make tariff guarantees a compulsory part of the application process, and the Boiler Upgrade Scheme, where they supported decisions on grant levels.

Case Study 3: Monitoring Power Sector Decarbonisation using published National Statistics

Government departments publish a wide range of official statistics to serve the public good. These include UK National Statistics, which have been independently reviewed and accredited by the Office for Statistics Regulation as fully compliant with the high standards of trustworthiness, quality and value in the Code of Practice for Statistics. [footnote 20] Where it is possible to use existing published official or National Statistics to monitor the progress of policies, rather than needing to set up separate M&E streams, this can save time and cost and provide robust metrics.

The Government set a target to decarbonise the power sector by 2035. This means that more electricity generation must come from low carbon sources (renewables and nuclear), which do not emit carbon dioxide at the point of generation. The remaining fossil fuel generation must be offset by carbon capture, utilisation and storage technologies. This policy is monitored through published National Statistics, with UK power sector emissions reported in the annual Greenhouse Gas Emissions Statistics. [footnote 21] Further detail, including the volume and shares of electricity generated from each fuel in the UK, is published in the annual Digest of UK Energy Statistics (DUKES) and quarterly Energy Trends statistical publications. One metric used to monitor the progress of power sector decarbonisation is the share of low carbon electricity generation over time (published in DUKES table 5.6 [footnote 22] and Energy Trends table 5.1 [footnote 23]), which has increased from 23% of UK electricity generation in 2010 to 56% in 2022.

Published statistics can and will evolve as user requirements change. A separate row showing the share of electricity generation from low carbon technologies was added to the Digest of UK Energy Statistics (DUKES) to help users by making the metric readily accessible and removing the need to sum the renewables and nuclear shares themselves.
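
As an illustration of how this metric is constructed, the short sketch below sums renewables and nuclear shares; the figures are hypothetical placeholders rather than the actual DUKES table 5.6 or Energy Trends table 5.1 values.

    # Illustrative only: the generation shares below are hypothetical placeholders,
    # not actual DUKES table 5.6 or Energy Trends table 5.1 values.
    generation_shares = {      # share of UK electricity generation by fuel (%)
        "renewables": 41.5,
        "nuclear": 14.5,
        "fossil fuels": 40.0,
        "other": 4.0,
    }

    # Low carbon generation is defined in the case study as renewables plus nuclear.
    low_carbon_share = generation_shares["renewables"] + generation_shares["nuclear"]
    print(f"Low carbon share of generation: {low_carbon_share:.1f}%")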

Case Study 4: Monitoring the Social Housing Decarbonisation Fund (SHDF)

New official statistics have been developed to monitor the outcomes of specific schemes, such as the SHDF, in an authoritative manner and complement internally focused monitoring work to support policy fine tuning and delivery in real-time.

The SHDF aims to improve the energy performance of social rented homes to reduce carbon emissions and fuel bills, tackle poverty and support green jobs. [footnote 24]

To ensure efficiency and maximise existing learning, the approach for producing official statistics for SHDF has drawn upon the approaches, pipelines and processing systems developed for previous schemes such as the Energy Company Obligation and Green Homes Grant schemes. All grant recipients of the SHDF are required to provide data to DESNZ on a monthly basis. This data is channelled into the department’s secure data management system (DMS). It is then extracted by a team of statisticians who move the data to an analysis platform (CBAS) where they perform a series of quality control checks on the data. [footnote 25] The output is a monthly official statistics [footnote 26] release on gov.uk [footnote 27], providing a transparent evidence base on the outcomes of the scheme for use by government, Parliament and the public.
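
As a minimal sketch of the kind of automated quality control step that could sit within such a pipeline (the column names, validation rules and sample records below are illustrative assumptions, not the department's actual DMS or CBAS processing):

    import pandas as pd

    # Illustrative monthly grant recipient return; the column names and values are assumptions.
    returns = pd.DataFrame({
        "property_reference": ["A1", "A2", "A3"],
        "measure_type": ["cavity wall insulation", "heat pump", None],
        "install_date": ["2024-01-15", "2024-01-20", "2023-12-01"],
        "measure_cost_gbp": [1200.0, 8500.0, -50.0],
    })

    def quality_checks(df: pd.DataFrame, reporting_month: str) -> pd.DataFrame:
        """Flag records that fail basic validation rules (illustrative only)."""
        df = df.copy()
        df["install_date"] = pd.to_datetime(df["install_date"])
        df["missing_fields"] = df[["property_reference", "measure_type"]].isna().any(axis=1)
        df["negative_cost"] = df["measure_cost_gbp"] < 0
        df["outside_month"] = df["install_date"].dt.to_period("M") != pd.Period(reporting_month)
        df["failed_qc"] = df[["missing_fields", "negative_cost", "outside_month"]].any(axis=1)
        return df

    checked = quality_checks(returns, "2024-01")
    print(checked[["property_reference", "failed_qc"]])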

Project delivery teams use grant recipients’ data to produce dashboards for internal monitoring in order to understand progress relating to key indicators (such as measures installed) and milestones agreed during the planning stage. This helps to target support to SHDF grant recipients and ensure that the scheme remains on track.

As well as official statistics and internal monitoring, the data gathered contributes to more detailed evaluations to understand whether this use of public funds under the SHDF has delivered value for money and the intended benefits. It is also used for other statistical, research and fraud prevention purposes.  

Annex A2. Standards for monitoring and evaluation in DESNZ

In DESNZ, monitoring and evaluation should be built into the design of the intervention and considered throughout its development and implementation. In addition to maximising efficiency and minimising financial cost, this will enable both the intervention and the monitoring and evaluation to be tailored to maximise the potential for robust, useable findings that can help to improve current implementation as well as future decision-making. The outputs and learning from earlier evaluations should feed into the rationale, objectives, and appraisal of future interventions. Accordingly, policy teams are expected to contact their local analysts as early as possible when designing a new intervention.

Expectations for monitoring and evaluation planning in DESNZ

Early thinking about monitoring maximises our ability to track progress, and building evaluation considerations into the intervention design ensures that a reliable understanding of the outcomes can be achieved. Without this, appropriate baseline data may not be gathered and a comparison or control group may be unavailable to help understand what would happen in the absence of the policy or programme.

There are eleven core expectations for monitoring and evaluation planning in DESNZ:

1) Demonstrate how learning from previous monitoring and evaluation has been addressed in the intervention design

The monitoring and evaluation plans produced during the policy or programme development phase should include information on how previous evidence has informed the development of the intervention. If no such evidence exists, an explanation should be provided.

2) Clarify the intervention objectives and anticipated effects in a Theory of Change

As outlined in the Magenta Book, developing a Theory of Change is important for creating a common understanding and framework for considering the proposed inputs associated with the policy or programme and the causal chain that leads from these inputs through to the expected outcomes and impacts. [footnote 28]

The Theory of Change should be developed and tested with key stakeholders to ensure that any issues and assumptions are identified and checked against reality. Once the theory has been finalised, it should be signed off by the Senior Responsible Officer and/ or the project/ programme board. It should then be updated regularly as the intervention and evidence base develop.

3) Assess what level of monitoring and evaluation is proportionate

Not all interventions will require the same level of scrutiny or have the same learning needs. A pioneering intervention which is high risk and costly, for example, is likely to require comprehensive monitoring throughout implementation and a large-scale evaluation involving a range of approaches and methods. By contrast, a low-risk, well-evidenced or low-priority intervention may only necessitate a light-touch monitoring and evaluation exercise to confirm it has been delivered as intended and achieved the predicted outcomes. Therefore, monitoring and evaluation activity must be prioritised to ensure efficiency and value for money.

Table 1: Summary of proportionality factors which inform scale of M&E activity

For spend interventions

Factors supporting increased spend/effort on M&E:
  • The investment is of high monetary value.

Factors supporting decreased spend/effort on M&E:
  • The investment is of low monetary value.

For regulatory interventions

Factors supporting increased spend/effort on M&E:
  • The impact expected is high (see Regulatory Policy Committee (RPC) proportionality guidance [footnote 29] and Magenta Book supplementary guidance [footnote 30]).
  • There is a statutory obligation to produce a post-implementation review (PIR).

Factors supporting decreased spend/effort on M&E:
  • The impact expected is low.
  • There is no statutory obligation to produce a PIR.

For all interventions

Factors supporting increased spend/effort on M&E:
  • The intervention is high profile, of strategic importance or contentious.
  • There is a high degree of associated risk, uncertainty or novelty.
  • M&E is likely to fill a strategic evidence gap and expand the evidence base on what works.
  • M&E findings are likely to influence intervention delivery and ongoing rollout, e.g., if monitoring data is needed in order to take decisions during the implementation period.
  • Distributional impacts are expected which mean that certain population groups will be disproportionately affected.

Factors supporting decreased spend/effort on M&E:
  • The intervention is less strategically important.
  • The nature of the intervention’s causal mechanisms and impacts are well understood or can be estimated by considering a small number of factors.
  • Further M&E is unlikely to add to the evidence base.
  • There will be few opportunities to influence the delivery of the intervention.
  • There are no or limited distributional impacts expected.

Table 1 outlines several criteria to guide considerations on the scale of planned monitoring and evaluation activity. These criteria provide a level of flexibility, enabling considered and sensible judgements to be made not only on design but also on resourcing and budget. Flexibility is particularly important given that, in some scenarios, an intervention may relate to factors in both the increased and the decreased effort/spend categories.

For novel interventions which are rolled out at pace and will run for only a short time, there are additional considerations relating to what is feasible given timing constraints. In these instances, there may be trade-offs between the timeliness and rigour of monitoring and evaluation activity. However, a priority must be placed on ensuring that timely monitoring is conducted from the beginning of implementation so that opportunities for positively influencing intervention delivery are maximised.

4) Assess what evidence is needed, who will use it, and when it will be required

When developing a monitoring and evaluation plan, it is important to have a clear idea from the start about the audience and intended use of monitoring and evaluation findings, as well as the timing of the decisions that the evidence needs to feed into, e.g., spending reviews. [footnote 31] This will be crucial in determining, for example, what data needs to be collected.

By understanding the range of requirements and their relative priority, monitoring and evaluation can be tailored to generate the relevant evidence to the required timescales, and a decision can be taken early about the questions which can realistically be answered in the desired timescales.

At this stage, the links between the benefits management, monitoring and evaluation requirements should start to be considered.

5) Identify the evaluation objectives and questions

Evaluation objectives need to be clearly defined and meaningful to narrow the focus of the evaluation and ensure that the findings are relevant to decision makers. It is important to return to the Theory of Change when developing objectives to understand the underlying assumptions and anticipated outcomes and impacts that need to be tested as well as the evidence gaps that need to be filled.

Examples of evaluation objectives include:

  • Offering ‘lessons learned’ to inform development of the planned main Heat Network Investment Project pilot scheme and any future similar schemes.
  • Understanding the impact of providing additional funding to Local Authorities to target energy efficiency installations among private rented sector properties within the Green Deal Communities project.

Evaluations can be designed to answer a wide range of potential questions. It is important to be clear from the outset what these questions are and how the findings from them are expected to be used, by whom and when. Well-developed evaluation questions should reflect the objectives of the evaluation, the objectives of the intervention, as well as the priorities and evidence needs of stakeholders.

Examples of evaluation questions include:

  • What has worked well, less well and why?
  • How much of the impact can be attributed to the intervention?
  • Have different groups been affected in different ways, how, why, and in what circumstances?
  • Is the intervention a good use of resources?

6) Identify the appropriate monitoring approach

All projects and programmes should undertake monitoring of appropriate indicators and metrics. [footnote 32] Monitoring is used for many purposes, as set out in Chapter 2. It is important at the outset to identify the intended purposes and impact of monitoring, including but not limited to evaluation. Once all uses of monitoring have been ascertained, consideration should be given to the data requirements (discussed further on in this Annex).

The monitoring approach should be underpinned by the selection of appropriate and SMART (Specific, Measurable, Achievable, Relevant and Time-bound) indicators which use good quality data sources and reflect the direct consequence of the activity undertaken. While indicators associated with inputs and outputs are typically used in performance monitoring and ‘in-flight’ decision making during the delivery period, outcome and objective indicators are most suited to monitor or evaluate longer term policy or programme impacts.

To achieve the benefits of monitoring, it is important that consideration is given to how evidence on the progress of an intervention (e.g., key performance indicators and other metrics) can be reported at the right time and to the right decision makers (policy and other stakeholders).
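
As a minimal sketch of how SMART indicators with baselines and time-bound targets might be recorded and tracked (the indicator, figures and structure below are hypothetical and not drawn from any DESNZ programme):

    from dataclasses import dataclass

    @dataclass
    class Indicator:
        """A monitoring indicator with a baseline and a time-bound target (illustrative only)."""
        name: str
        unit: str
        baseline: float
        target: float
        target_year: int

        def progress(self, latest: float) -> float:
            """Proportion of the baseline-to-target distance achieved so far."""
            return (latest - self.baseline) / (self.target - self.baseline)

    # Hypothetical output indicator used purely for illustration.
    installs = Indicator("Measures installed", "homes", baseline=0, target=50_000, target_year=2026)
    print(f"{installs.name}: {installs.progress(latest=20_000):.0%} of target achieved")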

7) Identify the appropriate evaluation approach

Evaluations in DESNZ should use robust design approaches and methodologies recommended in the Magenta Book [footnote 33] to enable different aspects of our interventions to be understood, such as impact, causality, effectiveness, efficiency, value for money and unintended consequences.

Approaches and methodologies should be considered on a case-by-case basis and chosen to best reflect evidence needs. Where appropriate, experimental and quasi-experimental methods can be used to test impact and causality, either at an aggregate level or for subsets of impacts and processes. Where such approaches are unsuitable, theory-based methods can effectively establish the extent to which evidence shows that the intervention caused the desired change and whether other possible explanations for the change can be discounted.

Theory-based methods tend to be particularly suited for the evaluation of complex interventions or simple interventions in complex environments. In these situations, where determining the effect size can often be difficult, theory-based methods can confirm whether an intervention had an effect in the desired direction. They can also explain why an intervention worked, or not, and inform translation to other populations, places or time periods.

Decisions on the evaluation approach and methodologies should only be taken once proportionality has been considered, evaluation objectives and questions have been clearly articulated and there is a well-developed understanding of the context, available evidence and capacity for data collection. For complex programmes or policies, often a mixed method approach is most appropriate for maximising learning opportunities.

8) Identify data requirements for monitoring and evaluation

Monitoring and evaluation data requirements should be built in from the start of any intervention, so that data collection processes and analytical pipelines are established alongside programme or policy design and related legislation. Where this does not occur, an evaluation may be impossible, severely limited or unnecessarily expensive, and many monitoring activities (e.g., auditing) may be more difficult.

Planning activities should include:

Identifying what sources of data and other ongoing data collection processes already exist: Existing sources can often meet at least some of the evidence needs, create a richer data set, enhance the analysis, avoid duplication of data collection and reduce burden. Sources may include administrative data, monitoring data from other schemes, Local Authority data or larger-scale (long-term) surveys. When carried out effectively, combining data sets can enable complex monitoring and evaluation questions to be answered in a cost-effective way. Care must be taken to assess the quality and usefulness of the data and to ensure accurate unique identifiers can be collected to allow records to be paired, e.g., Unique Property Reference Numbers (a simple linkage sketch is provided after these planning activities). In the absence of unique identifiers, thoughtful consideration must be given to the matching protocols.

Identifying additional data requirements: As a department which often delivers policies through intermediaries (for example, Local Authorities) rather than directly, DESNZ may not have access to sufficiently accurate or complete data from existing sources. There are a number of considerations, e.g., whether outcome related data or financial data relating to policy/ programme expenditure will be required. It is important to define data collection roles and responsibilities and ensure that appropriate systems and permissions are in place.

Data protection and security requirements: Data protection requirements should be considered from the intervention design phase so that data can be shared, used, and stored appropriately. This will often involve building data collection into required standards and delivery arrangements with external partners e.g., through conducting a Data Protection Impact Assessment or setting up Data Sharing Agreements and scheme privacy notices.

Digitising and automating data collection and processing: In line with the Analysis Function Reproducible Analytical Pipelines (RAP) strategy, [footnote 34] monitoring data and analysis should be made reproducible, re-usable, high quality and efficient. DESNZ has set out an implementation plan [footnote 35] responding to the RAP strategy, describing how the department will provide the right tools, the right capability and the right culture. Data should be ‘digital-by-default’, meaning that, where possible, systems supporting data ingestion, storage, processing and visualisation should be digital and automated to improve efficiency and data accuracy, and reduce administrative burden. Existing systems and pipelines should be used where appropriate.

Organising data collection:  How data collection will be coordinated should be considered early on, e.g., by creating a logical framework (logframe) with appropriate indicators to show what is being measured as well as baselines, milestones and targets to measure progress. Logframes should be developed with the input of delivery partners.
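
The sketch below illustrates the record pairing described under ‘Identifying what sources of data and other ongoing data collection processes already exist’; the data sets, column names and values are hypothetical assumptions used purely for illustration.

    import pandas as pd

    # Hypothetical scheme monitoring data keyed on Unique Property Reference Number (UPRN).
    scheme_data = pd.DataFrame({
        "uprn": [100001, 100002, 100003],
        "measure_installed": ["loft insulation", "heat pump", "solar PV"],
    })

    # Hypothetical existing administrative data set with property characteristics.
    admin_data = pd.DataFrame({
        "uprn": [100001, 100002, 100004],
        "property_type": ["terraced", "semi-detached", "flat"],
        "epc_band": ["D", "E", "C"],
    })

    # Pair records on the unique identifier; a left join keeps every scheme record
    # and shows where no administrative match was found.
    linked = scheme_data.merge(admin_data, on="uprn", how="left", validate="one_to_one")
    match_rate = linked["property_type"].notna().mean()
    print(linked)
    print(f"Match rate: {match_rate:.0%}")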

Monitoring data requirements for regulatory post implementation reviews

Monitoring data can provide a relatively light-touch evidence base for regulatory Post Implementation Reviews (PIRs). In cases where the regulation is of low impact and it is not proportionate to undertake substantial primary data collection or additional analysis, monitoring data can be particularly useful. In some cases, monitoring information relevant to a regulation may already be captured on a regular basis. In other circumstances, it will be necessary to plan for further data to be collected and for a more comprehensive evaluation to be conducted (see Annex A3 for further information on PIR planning expectations).

Further evaluation data requirements

Good quality evaluations are only possible with consistent, accurate and complete data. In addition to monitoring and administrative data, evaluations often require additional data to be able to test how the inputs, outputs, outcomes and impacts of the intervention are linked.

Quantitative data should be collected to assess change in outcomes in both the ‘treated’ and counterfactual groups – e.g., surveys conducted before and after intervention implementation to assess change in attitudes and behaviour.

Qualitative data should be collected to assess implementation processes and whether anticipated outcomes and impacts have materialised (those which cannot be assessed quantitatively); identify any unintended outcomes, recognise the influence of wider contextual factors, or participants’ experiences or perceptions of implementation; and examine the processes involved in transforming inputs into outcomes to understand what works for whom, how, and in what context.

Relevant evaluation data would normally be collected before the intervention starts (baseline), during the delivery (interim), and after completion (follow-up), to allow ‘before and after’ change to be assessed. The exact length of time for collection of ‘after’ data should be aligned with when outcomes and impacts are expected to materialise.
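
As a worked illustration of why both before-and-after measurements and a counterfactual group are needed, the sketch below applies a simple difference-in-differences calculation to hypothetical survey scores; these are not results from any DESNZ evaluation.

    # Hypothetical mean outcome scores from surveys conducted before and after
    # implementation, for the 'treated' group and a comparison (counterfactual) group.
    treated_before, treated_after = 40.0, 55.0
    comparison_before, comparison_after = 41.0, 47.0

    # Difference-in-differences: the change in the treated group over and above
    # the change observed in the comparison group.
    did_estimate = (treated_after - treated_before) - (comparison_after - comparison_before)
    print(f"Estimated change attributable to the intervention: {did_estimate:.1f} points")
    # Here: (55 - 40) - (47 - 41) = 15 - 6 = 9 points.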

9) Secure the resources

As already mentioned, a judgement is required on the scale and form of monitoring and evaluation needed for an intervention, including whether it should be commissioned externally or conducted (either partly or wholly) in-house. In some circumstances, it may be useful to undertake a scoping or feasibility study to support this decision-making process.

All evaluations, even those commissioned to an external contractor, will require significant internal input (from analysts, policy professionals and project delivery professionals) to ensure that they are designed and delivered successfully. The same goes for monitoring, even if some of the monitoring work will be done by a delivery partner. Several resources will need to be considered, as follows:

Financial resources: It is not possible to give a fixed sum or proportion of budget for monitoring and evaluation, as it will vary with the considerations above and the type of data required. Externally commissioned evaluation budgets can range from tens of thousands to millions of pounds depending on the level of evidence and resource required.

Management and ownership: In keeping with the Aqua Book [footnote 36] principles, the DESNZ Evidence Framework is a QA approach with clear and explicit roles and responsibilities, where QA is continuous and proportionate and there is an explicit audit trail that records the evidence sources, review and clearance associated with evidence and analytical products.

Steering groups: Establishing a steering group for the monitoring and evaluation will help to ensure activities are designed and managed to meet the requirements of relevant stakeholders and remain on track. A steering group will usually include the policy lead(s) and relevant team members, supporting analyst(s), economists responsible for the project appraisal, key delivery partners, other government departments, finance, auditing, digital and IT colleagues. Steering groups are particularly important for large-scale monitoring and evaluation, but this type of scrutiny and support is always useful, even for light-touch monitoring and evaluation. Existing governance groups could be utilised in these cases.

Delivery bodies: Successful monitoring and evaluation will depend on the engagement and cooperation of those organisations and individuals involved in delivery of the policy or programme. In many cases they will be the face of the intervention and will have a huge impact on the data quality and its usefulness for monitoring and evaluation. Minimising the burden placed on stakeholders should be a key consideration.

10) Conduct or commission the monitoring and evaluation

Conducting monitoring – including work on data protection arrangements, creating and managing ‘pipelines’ for data ingestion, processing, visualisation and publication – requires dedicated resource within DESNZ and delivery partners.

All evaluations should be managed by a dedicated internal project manager and have defined terms of reference and a project specification. In cases where commissioning is necessary, internal guidance on procurement should be followed.

11) Use and publish the monitoring and evaluation findings

At the time of planning monitoring and evaluation, consideration should be given as to how the findings will be used and disseminated.

Monitoring publications

To ensure public transparency and accountability, monitoring data is published as statistics, transparency publications or management information. DESNZ adheres to the Code of Practice for Statistics and other guidance from the Office for Statistics Regulation [footnote 37] in relation to such publications to enable the orderly release of data in an accessible form.

Evaluation publications

DESNZ expects all our evaluations to adhere to the five principles of the Government Social Research Protocol. [footnote 38]

  • Principle 1: The products of government social research and analysis (which includes evaluation) will be made publicly available
  • Principle 2: Every report should have a transparent technical annex, so that findings can be replicated
  • Principle 3: Government social research and analysis must be released in a way that promotes public trust (findings should not be influenced by political concerns)
  • Principle 4: Publication arrangements will be developed for all social research and analysis produced by government (DESNZ should publicly announce which research projects have been commissioned and publish high-level information about those projects)
  • Principle 5: Responsibility for the release of social research and analysis produced by government must be clear (in the case of DESNZ, this is the Government Social Research Head of Profession)

As outlined in Chapter 4, DESNZ requires all externally commissioned evaluation reports to be sent for independent peer review prior to publication, and reviewers’ comments to be addressed.

Other ways of disseminating and using the evaluation evidence should also be considered early and reviewed regularly by the steering group, such as identifying the best communication channels and output formats to reach users. One-page summaries, video outputs, infographics and engaging tools can help key messages reach the right audience. The format of outputs should be agreed with all evaluation stakeholders, and outputs should inform delivery and key decision points and feed into new policy development.

Evaluation project closure procedures facilitate the use of findings:

  • Quantitative evaluation data should be appropriately anonymised and, where possible, submitted to a secure data repository, such as the UK Data Service [footnote 39], to facilitate further analysis (an illustrative anonymisation sketch follows this list).
  • All relevant materials should be submitted to DESNZ's internal online database of key analytical documents.
  • DESNZ evaluation reports should be published on gov.uk or devtracker [footnote 40] as well as on the forthcoming cross-government Evaluation Registry. Post Implementation Reviews (PIRs) should be published on gov.uk and legislation.gov.uk alongside any RPC opinions on the quality of evidence (for more information on PIR requirements and governance, see Annex A3).
  • Evaluation outputs need to be easily accessible for policy makers and available when they can influence decisions.
  • Reports should have a transparent technical annex, so findings can be replicated.
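As an illustration of the preparation typically involved before deposit, the sketch below shows one possible pseudonymisation step in Python using pandas. The column names and salt value are hypothetical placeholders; any real anonymisation must be designed alongside the project's data protection arrangements and the repository's deposit requirements.

# Illustrative pseudonymisation before deposit in a data repository.
# Column names and the salt value are hypothetical placeholders.
import hashlib
import pandas as pd

SALT = "project-specific-secret"  # placeholder; store securely in practice

def pseudonymise(df: pd.DataFrame) -> pd.DataFrame:
    """Return a copy of the survey data with direct identifiers removed or hashed."""
    out = df.copy()
    # Replace the respondent identifier with a salted one-way hash.
    out["respondent_id"] = out["respondent_id"].astype(str).map(
        lambda v: hashlib.sha256((SALT + v).encode()).hexdigest()[:16]
    )
    # Drop fields that directly identify individuals (hypothetical column names).
    out = out.drop(columns=["name", "address", "postcode"], errors="ignore")
    # Coarsen exact interview dates to month level to reduce re-identification risk.
    out["interview_month"] = pd.to_datetime(out["interview_date"]).dt.to_period("M").astype(str)
    return out.drop(columns=["interview_date"])

# Example usage with a two-row illustrative dataset:
sample = pd.DataFrame({
    "respondent_id": ["R001", "R002"],
    "name": ["A N Other", "J Bloggs"],
    "address": ["1 Example Street", "2 Example Street"],
    "postcode": ["AB1 2CD", "EF3 4GH"],
    "interview_date": ["2024-01-15", "2024-02-03"],
    "energy_saving_action": [1, 0],
})
print(pseudonymise(sample))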

Annex A3. Key Governance Processes

Evaluation governance

There are several governance processes that apply to the commissioning and implementation of evaluation projects in DESNZ.

During the design and commissioning phases:

  • All proposals for commissioned evaluations are reviewed and quality assured by a panel of senior analysts before approval is given for procurement.
  • Information on monitoring and evaluation planning should be submitted during the design phase to the department’s internal monitoring and evaluation tracker. [footnote 41]

During the project:

  • Policy teams follow their own quality assurance procedures in line with the DESNZ Evidence Framework. They will also convene steering groups at key stages during the research to quality assure and influence strategic decisions.
  • The department’s Central M&E Team regularly review key risks and updates on evaluations submitted to the department’s internal monitoring and evaluation tracker, relaying any points of interest to the DESNZ Director of Analysis and senior boards.
  • The Central M&E Team also review the monitoring and evaluation progress of projects within the Government Major Projects Portfolio [footnote 42] every six months and provide relevant updates to the Director of Analysis and senior boards.

During the publication phase:

  • All evaluation reports are submitted for external peer review by independent evaluation experts. Reviewers’ comments are then addressed ahead of publication (see Chapter 4 for further information).
  • All DESNZ research and evaluation projects are expected to adhere to the Government Social Research Publication Protocol. [footnote 43]
  • All evaluation reports should be uploaded to DESNZ’s internal online database of key analytical documents.

Investment Governance

The DESNZ Portfolio and Investment Committee (PIC) scrutinises and approves significant, [footnote 44] risky or contentious investments of taxpayer funds proposed by the department at Strategic Business Case, Outline Business Case and Full Business Case stages. As demonstrated in Figure 3, if PIC approval is required, the first stage is the keyholder stage where experts from across the department review business cases to assess whether they provide PIC with enough information to make an informed decision. If PIC approval is not required, the Business Case should follow the Director General Group’s procedure for spending approval.

Figure 3: Diagram showing DESNZ investment governance processes

Develop business case

Either:

PIC approval not required:
Director General Group’s procedure for spending approval

Or:

PIC approval required:
Keyholder review and agreement
Portfolio and Investment Committee (PIC) approval

A monitoring and evaluation plan meeting the standards detailed in Figure 4 is expected within the management section of the business case. This is assessed at keyholder stage by the Central Analysis team and then by PIC.


Figure 4: Diagram detailing DESNZ investment M&E planning requirements

Strategic Business Case

Provides an initial assessment of the monitoring and evaluation activity required.

Includes:

  • How previous M&E has informed policy development
  • Expected impacts and outcomes of the intervention
  • Likely scale of the evaluation in relation to proportionality
  • Provisional data requirements
  • Initial estimate of budget and resources for M&E
  • Consideration of scope for piloting / testing prior to implementation

Outline Business Case

Provides a more developed picture of the scale of ambition and emerging plans.

Includes previous points and:

  • A theory of change (a logic model and narrative of what is expected to happen), reviewed by relevant stakeholders and signed off by the SRO/programme board
  • The main monitoring and evaluation objectives
  • The likely uses and users of M&E evidence
  • A high-level timetable of the main stages of evaluation planning and delivery against the programme timetable

Full Business Case

Details scope of work to ensure sufficient resources and clear responsibilities.

Includes previous points and:

  • A set of evaluation questions
  • Proposed arrangements for data collection, including responsibilities for data access and quality arrangements with delivery partners
  • An initial monitoring and reporting framework, including a log frame of key indicators, data sources, milestones, targets and a reporting plan (an illustrative sketch follows this figure)
  • Detail of the research approach and methodologies
  • M&E budget and resourcing
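To illustrate what a log frame of key indicators might contain, the sketch below represents two hypothetical indicators as structured data in Python. The indicator names, baselines, targets and data sources are invented for illustration and do not relate to any DESNZ programme.

# Illustrative log frame of key indicators, held as structured data.
# All values below are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Indicator:
    """One row of a hypothetical log frame of key indicators."""
    name: str
    data_source: str
    baseline: float
    target: float
    milestone: str            # when the target is expected to be reached
    reporting_frequency: str

log_frame = [
    Indicator(
        name="Homes receiving an energy efficiency measure",
        data_source="Delivery partner monthly returns",
        baseline=0,
        target=10_000,
        milestone="End of delivery year 2",
        reporting_frequency="Quarterly",
    ),
    Indicator(
        name="Estimated annual carbon savings (tCO2e)",
        data_source="Scheme administrative data and modelling",
        baseline=0,
        target=25_000,
        milestone="One year after scheme closure",
        reporting_frequency="Annually",
    ),
]

# A simple reporting plan can iterate over the log frame to generate
# progress updates for the programme board.
for indicator in log_frame:
    print(f"{indicator.name}: target {indicator.target:,} by {indicator.milestone} "
          f"({indicator.reporting_frequency})")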

Regulatory governance

Departments and partner bodies are required to produce impact assessments (IAs) assessing the costs and benefits of regulatory changes prior to consultation, enactment and implementation. Post-implementation reviews (PIRs) of these regulatory changes are a key element of the policy-making cycle and provide an evidence-based evaluation of the effectiveness of a measure after it has been implemented and has been in operation for a period of time. Where applied, review clauses within primary or secondary legislation currently impose a statutory duty to carry out PIRs within a specified timescale. Evidence from PIRs will support decisions on whether to renew, amend, remove or replace the measure.

Figure 5: Diagram showing DESNZ regulatory governance processes

1. Develop IA/PIR and Clearance Statement

Either: Chief Analyst sign-off not required

2. Policy team’s analytical sign-off procedures (then go to step 4)

Or: Chief Analyst sign-off required

2. Central Analysis Team review
3. Chief Analyst sign-off

Then, in either case:

4. Submit for Special Adviser and Ministerial sign-off
5. Submit for Regulatory Policy Committee opinion
6. Submit for Cabinet Office write-round

IAs and PIRs of regulations which are high impact, high profile or based on novel or innovative analysis require review by the DESNZ Central Analysis team and sign-off by the Director of Analysis prior to ministerial sign-off for publication. Those that do not require this level of sign-off follow their policy team’s analytical sign-off procedures, as shown in Figure 5. The analysis of both IAs and PIRs meeting certain criteria is also scrutinised by the RPC. [footnote 45]

For all regulations at IA stage, it is expected that a summarised monitoring and evaluation plan will be included within the IA. In DESNZ, a more detailed PIR review plan following an internal template is also required for every high impact regulation with a statutory review commitment.

These plans should be grounded in Magenta Book principles for conducting PIRs [footnote 46] and reviewed by the Central Analysis team using the following checklist as a guide.

PIR plans should:

  • Be comprehensive and proportionate to the scale of the regulation(s) under review
  • Outline the key objectives of the regulation(s) under review
  • Identify any existing evidence/data and cite the source
  • Justify the decision to collect/not collect new evidence
  • Where applicable, include realistic and suitable timeframes for evidence collection
  • Include research questions that are relevant and consistent with the key objectives
  • Outline how key stakeholders will be engaged

  • Detail plans for accountability and progress monitoring, as well as handover

  1. An “intervention” is any policy, programme or other government activity meant to elicit a change. 

  2. BEIS monitoring and evaluation framework 

  3. The Magenta book 

  4. The green book: appraisal and evaluation in central government 2020

  5. OECD: Monitoring and evaluation 

  6. The Magenta book 

  7. See NAO: About us for more information on the role of the National Audit Office. 

  8. See Regulatory Policy Committee for more information on the Regulatory Policy Committee 

  9. The Magenta book 

  10. Government social research publication protocols 

  11. For further guidance on benefits management, see the Infrastructure and Projects Authority’s guide: Guide for effective benefits management in major projects

  12. A Post-Implementation Review provides an evidence-based evaluation of the effectiveness of a regulation after it has been implemented and operational for an appropriate period of time. 

  13. Department for Energy Security and Net Zero 

  14. International climate finance 

  15. Smart meters statistics 

  16. Smart meter roll out: cost benefit analysis 2019 

  17. Smart metering early learning project and small scale behaviour trials 

  18. One project identified best practice approaches to energy efficiency advice: Best practice guidance for the delivery of energy efficiency advice to households during smart meter installation visits. Another project trialled innovative approaches to energy consumption feedback such as apps: Smart energy savings SENS competition

  19. Domestic RHI evaluation: Reforms to the domestic renewable heat incentive evaluation. Non Domestic RHI evaluation: Reforms to the non domestic renewable heat incentive evaluation 

  20. Code of practice for statistics 

  21. UK greenhouse gas emissions statistics 

  22. Electricity: Chapter 5 Digest of United Kingdom energy statistics (DUKES) 

  23. Electricity: Section 5 Energy trends 

  24. Social Housing Decarbonisation Fund Wave 2 

  25. One of the quality assurance checks involves reviewing installations against Trustmark records to ensure that energy efficiency measures are registered and supplied against the required quality standard. Another involves comparing the information from the SHDF scheme with other energy efficiency schemes to ensure that costs and energy and carbon savings are broadly consistent across the schemes and understand any differences. 

  26. Produced in line with the Code of Practice for Statistics. 

  27. Social Housing Decarbonisation Fund statistics 

  28. The Magenta book 

  29. Proportionality in regulatory submissions guidance 

  30. The Magenta book 

  31. Spending reviews typically take place every two to five years. They normally set departmental budgets for three to five years ahead and shape the scale and nature of public service programmes and public investment. 

  32. For example, number of applications, number of smart meters in homes and small businesses across Britain, or % of people with smart meters who say they’ve taken steps to reduce their energy use. 

  33. The Magenta book 

  34. Reproducible analytical pipelines strategy: Section 1 

  35. Analysis function reproducible analytical pipelines (RAP) strategy 2023: DESNZ implementation plan, Local strategic plan 

  36. The Aqua book: Guidance on producing quality analysis for government 

  37. For example, OSR Regulatory guidance for the transparent release and use of statistics and data: PID - when data are quoted publicly they should be published in an accessible form 

  38. Government social research publication protocols   

  39. UK Data Service 

  40. Devtracker FCDO 

  41. An online database which has been created to record monitoring and evaluation activities across all DESNZ policy areas. 

  42. Major Projects Authority 

  43. Government social research publication protocols   

  44. Projects with a Whole Life Cost of over £50m to DESNZ (or other delegated authority where it applies). 

  45. Regulatory Policy Committee 

  46. Magenta Book supplementary guide. Guidance for Conducting Regulatory Post Implementation Reviews