Guidance

Local growth programmes evaluation strategy

Published 11 January 2024

1. Introduction

Purpose

This document provides an overarching summary of the Department for Levelling Up, Housing and Communities’ (DLUHC or ‘the department’) current (as of January 2024) evaluation approach for programmes which target local growth (referred to as ‘local growth programmes’ throughout this document). It is the department’s first portfolio-level evaluation strategy for local growth programmes. It builds on the recently published DLUHC evaluation strategy, providing a more comprehensive discussion of how DLUHC is approaching evaluation for local growth programmes.

The intended audiences for the document are:

1. Organisations and individuals that have an interest in how DLUHC is approaching evaluation activity for local growth programmes.

2. Evaluation practitioners in the public and private sector that DLUHC will work with to complete evaluation activity. This document provides key information about how we currently set up and approach evaluation activity for local growth programmes. It will also be useful for stakeholders who are completing their own place-based evaluation.

3. Policy and programme officials who lead the development and delivery of new local growth funds or programmes. This document provides a useful framework discussing different types of evaluation, different levels of impact evaluation and common issues to consider in impact evaluation, which can support the development of evaluation objectives.

What is evaluation?

Evaluation is the systematic assessment of the effectiveness of an intervention’s delivery, its impact and its value for money. High quality evaluation of policy interventions allows for systematic learning of ‘what works’, ‘what doesn’t work’ and ‘why’. This evidence provides greater accountability for our spending decisions and enables evidence-based policy making. DLUHC’s evaluation strategy outlined our commitment to evaluating key policies and to building better, more effective and efficient public policy interventions and public services.

What type of funding is covered by this strategy?

This strategy applies only to programmes or funds which satisfy both of the following criteria:

  • levelling up programmes which target local growth to some extent (for example, those which aim to improve economic performance at a sub-national level), and
  • spending or tax incentive programmes for which DLUHC is accountable (that is, the strategy does not cover other types of intervention, such as devolving powers or legislation, or programmes led by other government departments)

Devolution of powers (for example, Level 4 Mayoral Combined Authorities or Trailblazers) is not covered by the strategy established here since it follows different institutional and funding arrangements. However, many of the principles outlined may be applicable to devolution evaluation undertaken by DLUHC too.

In addition, housing-related programmes are not in scope as there is already a separate Housing monitoring and evaluation strategy.

2. Evaluating local growth programmes

This section provides an overview of evaluation for local growth programmes – what it is, why we do it and how we do it.

What is evaluation?

Evaluation is a systematic assessment of the design, implementation and outcomes of an intervention. It involves understanding how an intervention is being, or has been, implemented and what effects it has, for whom and why. It identifies what can be improved and estimates its overall impacts and cost-effectiveness. The Magenta Book sets out the government guidance for evaluation.

For local growth programmes we may conduct several types of evaluation, including:

a. Process evaluation – Analysis of whether an intervention is being or has been implemented as intended, and what aspects of it are working more or less well and why. These evaluations can be useful to, for example, inform changes to delivery for the intervention or future interventions, evidence delivery effectiveness and provide early insight on whether an intervention is likely to achieve its objectives.

b. Impact evaluation – An objective test of what changes have occurred, the scale of those changes and an assessment of the extent to which they can be attributed to the intervention. These evaluations can be useful to, for example, communicate the impact of interventions to interested stakeholders, understand when interventions work and for whom, and inform policy decisions on future interventions.

c. Value for money evaluation – An assessment of the benefits and costs (for example, were they in line with expectations, could they have been lower/higher) and a comparison of the 2. These evaluations can be useful to, for example, communicate whether interventions are a good use of resources which supports policy decisions on future interventions.

Why is evaluation important?

The 2 main purposes for carrying out an evaluation are:

a. Learning – to gain an understanding of what works, for whom and when, and generate evidence for future policymaking.

b. Accountability – the department is committed to being transparent with stakeholders on the effectiveness of our programmes.

The department has a wide ranging and pivotal role in the government’s agenda of levelling up the whole of the United Kingdom, ensuring that everyone across the country has the opportunity to flourish, regardless of their socio-economic background and where they live. Across this ambitious agenda, evaluation is necessary to generate learning and provide accountability.

Evaluation can also help all decision makers (for example, DLUHC, local authorities, Mayoral Combined Authorities etc.) to understand what works in different places from across the UK and why. Furthermore, evaluation can provide insights into how an intervention has been implemented and its impacts, who has been affected, how and why.

This evidence can help decision makers to make informed decisions on whether to launch, continue, expand, or stop an intervention and thus can help to ensure that we are investing public money wisely.

How do we decide what to evaluate?

DLUHC outlined in its evaluation strategy how it prioritises evaluation activity; those principles, summarised below, apply to local growth programmes too. We are mindful that given resource constraints we must take a proportionate approach and prioritise evaluation activity.

We consider several criteria to guide our decisions on evaluation priorities including, but not limited to:

  • outputs, reach, and impact of the policy (including equalities impacts where appropriate)
  • the extent of innovation / novelty inherent in the programme
  • costs, financial commitments and liabilities incurred by the policy
  • the profile of the policy and likely level of scrutiny
  • contribution to the evidence base and ability to fill key evidence gaps
  • feasibility and cost of evaluation activity
  • impact on delivery partners

Because of these guiding principles, the department is confident that its efforts are focused on priority evaluations of its most influential local growth policies and programmes, as well as those which are innovative or novel.

Who evaluates local growth programmes?

DLUHC will lead evaluation activity for local growth funds it delivers. This may include, depending on the evaluation objectives, developing insights on local places and particular intervention types.

As announced as part of reforms to simplify the funding landscape for local authorities, DLUHC have removed the requirement for places to conduct local-led evaluations in most situations. DLUHC recognises that mandated local-led evaluations have imposed disproportionate burdens on local government for the additional insight produced. Central-led evaluations have better data access and capability to conduct robust impact assessments, and so can deliver quality place-level insights within central department-led evaluations.

However, in some situations the department may ask places to complete their own evaluation where it is deemed proportionate and appropriate. Additionally, places may wish to complete their own evaluation for accountability purposes or to learn from the work. Evaluation can, for example, help to understand what worked well and less well in their delivery of the project, who was impacted and to what extent, and whether it was a good use of resources. The What Works Centre for Local Economic Growth provides a range of guidance, training and support for impact evaluation that local policymakers are encouraged to access.  

How will we disseminate evaluation findings?

The department is committed to being transparent with evaluation findings to support wider understanding on what works and what doesn’t work. In line with the Government Social Research Publication Protocol, the department is committed to publishing the main finalised reports of all its local growth evaluations on gov.uk. The department seeks to publish any associated documentation (trial protocols, completed interim reports, data collection measures, data sets etc.) following appropriate data protection procedures. DLUHC will design and implement a dissemination plan for each evaluation to support learning across organisations and stakeholders. This will include creating clear guidance on how we expect our evaluation reports to be presented.

The department jointly funds the What Works Centre for Local Economic Growth, who provide various resources and advice to help make local growth policy more cost-effective through better use of evidence and evaluation. DLUHC will work closely with the Centre to help disseminate the findings of local growth impact evaluations, and any lessons learnt in terms of evaluation practice to places.

3. Update on current evaluation activities

We are making significant progress in evaluating local growth programmes and have recently published several evaluation reports.

A repository of relevant evaluation documents for local growth funds is provided on the Local growth evaluation homepage.

The department also has many process, impact and value for money evaluations planned for several local growth funds, as displayed in Table 1. These evaluations will be delivered over numerous years depending on programme maturity. Process evaluations will typically be completed first as impacts, and by extension value for money evaluations, take time to materialise. For example, the Levelling Up Fund impact feasibility report concluded that it would not be advisable to complete impact evaluation until at least 2027/28. This is explained in further detail in Chapter 5.

Table 1: Planned evaluations of local growth programmes by DLUHC (sorted alphabetically)

Fund or programme | Process evaluation | Impact evaluation | Value for money evaluation | Comments
Community Ownership Fund | | | |
Enterprise Zones | | | |
European Regional Development Fund | | | |
Freeports | | | |
Getting Building Fund | | * | * | DLUHC is assessing the feasibility of completing impact and value for money evaluations
Investment Funds | | | | Impact evaluations are completed wherever it is possible and proportionate to do so, and are completed by places as part of Gateway Reviews
Investment Zones | | | |
Levelling Up Fund | | | |
Levelling Up Partnerships | * | * | * | DLUHC is currently developing evaluation plans
Local Growth Fund | | * | * | DLUHC is assessing the feasibility of completing impact and value for money evaluations
Pathfinders | | | |
Towns Fund | | | |
UK Shared Prosperity Fund | | | |

Key: ✓ = evaluation planned; * = may be undertaken depending on feasibility, proportionality and prioritisation

There will be other, similar evaluation activities being completed by other government departments for programmes they are delivering that may contribute to local growth. These are out of scope of this document.

Additionally, if you would like to understand what monitoring DLUHC conducts on different local growth interventions, the department publishes monitoring guidance in a central location on GOV.UK.

4. Different levels of evaluation for local growth programmes

The 3 main evaluation approaches (process, impact and value for money) can be conducted at multiple levels, feasibility permitting. These levels vary in granularity and coverage, answer different questions and provide evidence from different perspectives. The right level(s) of analysis that will be appropriate for evaluation will depend on the aims and objectives of the evaluation as well as the policy design. It may not be appropriate, proportionate or feasible to do all of these for every evaluation. Across the portfolio of local growth programmes, evaluation at different levels can provide holistic insight into what works and effectiveness for future policy development.

Programme-level

This is the overarching level of analysis and considers how the policy or fund operates as an entire programme. For example, a programme-level evaluation of the Levelling Up Fund can reveal whether the entire programme is achieving the desired impacts and whether it is value for money overall. Programme-level process evaluations will typically focus on how the programme is being delivered and implemented from a central perspective, for example: how has the overall design of the programme affected implementation?

Place-level

Place-level evaluation is focused on assessing a group of interventions in a geographical space (for example, a town or city centre). There may be many projects in one place that are likely to have similar outcomes over a similar timeframe and are geographically close enough to influence the same individuals or businesses. It can be useful to understand the interactions between many projects, for example on delivery decisions as well as economic and people-focused outcomes in a place.

Conducting a place-level evaluation requires defining a geographical space and accounting for the policies and, in some cases, other funds being delivered within those boundaries. Rather than considering projects and funds individually, place-level evaluation could take as a starting point any shared outcomes and impacts across the policies, and the contribution these interventions have on the net change in those outcomes.

For example, how has residents’ pride in their local area changed through various initiatives which collectively help regenerate their city centre? In the case of a process evaluation, it may consider, for example, how a local authority in receipt of the Levelling Up Fund and UK Shared Prosperity Fund is implementing policies intended to boost pride in that place.

Intervention theme-level

It is possible to consider an evaluation from the perspective of the types or themes of interventions being delivered. There are different ways of grouping interventions into themes. Themes may be based on, for example, projects that are the same intervention-type (for example, all active travel interventions) or interventions that are expected to target similar outcomes and mechanisms for achieving these. See Case Study 3 in Chapter 5 for an example based on the Levelling Up Fund.

Sometimes projects may straddle multiple themes; however, it is usually possible to identify discrete elements that fall under one theme. Evaluation at this level will cover multiple projects from different places that come under the selected theme. This can be an appropriate way to provide learning on the process, impact, and value for money of a particular intervention theme. Such evaluations may, for example, help build understanding of how interventions impact local growth.

Project-level

A project is a specific intervention being conducted in a place. This is distinct from intervention themes, which refer to a type of intervention more generally (for example, funding apprenticeships), whereas a project would be, for example, one local authority’s specific apprenticeship scheme. Project-level evaluations can be conducted to provide evidence of what is working in those contexts. Case studies are one example of a particular method that is typically done at the project level. Project-level evaluations can be targeted to fill specific evidence gaps that have been identified (for example, if there is a particularly novel way of delivering a service and there is an interest in understanding its scalability and impact).

Impact and value for money evaluations can be undertaken at the project-level, although these are dependent on the size of the project (is the sample of people, businesses or places large enough to make the analysis statistically robust?) and data availability (can the outcomes of interest be measured?). These are explained further in Chapter 5. Even though projects are delivered locally, project-level impact evaluations may be better conducted or commissioned centrally because of the evaluation capacity and capability required.

5. Impact evaluation approach

This chapter sets out how the department approaches completing impact evaluation, including a few case studies of evaluations that are underway.

Determining evaluation approach

The department is committed to undertaking robust impact evaluation of local growth funds. Our aim will be to create the best available evidence that meets the evaluation objectives in a proportionate manner.

The Magenta Book details different methods that can be used in evaluation. Depending on the design of the programme, impact evaluations can be conducted using experimental methods (methods that involve randomisation of treatment, for example randomised control trials), quasi-experimental designs (methods which lack randomisation of treatment but try to identify a similar group as a counterfactual, for example difference-in-difference), or theory-based approaches (methods which attempt to draw causal conclusions in the absence of a comparison group, for example process tracing). These methods can be combined to provide additional rigour.
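To illustrate one of these quasi-experimental methods, the sketch below shows a minimal difference-in-difference estimate in Python using statsmodels. It is illustrative only: the file, column names and intervention year are assumptions, not references to any DLUHC dataset.

    # Minimal difference-in-difference sketch (illustrative names throughout).
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical panel: one row per area per year, with a 0/1 'treated' flag.
    df = pd.read_csv("outcomes_panel.csv")
    df["post"] = (df["year"] >= 2025).astype(int)  # assumed intervention year

    # The coefficient on treated:post is the difference-in-difference estimate.
    # It is only credible if treated and comparison areas followed parallel
    # trends in the outcome before the intervention.
    model = smf.ols("outcome ~ treated * post", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["area"]}  # cluster by area
    )
    print(model.summary())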

Scoping and assessing the feasibility of using different evaluation methods to achieve evaluation objectives is an important early step. Ideally this is done to some extent during policy design and can be investigated in more detail later. Scoping and assessing feasibility at the start of developing an evaluation approach helps to establish what methods are suitable and proportionate for conducting robust evaluations before progressing down a certain route. This in turn helps ensure value for money for resources invested into evaluation. The department may commission external experts to conduct these studies, given the challenges of completing robust evaluation, and to provide an additional level of independence and impartiality. However, we may conduct these in-house where the required analytical and policy expertise is available.

A feasibility assessment will recommend the most appropriate methods to generate robust evidence of the impact of policy, and the earliest time possible to conduct an evaluation. To do this, it will take into account:

  • the aims of the policy (resulting in a Theory of Change that accurately reflects how the policy is being delivered)
  • the context in which it is being implemented (including overlapping policies)
  • the timeframe it is being delivered in, and
  • monitoring/management information places are delivering, and what secondary data sources can be used to supplement or complement these

A feasibility assessment may consider all the levels of evaluation possible (from programme through to project-level) or focus on certain levels that are appropriate given the evaluation objectives. A thorough study would assess what evidence can be generated and its rigour, and rule out methods that are not viable or are disproportionate. The feasibility assessment may also draw on evaluation approaches being applied to other funds, although it will ensure that the approach is contextualised to each fund.

Issues to consider when designing an impact evaluation approach

There are several issues that should be considered when designing an impact evaluation for local growth programmes. The relevance of these issues will vary depending on the specific context of what is being evaluated. Some of the common challenges and potential solutions to these are outlined below. The potential solutions outlined should not replace detailed planning and scoping for evaluations as individual context will alter what is relevant. This planning and scoping would need to be completed by evaluation experts.   

1. Design of a programme

The design of a programme is a key determinant in how an evaluation is conducted. Often there can be a conflict between objectives of a programme and making evaluation easier and more robust. For example, in many situations randomised control trial (RCT) approaches, which are considered gold standard evaluations, are difficult because of the ethics and practicalities of randomly allocating a place to receive or not receive funds, or to roll out or not roll out a particular policy programme or infrastructure project. Additionally, often the department seeks to empower local places to decide their own interventions, which rules out evaluators assigning an intervention to a local place.

Meeting this challenge: there are many methods that can be used to robustly conduct impact evaluation and the most appropriate method, determined through scoping and assessing feasibility, should be selected. It may be possible to work with places and partners to deliver interventions in a particular way to support robust evaluation. For example, DLUHC is working with a local authority to deliver a randomised control trial for the UK Shared Prosperity Fund, which would represent a first for the local growth space (see Case Study 1 for more information).

2. Establishing a counterfactual in experimental and quasi-experimental impact evaluation

Counterfactuals are important for impact evaluation to attribute cause and effect through understanding what would have happened in the absence of intervention. For evaluations which aim to compare places as the unit of analysis, identifying a counterfactual – that is, a group of places which are not receiving the intervention but which are suitably ‘similar’ to those which are – can be difficult when:

  • There are a finite number of places at the various geographies policies are delivered at (for example, upper-tier and lower-tier local authorities, mayoral combined authorities (MCAs)), which limits potential options for counterfactuals. This can mean it is not possible to get a large enough sample size for a robust counterfactual comparison. Meeting this challenge: evaluators often split larger geographies into a number of smaller ones to increase sample size. Alternatively, it may be possible to compare the people or businesses in intervention areas with those in non-intervention areas, again increasing the sample size.
  • Some policies are delivered at bespoke geographies (those which do not align to administrative boundaries or area classifications), such as Freeports and Investment Zones, which creates challenges with defining similar areas. Meeting this challenge: comparing against neighbouring areas or, in the case of difference-in-difference methodology, other areas where parallel trends in key variables can be demonstrated can help to overcome this. It may also be possible to build similar areas from smaller geographies.
  • Places are unique, differing in physical size, deprivation, population size, dominant industries and so on, so it is difficult to identify similar areas. Meeting this challenge: a robust method to identify ‘similar’ areas which stands up to external scrutiny should be used. The most appropriate approach would depend on, for example, evaluation methodology, data availability and policy design. This should form part of a feasibility assessment.
  • The fund is allocated to all places in the country, leaving no options at all for a counterfactual. Meeting this challenge: looking at a different focus for evaluation (for example, focusing not on places but on individuals or businesses) or more granular evaluation levels which do not require comparing all places (that is, project-level).
  • Similar places are more likely to be awarded the same fund because, for example, it is allocated based on need. This means places which are not receiving the intervention are likely to be systematically different from those which are, and therefore not suitable as a counterfactual. Meeting this challenge: it may be possible to, for example, compare places just above and below an allocation threshold (see the sketch below); to compare similar places which received funding at different times; or to compare similar places which received different amounts of funding.
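As a rough illustration of the threshold comparison mentioned in the final bullet, the sketch below compares places just above and below an assumed allocation cut-off (a simple sharp regression discontinuity design in Python). The threshold, bandwidth and column names are illustrative assumptions only.

    # Illustrative sharp regression discontinuity around a funding threshold.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("allocation_scores.csv")  # hypothetical: need_score, outcome
    THRESHOLD = 50.0  # assumed cut-off in the allocation methodology
    BANDWIDTH = 5.0   # only keep places close to the cut-off

    local = df[(df["need_score"] - THRESHOLD).abs() <= BANDWIDTH].copy()
    local["funded"] = (local["need_score"] >= THRESHOLD).astype(int)
    local["dist"] = local["need_score"] - THRESHOLD

    # Allow separate linear trends either side of the cut-off; the coefficient
    # on 'funded' estimates the local effect of receiving the allocation.
    model = smf.ols("outcome ~ funded + dist + funded:dist", data=local).fit()
    print(model.params["funded"])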

Case Study 1: UK Shared Prosperity Fund evaluation approach

The UK Shared Prosperity Fund (UKSPF) is a central pillar of the UK government’s ambitious Levelling Up agenda and a significant component of its support for places across the UK. It provides £2.6 billion of new funding for local investment by March 2025, with the primary goal of building pride in place and increasing life chances across the UK.

This fund was allocated to places with every part of the UK receiving an allocation for the years 2022-23, 2023-24 and 2024-25. The amount each place in the UK received was based on a methodology and principles established by DLUHC, which included considerations on, for example, the European funding places within the UK previously received and which areas are most in need. This mixed methodology for allocating funding rules out a potential quasi-experimental approach based on the intensity of funding provided to places, due to potential relationships with the outcomes sought from the programme.

The structure of funding (that is, that all places received funding, that delivery takes place over the same period and that allocations are linked to outcomes of interest) means that randomised control trials or quasi-experimental evaluation methods are not feasible at the programme-level for UKSPF. However, this does not mean robust impact evaluation is not possible.

To build a comprehensive picture about what works, UKSPF evaluation activity will take place across 3 levels: 

  • programme level - to assess the impact and value for money of the UKSPF overall, initially by trialling a regression-based approach to attempt to isolate the effects of the UKSPF from other local growth funds, places’ baseline economic contexts and wider confounding factors.
  • place level - to develop a detailed understanding of the UKSPF’s effectiveness across different types of place, considering their unique local characteristics and challenges, and focusing on interactions between stakeholders, local decision making, process efficiency and interactions with other local growth funds.
  • intervention and project level -
    • to assess the impacts of 10 specific intervention types across the UKSPF’s 3 investment priorities, and how well they have been delivered, using theory-based or quasi-experimental approaches with treatment and control groups.
    • to conduct a randomised control trial (RCT) in collaboration with a local authority as a first step in bringing experimental evaluation design to the local growth space. In delivering the RCT the department will, in addition to generating high-quality evidence for the participating local authority, build our understanding of how to deliver RCTs in such a complex local landscape, and of how to engage and upskill other local authorities ahead of rolling out RCTs more widely in potential future rounds of the UKSPF.

For more information, see the UKSPF Evaluation Strategy.

3. Isolating the impact of individual funds or projects

The UK government has delivered funding through area-based policies for decades. Broadly, these policies have similar goals of stimulating local growth through policy levers such as funding earmarked to regenerate residential, commercial or cultural areas within a community; tax reliefs intended to draw investors to an industrial area; or relaxed planning regulations intended to unlock barriers to regenerating high streets or speed up construction in housing zones. The beneficiaries of these policies tend to be places that have been left behind and where growth has stalled or stagnated. It is not uncommon for an area to receive funding through multiple DLUHC policies with similar aims.

In addition, several other government departments deliver policies that tackle the levelling up agenda in similar places. For example, alongside the transport and connectivity related aims of the Levelling Up Fund, Towns Fund and UKSPF, the Department for Transport (DfT) has the City Region Sustainable Transport Settlements (CRSTS). The first round, CRSTS1, invests in 8 city regions across England – all 8 also receive some combination of the DLUHC funds.

This densely populated funding landscape means it can be difficult to disambiguate which impacts can be causally attributed to each funded intervention (and therefore difficult to identify what is working and what isn’t). It is also relevant when trying to identify a suitable counterfactual: an area, business or individual that has not benefitted from one particular programme may still have been impacted by another.

This is relevant for evaluation at all the levels discussed above. For example, when evaluating a project care would be needed to understand any linkages with other projects.

Meeting this challenge: this would need to be carefully considered in the research design and may inform the type of methodology used or how it is implemented. It may be possible, for example, to account for other projects in the model specification (see the sketch below) or through the decision of what counterfactual is appropriate.
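A minimal sketch of what ‘accounting for other projects in the model specification’ could look like is given below, assuming (purely for illustration) a local authority level dataset recording exposure to several funds.

    # Illustrative regression controlling for overlapping programmes, so their
    # effects are not wrongly attributed to the fund being evaluated.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("la_funding.csv")  # hypothetical local authority dataset

    # luf_spend is the fund of interest; the other spend variables control for
    # overlapping programmes operating in the same places.
    model = smf.ols(
        "outcome ~ luf_spend + ukspf_spend + towns_fund_spend + crsts_spend",
        data=df,
    ).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors
    print(model.summary())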

Case Study 2: Different policies and programmes implemented in Merseyside and dealing with this challenge in LUF evaluation

Merseyside has several policies and programmes which can boost living standards, spread opportunity, restore local pride and empower local leaders, among other things. These include both DLUHC local growth policies and other DLUHC or government department policies and programmes. Examples of some of those operating in Merseyside at a similar point in time are outlined below, but this is not a complete list.

DLUHC local growth programmes:

  • £900 million investment fund over 30 years as part of the Combined Authority devolution deal
  • Liverpool Freeport offering tax and customs incentives
  • Liverpool City Region Investment Zone focused on life sciences
  • £27.8 million Future High Street Funding for Birkenhead and New Ferry
  • £25 million Town Deal funding for Birkenhead
  • Community Renewal Fund in the Liverpool City Region
  • Levelling Up Fund round 1 allocation for Liverpool (£20 million), Liverpool City Region (£35 million) and Wirral (£19.6 million)
  • Levelling Up Fund round 2 allocation for St Helens (£20 million) and Knowsley (£15 million)
  • £20 million Levelling Up Capital Projects funding for Sefton
  • £53 million UKSPF and adult numeracy programme Multiply allocation for Liverpool City Region

Other DLUHC or government department policies and programmes (non-exhaustive):

  • Education Investment Area for the Wirral and Liverpool
  • £710 million transport investment through the City Region Sustainable Transport Settlements in Liverpool City Region for schemes such as battery power for new Merseyrail trains
  • £19 million through the Strength in Places Fund for the Infection Innovation Consortium
  • Devolution deals with Liverpool City Region devolving new powers over transport, planning and skills

The Levelling Up Fund (LUF) is a £4.8 billion capital fund jointly delivered by DLUHC and the Department for Transport. The fund invests in infrastructure that improves everyday life across the UK, including regenerating town centres and high streets, upgrading local transport, and investing in cultural and heritage assets.

The presence of existing or historic publicly funded interventions could distort the findings of any impact evaluation approach and hence it is important for the methodology to account for these programmes. While an impact evaluation for LUF has not started, a scoping report suggested 2 potential approaches to deal with this issue:

  • using detailed data for each programme, compiled through evaluation activity of the other local growth funds, to control for these in the LUF impact evaluation
  • controlling for the effects of these different programmes using data compiled by DLUHC on spending and budgets at local authority level for different programmes. The department is working to provide access to quality-assured expenditure data from government departments at local authority level, and in some cases at neighbourhood level.

4. Hard-to-measure impacts and small effect sizes

Project impacts may be difficult to observe for several reasons:

  • The project is relatively small and hence its effects are likely to be small too. For example, about 40% of projects funded by the Coastal Community Fund were awarded less than £0.5 million in grant funding (see the evaluation report for more information). In these situations, some economic outcomes (for example, improved economic growth) may not be statistically detectable with an impact evaluation and hence quantitative impact evaluation would not be appropriate. There is, however, no rule for the size of intervention at which we would expect to be able to statistically observe an impact; it will likely vary based on several factors such as project type and outcome of interest (a rough power calculation is sketched below).
  • Data is too aggregated to identify the change in an outcome expected from an intervention. For example, if data is only available at a local authority level, then it might not be possible to observe the changes created by an intervention that is targeted at a small area or population within the local authority.

Meeting this challenge: this should be factored into the evaluation design, informing what analysis is most appropriate and how it should be completed. It may mean, for example, conducting primary data collection to observe changes, adopting theory-based evaluation methods or focusing impact evaluation on elements (for example, outcomes, intervention types, evaluation-levels) that are appropriate. More details on theory-based approaches are given in the Magenta Book.
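As a rough indication of why small effects can be statistically undetectable, the sketch below uses a standard power calculation to show how many treated and comparison units would be needed to detect a small standardised effect. All figures are illustrative.

    # Illustrative power calculation for a two-group comparison.
    from statsmodels.stats.power import TTestIndPower

    n_per_group = TTestIndPower().solve_power(
        effect_size=0.2,  # assumed small standardised effect (Cohen's d)
        alpha=0.05,       # 5% significance level
        power=0.8,        # 80% chance of detecting a real effect
    )
    print(f"Approximately {n_per_group:.0f} units needed per group")  # ~394

    # Many small projects affect far fewer people or places than this, which
    # is why quantitative impact evaluation may not be appropriate for them.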

In some situations, alternatives to robust impact evaluation – such as developing case studies and using monitoring data – may help in understanding what has changed where it is less feasible or proportionate to complete a robust impact evaluation. These can also be used as part of a holistic and robust impact evaluation. See ‘alternatives to robust quantitative impact evaluation’ below for more information.

5. There are limitations in data that can be used for evaluation

There are 2 broad categories of data available for evaluation and both have their strengths and limitations, which can impact evaluation activity. Data can be primary (that is, data generated by those that use it) or secondary (that is, data that already exists). Monitoring data may include some primary and secondary data.

Primary data can be designed to deliver data that meets the exact requirements for evaluation. However, this may involve substantial costs.

Evaluations can be conducted using secondary sources (for example, administrative or commercial datasets). Many secondary data sources are freely available and cover many topics related to, for example, economic activity, community life and education. However, not all secondary data may be appropriate given its characteristics; it may not have the required geographical granularity, for example. Data requirements for impact evaluation are discussed in more detail in Chapter 6.

Monitoring data can be a timely source that is specific to what has been funded. It can include important information on, for example, what has been funded, where funding has gone and when projects were completed. However, monitoring data does not provide any information about areas which are not funded. This limits the usefulness of some pieces of monitoring data for impact evaluation, such as outcomes data, where it is necessary to compare against a counterfactual to robustly understand outcomes and impacts. Additionally, monitoring data supplied is often of varying quality and consistency, given it is collected by many different partners. Given the large volume of data collected, it is difficult to ensure that data returns are validated and quality assured, or are consistent across returns.

Meeting this challenge: data availability is a key part of evaluation scoping and assessing feasibility. It is important to understand what characteristics data needs to complete different evaluation methods, and this will inform the approach. Primary data can address limitations and gaps in secondary data but can be costly to generate, and so it may not be proportionate to collect in many situations. The department is working with the Office for National Statistics to improve spatial data, which is explained in more detail in Chapter 6.

6. Managing high variability in funded interventions

Places can often be afforded a large degree of freedom in the type of projects they put forward for funding. This can make it difficult to compare projects as they may have divergent objectives. This is particularly a challenge where evaluation requires grouping projects together (for example, at programme or intervention level) to identify their average impact. Care would need to be taken to ensure the different projects are all expected to have shared outcomes, or else this may result in underestimating their impact. Additionally, different types of projects within a group could cause differently sized changes in outcomes, which is lost through aggregation. This is also a relevant challenge for project-level evaluation, where the findings are unlikely to be generalisable to other, different projects.

Meeting this challenge: this feature of many of the local growth funds demonstrates the importance of knowing the purpose of a given evaluation, as discussed in Chapter 2. For the purposes of central accountability, it might be useful to understand the average impact of a fund across places, even if that hides a large amount of variability. In this case, a programme-level impact evaluation might be most useful. On the other hand, the purpose may be to learn whether a particular type of intervention is effective, so that local decision makers know whether to fund it. In that case, impact evaluation at the intervention or project-level is needed. In each case, only impact evaluation can show whether the intervention has delivered the desired change.

Case Study 3: The Levelling Up Fund (LUF) approach to managing high variability in funded interventions

The LUF is a complex programme with many different types of capital investment projects. Taking transport investment projects as an example, these include:

  • link roads to facilitate downstream development (for example, a new junction on the A50 to unlock housing and employment land to support the growth of the Infinity Garden Village)
  • projects aiming to reduce congestion and journey times to accommodate economic growth (for example, improvements to the A38 Dunball roundabout)
  • a project aiming to re-open a suspension bridge to create a new visitor attraction in County Durham
  • refurbishments to Leicester Rail Station to improve the experience of visitors

These are a diverse group of projects and can be expected to produce different types of results and outcomes.

An impact evaluation can assess the effects of projects both in terms of (a) their impacts in bringing about the types of change articulated in the goals of the Levelling Up White Paper in the longer term and (b) their shorter-term intermediate results which are expected to lead on to these outcomes. As these latter aspects will vary across projects, the approach for LUF impact evaluation proposes classifying projects into categories that share similar causal mechanisms for bringing about the desired changes in local economic and social outcomes. For LUF, these categories were identified as:

1) Unlocking and enabling industrial, commercial, and residential development

2) Enhancing sub-regional and regional connectivity

3) Strengthening the visitor and local service economy

4) Improving the quality of life of residents

See LUF impact evaluation scoping study for more details.

7. Accounting for leakage, displacement and substitution

Evaluations of place-based interventions need to account for the fact that interventions may have impacts in areas where the fund/intervention/project wasn’t delivered. There may be positive and negative impacts that spill over into other areas, and interventions may displace activity elsewhere (for example, a business moves location to the treated area to take advantage of benefits from the intervention).

Meeting this challenge: these factors will need to be explicitly considered in an evaluation approach (for example, what areas are suitable to use as a counterfactual) and when analysing results (for example, the winners and losers from an intervention). The What Works Growth evidence reviews provide examples of studies where these factors have been taken into account. For example, a common approach is to compare outcomes in the treated area to those in nearby areas that are most likely to be affected by displacement; What Works Growth’s case study on the evaluation of the Local Enterprise Growth Initiative (LEGI) provides an example of this approach. In other cases, displacement might be implied. For example, What Works Growth’s rapid evidence review on rail investment highlights a study where the proportion of residents in employment increased following a new station opening, but as there were also changes in qualification levels, income levels and ethnicity, this is likely to reflect displacement of existing residents.
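One hedged illustration of the ‘nearby areas’ approach is sketched below: treated areas, a surrounding ring likely to be affected by displacement, and a clean comparison group are included in a single regression, so the direct effect and any displacement can be estimated side by side. The grouping variable and file are assumptions for illustration.

    # Illustrative regression separating the treated area, a nearby
    # 'displacement ring' and a clean comparison group.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("area_outcomes.csv")
    # 'group' is assumed to take the values 'treated', 'ring' or 'control'.
    model = smf.ols(
        "outcome_change ~ C(group, Treatment(reference='control'))",
        data=df,
    ).fit()

    # The 'treated' coefficient is the direct effect; a negative 'ring'
    # coefficient would suggest activity has been displaced from nearby areas.
    print(model.params)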

Case Study 4: The UK Freeports Programme evaluation and displacement

Freeports are special areas within the UK’s borders where different economic regulations apply. Freeports in England are centred around one or more air, rail, or seaport, but can extend up to 45km beyond the port(s), or further where there is a clear economic justification. The English Freeports model includes a comprehensive package of measures, comprising tax reliefs, customs, business rates retention, planning, regeneration, innovation, and trade and investment support.

A challenge for the impact evaluation will be to assess the additionality of impacts in the context of displacement. It will be especially important to understand effects of displacement of local economic activity from deprived areas. This will involve careful considerations around the geography of impacts, deadweight, and leakages.

To deal with displacement, the Freeports evaluation approach will:

1) assess the boundary of the impact to find areas which are likely to be positively affected by the creation of Freeports, but also identify areas which may be negatively affected. This is likely to be in line with the Freeports’ “travel to work areas”, which will be defined based on travel time and other economic factors. These areas will be separately included in the regressions to allow the positive or negative impacts on them to be quantified as part of the assessment of additionality of impacts.

2) use modelling exercises to analyse displacement. Results from the evaluation at a port level can be input into macroeconomic models to identify the wider economic impacts on the UK and local economies, controlling for displacement effects. This can provide a macroeconomic view and insights on additionality at a regional and UK level.

For more information, see the Freeports Evaluation Strategy

8. Capturing long-term impacts

Many policy impacts will take years to manifest to a mature enough stage to be measurable. This is the result of several factors, such as the time taken to complete an intervention (especially relevant for infrastructure projects), the time taken once an intervention has been delivered for changes to occur, and lags in data, which in many situations will be published a year or so after the period it refers to.

Measurable impacts typically occur after the funding delivery period has concluded and the department has moved on to delivering a new set of policies – often repeating elements of the previous delivery model. This means evidence on what works takes several years to materialise and compounds challenges with identifying a suitable counterfactual (for example, if future interventions happen in areas that were previously thought to represent a counterfactual for a different policy). This is relevant for evaluation at all the levels discussed in Chapter 4.

Meeting this challenge: in order to capture these impacts, evaluations should be planned for the long term. This may involve revisiting the approach at a later date if unforeseen changes occur that would negatively impact the robustness of results. It might also be possible to measure an intervention’s impact on intermediate outcomes which we can be confident are causally linked to the ultimate goal. For example, a skills intervention may take a long time to have an impact on local wages, but it may be possible to evaluate the impact on residents’ qualifications over a shorter period, and we know that certain types of qualifications are linked with higher wages. Additionally, there may be more timely data sources that can be used to give an indication of impact on the outcome of interest (for example, real-time spending information on credit and debit cards may be useful to understand the level of economic activity in an area before estimates of gross value added are available).

Additionally, part of planning evaluation for the long term will be ensuring there is appropriate resource available over the full duration that impacts will be delivered, a key recommendation of the National Audit Office.

Case Study 5: Time taken to observe impacts from interventions

The Towns Fund (TF) was announced by DLUHC in July 2019, with total funding of £3.6 billion. The TF comprises 2 funds:

  • Town Deals – aiming to drive the economic regeneration of towns to deliver long term economic and productivity growth. In September 2019, 101 towns in England were selected to develop Town Deals.
  • Future High Streets Fund – aiming to renew and reshape town centres and high streets. In December 2020, 72 places in England had been successful in funding applications.

Projects are likely to have a range of outcomes, some of which are likely to be shorter term and others longer term. The Towns Fund impact evaluation scoping study identified near term outcomes which included, for example, increases in footfall, changes in community engagement and reduced journey times, whereas longer term outcomes included, for example, increased household income, increased productivity and increased civic participation.

Projects funded to improve local growth can take time for their full impacts to be visible. This is especially the case for capital infrastructure projects such as those that have been funded through the TF.

For example, the local economic impacts of land redevelopment projects will largely not be visible until (a) developer agreements are in place, (b) planning applications have been approved, (c) construction of the new development is complete, and (d) new units are in productive use (for example, have been leased to tenants). Even after this point there may be delays in observing impacts due to data lags.

For these reasons, full impacts are expected to take some years to materialise. However, shorter term outcomes can be expected sooner. The TF impact scoping study concluded that some short and medium-term impacts would be expected to be realised by 2026. These will therefore be the focus of an impact evaluation at that time.

Alternatives to robust quantitative impact evaluation

While the department is committed to conducting robust impact evaluation for local growth programmes, sometimes it is not proportionate or possible to meet evaluation objectives through quantitative methods. In these situations, it might be appropriate to use alternative methods.

Theory-based evaluation methods can provide robust qualitative evidence and should be considered when designing an impact evaluation approach. These can demonstrate evidence that policies are leading to desired impacts, but without quantifying the size of these impacts. More details on theory-based approaches are given in the Magenta Book.

Additionally, producing case studies and making use of monitoring data/management information provided by places can be used as part of impact evaluation. In-depth case studies would involve following policy development, implementation and delivery of results at the project-level. Case studies typically rely on qualitative evidence such as interviews, focus groups and surveys with policy beneficiaries, as well as quantitative monitoring data if confidence in its quality is high. This approach can demonstrate some links between impacts and inputs, without assigning a quantitative value or saying how big the impact is.

Combining various methods together, for example theory-based evaluation, case studies and process evaluation, can provide a broader and more holistic overview of how impacts are being achieved, as well as showing how and why a policy is working.

Value for money evaluation

The issues outlined in this chapter are also relevant for value for money evaluations, as these rely on impacts being causally attributed to an intervention. Quantifying or monetising observed impacts or benefits is a disproportionate use of resource if they cannot be said to have resulted from the policy itself. And even if impacts can be causally attributed to an intervention, it may still not be possible to monetise them.

It may be possible to forecast or model a value for money calculation (similar to the benefit-cost ratio used in appraisal of options before a decision is made), but this does not accurately reflect the value a policy has actually returned.

6. Evaluation data

This section provides an overview of data used for local growth evaluation – what the types of data are, how data relevance varies by different types of evaluation and how we collect and handle data.

Overview on the data required by DLUHC

To allow DLUHC to understand what works, what doesn’t work and why, we will require and collect data to be used in the analysis. Data requirements will vary depending on the objectives of the programme, the type of evaluation (for example, process, impact or value for money) and the research question being analysed. However, the broad types of data that DLUHC will use are outlined in Table 2.

Table 2: High-level taxonomy of local growth programme data sources

Data type | Spatial scale | Description
Beneficiary identifiers | Individuals / businesses | Identifiable information (for example, national insurance numbers and company registration numbers) concerning individuals and businesses who benefit from a given intervention. Beneficiary data can be used to link to other datasets (for example, earnings and tax records of individuals held by HMRC and DWP) to create a richer dataset.
Monitoring | Intervention | Detailed information collected from every grant recipient (for example, local authority) through the performance reporting cycle. This information can include data on project characteristics (for example, location), spend, project delivery, outputs, outcomes and risk.
Open and public data | Place | Office for National Statistics (ONS), UK government and other open data sources can be used to provide information about a place such as local demography, economic activity, labour markets and detail of other funding (current and historic) received by places. For example, the ONS has a Business Structure Database which includes data on the number of people employed by businesses and their turnover.
Administrative data | Place and individual | Administrative data sources are created when people interact with public services such as hospitals, the benefits system or tax services, and are collated by the government and anonymised. These can be useful data sources to understand individuals and people within a place. Examples include data held by HMRC through tax information. These would require data sharing agreements to access.
Commercial data | Place, individual and business | Professional data providers collect and supply data on different aspects of a place, individual or business. Topics covered may include business composition, movement of residents and measures of economic activity.
Primary data | Individual and place | Where data does not already exist, we may conduct new primary data collection activity (for example, surveys, interviews, focus groups). This may take the form of qualitative or quantitative evidence. This data can provide information at an individual and place level when aggregated. For example, we are running a pride in place survey for UKSPF.

What data will be relevant for different types of evaluation?

The paragraphs below provide some examples of the types of data that may be useful for different types of evaluation. In all situations, data needs to be of sufficient quality to support robust evaluation.

Process evaluation

Process evaluations will typically draw on regular monitoring data provided by places. This may be supplemented with administrative datasets to provide a quantitative overview of the performance of each place against their key indicators and metrics. Primary data, such as interviews and focus groups with place leads and project managers, will provide qualitative insight to expand on quantitative data. These qualitative research activities will deliver on-the-ground, operational insight into what worked well during the delivery and implementation, and what did not work as intended.

Impact evaluation

Impact evaluation may use all the different types of data outlined in Table 2. These are complementary sources of data that can help provide a comprehensive picture of the impact of a programme. Which sources are used will depend on the impacts expected to occur from a programme and considerations of proportionality. For example, if the costs of purchasing commercial data or conducting a new survey are not proportionate to the intervention to be evaluated, these data types will not be used. Similarly, DLUHC will only collect beneficiary identifiers when it is possible to identify beneficiaries and where this data can demonstrably add value to evaluation activity.

Across local growth programmes, quantitative data required for impact evaluation will likely need to satisfy the criteria outlined below.

  • Data at the right spatial level: many interventions are expected to benefit relatively small areas and neighbourhoods. Unless interventions are large, the impacts of the programmes are likely to only be visible at a small area level (for example, Lower Super Output Area (LSOA) and below), and the focus will need to be on sources of evidence with the desired level of spatial granularity. This does not necessarily imply needing a measure of the outcome for the LSOA when completing programme or intervention-level evaluation. Instead, it may be possible to construct sufficiently large samples of individuals or firms located in proximity to projects across a few or many areas using microdata with spatial identifiers.
  • Longitudinal data: it will be necessary to determine how the outcomes of interest have changed in the areas of interest following the completion of projects. This implies a requirement for before/after measures (at the minimum). In addition, some methods (for example, difference-in-difference) rely on data series prior to an intervention to identify a counterfactual.
  • Available for treated and control groups: to enable robust impact evaluation the same data needs to be available for both treated (that is, those impacted by the fund or programme) and counterfactual groups to be able to compare changes in outcomes. Data on characteristics is also likely to be needed for both treated and control groups to enable identification of a suitable counterfactual.

The case study below provides examples of data sources that were identified as potentially being suitable for the impact evaluation of the Levelling Up Fund.

Case Study 6: Potential data sources identified for an impact evaluation of Levelling up Fund (LUF)

An impact evaluation feasibility report for the LUF concluded that it would be possible to evaluate the programme using secondary data sources. Several different data sources were identified as potentially useful for measuring the programme's impacts.

For more details, see the LUF impact evaluation scoping report.

Value for money evaluation

Value for money evaluation will rely on several data sources as it builds on impact evaluation. It is concerned with the valuation of outcomes and impacts relative to costs. The main data requirement, beyond those of impact evaluation, is the need to consider outcomes, impacts and costs in a comparable way, typically through monetisation. Outcomes and impacts identified through impact evaluation could be monetised using values from the Green Book and its supplementary guidance. Cost data can be obtained from finance monitoring information.
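As a simplified illustration of that comparative step, the sketch below monetises a stream of invented benefits, discounts benefits and costs to present values, and reports a benefit-cost ratio. All cash flows are made up; only the 3.5% rate reflects the Green Book's standard social time preference rate.

```python
# Illustrative benefit-cost ratio (BCR) calculation. All cash flows are
# invented; a real value for money evaluation would draw monetised values
# from the Green Book and costs from finance monitoring information.

DISCOUNT_RATE = 0.035  # Green Book standard social time preference rate

# Hypothetical monetised benefits and costs per year (GBP), year 0 first.
benefits = [0, 200_000, 350_000, 400_000]
costs = [500_000, 100_000, 50_000, 50_000]

def present_value(flows, rate=DISCOUNT_RATE):
    """Discount a list of annual cash flows back to year 0."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(flows))

bcr = present_value(benefits) / present_value(costs)
print(f"BCR = {bcr:.2f}")  # values above 1 indicate benefits exceed costs
```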

Primary data collection and improving data

To understand the differences between places, consistent data is required. There are many secondary data sources which can provide this information, such as those published by the ONS. DLUHC will always seek to use existing data where it is available and meets our needs. This will benefit places, grant recipients and beneficiaries as it means we will be requesting or collecting less data directly from these groups, and will only do so where we have no suitable alternatives. In some circumstances this may mean DLUHC has to establish data sharing agreements with data owners to facilitate the sharing of data.

However, in some circumstances existing data does not meet our needs. DLUHC is committed to improving spatial data and has established the Spatial Data Unit (SDU) to transform the use of data to inform place-based decision-making across central and local government. The SDU is making more data available at a granular level to support levelling-up policy and delivery by national and local partners. This includes leading the subnational expenditure project, which aims to provide government departments and devolved administrations with access to quality-assured in-year expenditure data at local authority level, and in some cases down to neighbourhood level. In November 2023 the department published DLUHC enabled spend as official statistics in development, which provided estimates of both direct DLUHC, Homes England and Planning Inspectorate spend, and funding delegated to local authorities, at a granular and standard level of geography. These are new statistics designed to improve the collective understanding of how DLUHC spends money across the UK.

In addition to this, the SDU is also working in partnership with the Office for National Statistics (ONS) to strengthen local statistics through the transformation of economic and social indicators. Major published outputs from the SDU-ONS partnership include neighbourhood (‘Lower Super Output Area’, or LSOA) time series estimates of Gross Value Added (GVA), providing the most detailed data available to date on local economies.

These improvements in spatial data will enhance evaluation activity. For example, small area GVA estimates can help us understand the changes local growth programmes bring about in economic activity in a local place.

DLUHC may, where proportionate and appropriate, conduct new survey activity for evaluation. This can be useful for filling gaps in existing data. For example, DLUHC is running a pride in place and life chances survey as part of the UKSPF evaluation.

Case study 7: Surveys supporting UKSPF evaluation

To support the UKSPF evaluation, DLUHC is introducing a comprehensive survey package aimed at understanding the impact of UKSPF - both in general and with reference to specific interventions - on people’s views on pride in place and life chances across the UK and in specific places.

The package consists of 3 components:

1. Community Life Survey (CLS) sample boost:

DLUHC has boosted the CLS sample for years 2023/24 and 2024/25 to produce representative results on pride in place and life chances at lower-tier local authority level in England.

The goal is to achieve an annual sample size of 175,000 (in England), with a minimum of 500 respondents in each of the 309 lower-tier/unitary local authorities in England (with the possible exception of the Isles of Scilly).
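A quick arithmetic check (illustrative only, not part of the published survey design) shows how much of the target the per-authority minimum accounts for:

```python
# Check the headroom the 175,000 target leaves above the per-authority floor.
# Figures are taken from the text above; the calculation itself is only an
# illustration, not DLUHC's sample allocation method.
target = 175_000      # annual England sample target
authorities = 309     # lower-tier/unitary local authorities
floor = 500           # minimum respondents per authority

floor_total = authorities * floor
print(f"Implied by the floor alone: {floor_total:,}")           # 154,500
print(f"Available above the floor:  {target - floor_total:,}")  # 20,500
```

In other words, the 500-respondent floor accounts for roughly 154,500 interviews a year, leaving around 20,500 to be allocated above the minimum.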

2. Your Community, Your Say survey:

DLUHC will conduct a series of UK-wide surveys over 2 sweeps to capture local perspectives on the UKSPF, the interventions it is supporting, its delivery in places and its specific impacts on pride in place and life chances. This will be achieved through 2 variants:

  • bespoke surveys for place-level evaluation, comprising deep dives in about 20 places
  • bespoke surveys for intervention-level evaluation to contribute to the impact evaluation of about 10 interventions

3. Local survey tool for Lead Local Authorities (LLAs):

A flexible tool is being developed to enable LLAs and local stakeholders to run surveys and collect data on pride in place and life chances themselves, to support local evaluations where planned and to enable comparison with other geographies. The use of the tool is optional and LLAs will not be expected to use it to collect data for the overall UKSPF evaluation.

How will data be collected from places and grant recipients?

To minimise burdens on grant recipients across evaluation activity, DLUHC will try to use existing data wherever possible. However, in some cases, it will be necessary for grant recipients to assist in gathering additional information. Data will be collected via 2 routes:

1. DLUHC may commission places to provide data during the programme's life. One specific example is the information DLUHC requests through regular reporting requirements. DLUHC may also make ad-hoc requests for additional data and information not contained within reporting requirements. These will be clearly communicated to places in advance of need.

2. DLUHC may work with contractors to deliver evaluation activity. These contractors may collect data directly from places or beneficiaries.

Outside of regular reporting, places are not expected to start collecting data for any part of the evaluation until specifically asked to do so by DLUHC or our evaluation partners. However, places may benefit from conducting their own data collections to support their evaluations and build their own understanding of project delivery.

Data protection and permissions

It is important to obtain the required permissions to collect and use data from those who participate in evaluation activity (for example, interviews, surveys etc.). All data sources used to support local growth programme evaluation activity - including personal data where appropriate - will be collected and processed in full compliance with data protection legislation as set out in the Data Protection Act and the General Data Protection Regulation. DLUHC will carry out a data protection assessment for local growth evaluations and will be responsible for establishing data protection agreements where needed to facilitate the collection, sharing and processing of data with DLUHC, other government departments and contractors. Additionally, we will consider the ethics of using data in social research, as set out in the principles of the Government Social Research Professional Guidance.

Where appropriate and in line with data protection guidelines, DLUHC will seek to publish aggregated data and analysis underpinning the main evaluation reports to allow for additional examination and investigation by external experts and stakeholders.

7. Evaluation governance

Effective governance is important for ensuring evaluation activity is efficient. DLUHC will consider the most appropriate forms of evaluation governance for different local growth funds and programmes. This may differ depending on, for example, proportionality and what is in scope of the fund. Two common evaluation governance bodies used for local growth funds are:

  • Evaluation Steering Group – a group that directs evaluation activity. It may include key stakeholders such as DLUHC senior analysts and programme officials, representatives from other government departments where the programme involves substantial elements of those departments' policy areas, the Evaluation Task Force for high-priority evaluations, and external advisors to DLUHC (for example, the What Works Centre for Local Economic Growth). This group feeds into the boards which oversee the delivery of the programme or the local growth portfolio of programmes.
  • Technical Advisory Group – a technical group that scrutinises and provides constructive challenge to ensure evaluation activity meets quality expectations. It can also be used to generate ideas for the best way to evaluate a policy, and it provides advice to the Evaluation Steering Group. Membership may include stakeholders with relevant technical skills (for example, analysts and delivery officials) from DLUHC, other relevant government departments and external experts.

DLUHC will seek to establish a cross-government local growth evaluation group to help share learning and provide oversight of the portfolio of evaluation activity being completed.

8. Next steps for DLUHC’s local growth evaluation strategy

As set out in Chapter 3, DLUHC is delivering local growth evaluation activity across several areas, which will take years to complete. DLUHC will continue to deliver this work and disseminate the findings accordingly. DLUHC will also undertake new evaluation activity, in line with the prioritisation principles outlined in Chapter 4. To support decisions on where to prioritise future evaluation activity, DLUHC will continue to refine and improve its strategy for the evaluation of local growth funding and programmes (for example, by identifying the evidence gaps where evaluation activity would add most value). The department will aim to update this document with our latest thinking periodically where there is substantial change.

DLUHC will always seek to work in collaboration with key stakeholders (for example, other government departments and external advisors) to ensure our approach to evaluation is robust and meets key objectives.