UK Space Agency Evaluation Strategy
Published 22 August 2024
1. Foreword
We are responsible for unlocking the powerful and exciting benefits of space for UK citizens. This ranges from scientific exploration to economic growth, and inspiring people to pursue space careers. In a nutshell – we boost UK prosperity, understand the Universe, and protect our planet and outer space.
Evaluation must be at the heart of our work if we are to succeed. We are committed to undertaking relevant, high-quality evaluation which helps us learn how to build a better future and deliver the best outcomes for UK citizens.
In this updated Evaluation Strategy, we outline our vision for evaluating our impact and the way we support the UK space sector. We recognise the inherent methodological challenges in the area and are eager to find solutions; for example, the long lags often experienced between our interventions and the full realisation of benefits.
In the interests of transparency and the spirit of learning we will be publishing details of our evaluations through the Evaluation Taskforce Registry - a single location for all HMG evaluation, with the aim of making evaluations easier to find, more accessible and more impactful.
Finally, we invite anyone with a mutual interest in the UK space sector to work with us – we want to share our learning and learn from others.
Annelies Look and Chris White-Horne
Deputy Chief Executives & Chief Delivery Officers (job share)
2. Introduction
2.1 About us
The UK Space Agency is an arm’s length body and executive agency of the Department for Science, Innovation and Technology (DSIT). We, together with other partners, including UK Space Command, are responsible for delivering the National Space Strategy (NSS) on behalf of DSIT. We boost UK prosperity, understand the Universe, and protect our planet and outer space. We power the UK space sector by catalysing investment, delivering space capabilities and missions, and championing space. ‘We’ throughout this Strategy refers to the UK Space Agency.
We have developed this Evaluation Strategy in collaboration with DSIT to ensure our evaluation of delivering the NSS meets their needs and to help avoid duplication.
Our intended audience for this Strategy is anyone interested in the UK space sector and how we measure the impact of government intervention in it. This includes, but is not limited to:
- those working in the UK space sector, and related areas (eg, UK Research and Innovation)
- policy makers
- international space agencies/organisations
- evaluators, who might bid for future work
2.2 Why evaluation matters
We believe evaluation is critical to understanding our impact, deciding how to allocate our resources and delivering value for money for the taxpayer. The evidence base for understanding which space interventions work, why, and how is still emerging. Space is a rapidly growing and evolving sector with new technologies, new capabilities and new markets. For example, the first Active Debris Removal (ADR) mission has yet to happen (due in 2025), so there is no evidence base on what works. Research, development and innovation (RDI), which is central to the space sector and something we invest in, is challenging to evaluate. There are necessarily long timeframes (sometimes decades) before the benefits and impacts of this work can be realised.
In recent years we have substantially increased our commitment to evaluation but recognise that there is more that we can and should do to build the evidence base. We believe that now, as we undergo a process of Transformation as an Agency (see section 4.1) and approach a Comprehensive Spending Review, is the right time to take stock, learn from our experience and outline how we will improve our approach to evaluation.
2.3 Collaboration
We recognise that others working across the UK government and the space sector, both domestically and internationally, face similar challenges. We outline how we would like to work with others later in this report (sections 4.2 and 4.4). We welcome challenge and want to share our learning.
2.4 Our commitment
We intend to publish full updates of this strategy in line with spending review cycles, determined by central government. In the interim, we plan to publish annual progress updates on high-level findings of our portfolio of evaluation and update the Evaluation Taskforce (ETF) Evaluation Registry with details of our published evaluations.
The rest of this document outlines our vision and objectives for evaluation, before detailing how we will achieve our ambitions.
3. Vision and objectives
3.1 Our vision for evaluation
Our vision is for evaluation findings to meaningfully inform programme design and spending decisions, primarily for space but also for others working in related areas, like research, development and innovation (RDI). Beyond the UK Space Agency, we want our portfolio of evaluation to help inform strategic thinking at DSIT, including for fiscal events.
To achieve this, it is vital that we have access to a strong body of evidence about what works so that we can increase opportunities for our interventions to maximise impact and value for money for the taxpayer. Evaluation of our programmes forms a large part of this body of evidence.
In line with the Magenta Book and the Green Book, the UK Space Agency seeks to produce as robust a body of evidence as possible through:
- embedding evaluation planning in new interventions from their inception to maximise the potential for meaningful evaluation
- disseminating and using findings in future programme development (both formative and summative) so that we can make the best decisions about where and how to deliver, closing the feedback loop of the policy making cycle
We have identified four broad types of challenge to doing space sector evaluation, as outlined in sections 3.2 to 3.5 below.
3.2 Time lag between programme delivery and impacts being realised
Mid-Technology Readiness Level (TRL)
The UK Space Agency funds many mid-TRL projects (for example, TRL 4 to 6). As such, our evaluation activity typically ends before products are brought to market, which is when impacts are most pronounced. While this varies by the type of product or technology, impacts are sometimes not realised until 10 to 20 years after initial funding. For example, instruments for science missions can take decades to return data.
Research Development and Innovation (RDI)
In a similar vein, the UK Space Agency invests in RDI, whose impact is difficult to measure; RDI can lead to new technologies, innovations, solutions and spillovers, including outside the space sector, but these can emerge over a long period and there is uncertainty about when benefits will be realised.
Catalysing investment
Measuring the impact of UK Space Agency programmes in catalysing investment is also difficult, due to long time lags between making investments and seeing the impact of new technologies. Attribution and additionality are challenges here too: what can we attribute to UK Space Agency activity, and what is the added value of the UK Space Agency?
UK government and ESA (European Space Agency) spending cycles
Government spending cycles, determined by spending reviews, are much shorter than the time it takes for impacts to be achieved. It is difficult to make long-term commitments to funding evaluation beyond government spending cycles, as is needed to understand the extent of impact.
The UK Space Agency spends a high proportion of its overall budget through its ESA contributions. However, ESA spending cycles do not generally align with UK government spending cycles. This makes it difficult to ensure that evaluation findings are available to inform decisions at the right time.
3.3 Identifying a counterfactual
UK Space Agency programmes often involve small sample sizes. Small sample sizes make it difficult to detect impact using a counterfactual as we lack statistical power.
Small sample sizes are partly due to the diversity of the sector, and the fact that it is made up of wide-ranging types of firms, organisations and individuals, which also makes it difficult to identify a counterfactual due to small pools for comparator groups.
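To illustrate the scale of this problem, the sketch below is a minimal, purely hypothetical illustration (the group sizes, effect size and normal-approximation formula are illustrative assumptions, not part of our methodology) of how the power to detect even a fairly large effect collapses at the group sizes typical of our programmes.

```python
from math import sqrt
from scipy.stats import norm

def two_sample_power(effect_size: float, n_per_group: int, alpha: float = 0.05) -> float:
    """Approximate power of a two-sided, two-sample comparison of means.

    effect_size is the standardised difference (Cohen's d) between funded
    and comparison groups; a normal approximation is used for simplicity.
    """
    z_crit = norm.ppf(1 - alpha / 2)                  # critical value of the test
    noncentrality = effect_size * sqrt(n_per_group / 2)
    return 1 - norm.cdf(z_crit - noncentrality)

# With 15 firms per group, even a fairly large effect (d = 0.5) is
# detected barely a quarter of the time; with 100 per group it usually is.
print(f"n=15:  power = {two_sample_power(0.5, 15):.2f}")   # ~0.28
print(f"n=100: power = {two_sample_power(0.5, 100):.2f}")  # ~0.94
```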
Our interventions are often applied to complex and emergent systems, which make identification of a meaningful counterfactual challenging. For example, a programme designed to unlock the benefits of space for business operates in a new area, with multiple actors, creating a complex environment.
3.4 Data challenges
There is a dearth of relevant administrative/other data in the space sector that we can use for counterfactual analysis, for example:
- it is usually not possible to form counterfactuals from space sector survey data (eg, by constructing synthetic control groups), because such data are neither representative of the sector nor granular enough for this purpose
- it is difficult to identify space companies in ONS data; some companies may have diversified into space but do not present as such in ONS data if the parent company is categorised differently (eg, aerospace); there is no distinct Standard Industrial Classification (SIC) code for space
- in addition, some organisations might fit our definition of the space sector but do not perceive themselves as space organisations, and so it is difficult to collect data on them
3.5 Difficulty quantifying outcomes and complexity
Nature of outcomes
While some UK Space Agency programmes seek to achieve tangible outcomes, which are easier to measure (eg, Launch), others are more abstract and harder to measure.
The International Bilateral Fund (IBF) programme, for example, seeks to increase the UK’s credibility as a meaningful actor on the world stage, which is hard to quantify. This makes it difficult to monetise benefits, presenting challenges for value for money evaluations and cost-benefit analysis.
Complexity
It is sometimes difficult to identify the impact and additionality of UK Space Agency programmes and projects. The UK Space Agency can be one of many investors and supporters, and it can be difficult to trace the UK Space Agency’s investment into the resulting value of a product, service or other activity.
The UK Space Agency might provide funding or another intervention very early in a much longer chain of events. For example, an initiative under our Inspiration programme might be designed to encourage a young person to take up a STEM subject, but it would be challenging for us to follow up with this young person decades later and, even if we could, to disentangle what role our intervention played relative to other factors.
3.6 Objectives
We will deliver our vision, and address the challenges, through five objectives:
- long-term evaluation: commit to evaluation beyond government spending cycles, through robust evaluation governance, to ensure meaningful learning
- standard of evidence: improve the standard of evidence through embedding evaluation in programme design, increasing the likelihood of more experimental or quasi-experimental designs, or meaningful theory-based evaluation where this is not possible[footnote 1]
- partnership working: collaborate with colleagues across government and beyond to ensure that we are learning and share our knowledge. We want to facilitate knowledge sharing between experts and evaluators
- commercial approach: clearly communicate our requirements with the market and understand what makes for attractive opportunities so that we attract the best bids when commissioning evaluations
- dissemination and learning: publish our evaluation findings, communicate them effectively, learn from them and demonstrate accountability for how we spend public money. In turn, increase the credibility of the Agency, and better satisfy the requirements of the National Audit Office (NAO) and HM Treasury (HMT), amongst others we are accountable to
The next section outlines how we will overcome the challenges, deliver our objectives and achieve our vision.
4. How we will deliver
4.1 Integrated Transformation Programme
We will work with colleagues delivering the UK Space Agency’s Integrated Transformation Programme (ITP) to help embed the right culture, governance structures and processes to achieve our vision. Transformation is one of the UK Space Agency’s priorities. The aim is to transform the UK Space Agency into a delivery-focused organisation that puts its people first.
4.2 Culture of celebrating evaluation
Creating a culture where evaluation is prioritised and celebrated will be central to achieving our vision. To help embed and promote the right culture we intend to run the following initiatives:
- space sector-specific evaluation training for staff
- evaluation surgeries for programme teams, run by members of the evidence team
- an evaluation community, within and beyond the UK Space Agency, for those interested in understanding and advancing evaluation of the space sector
4.3 Governance and processes
We need to get the right governance and processes in place to implement a robust portfolio of evaluation. We will be developing an internal implementation plan, providing more detail on the below.
Evaluation governance – panel of experts
We will convene a panel of evaluation experts from across government (up to three members) who will meet up to twice a year. This panel will review our portfolio of evaluations, providing scrutiny of our performance against our objectives, highlighting areas for improvement and making recommendations. The Evidence Team at the UK Space Agency will provide the secretariat function and report to senior colleagues at the UK Space Agency.
Business cases and project management
Business cases in government provide the evidence base in support of a spending proposal. They facilitate transparency, approval, subsequent evaluation, accountability for public funds and optimisation of public expenditure. We will be designing evaluations at an overarching level, rather than for individual programmes or projects.
More generally, we will develop comprehensive project management and evaluation tools and processes to promote a consistent and efficient approach across evaluations and throughout the lifecycle of evaluation, from programme inception and evaluation design, through to evaluation learning.
Working in this way will help us to streamline our resources and help ensure we are able to provide comprehensive coverage.
Clarity of roles and engagement points
We will ensure that roles and responsibilities of those working on evaluation at the UK Space Agency are clearly delineated, to make sure we are getting the most from our people. This includes, but is not limited to, the UK Space Agency’s Evidence Team, Programme Teams and Commercial Team.
Quality assurance
Developing robust quality assurance processes for the full evaluation cycle, from design and inception through to analysis and reporting will help us to improve the quality and consistency of our evaluations. Amongst other measures, this will include clear sign-off processes and the use of peer reviewers.
Procurement
To attract the best bids, we will maximise the overall contract value of tenders through commissioning contracts at an overarching level (as discussed above). We will emphasise our requirement for evaluation expertise above all else.
Publication
We plan to publish annual progress updates on high-level findings of our portfolio of evaluation. We will publish individual evaluations on GOV.UK and update the Evaluation Taskforce (ETF) Evaluation Registry (a central registry of all government evaluation) with details of our published evaluations when it goes live.
4.4 Partnership working
We recognise that there are others across government and in its arm’s length bodies who are working in related fields and experience similar challenges. We want to both learn from others and contribute to this community.
We will work closely with the Department for Science, Innovation and Technology, as our sponsoring department, to ensure that our portfolio of evaluation meets their needs. We will work with DSIT to tackle methodological challenges in evaluations and disseminate findings to enhance evidence-based programme design and decision making.
We are in conversation with UK Research and Innovation (UKRI), including Innovate UK, about their approach to monitoring and evaluation; we intend to keep these conversations going.
A list of partners that we have agreed to work with is provided below.
- Department for Science, Innovation and Technology (DSIT)
- UK Research and Innovation (UKRI)
- Innovate UK
- Department for Energy Security and Net Zero (DESNZ)
- International space agencies
- Department for Transport (DfT)
- Met Office
- Department for Business and Trade (DBT)
This list is not exhaustive – we intend to work with many more organisations. If you think we should be speaking to you about space sector evaluation or have feedback about our evaluations, please contact evaluation@ukspaceagency.gov.uk.
4.5 Proportionality
We intend to conduct higher-level, more overarching evaluations. Within areas we will be proportionate in how we allocate resources. We will be guided by budget, strategic priority/profile, and risk and uncertainty (as summarised in the table below), with the most resources being invested in Level 3 programmes. Pilots, which could inform the direction and shape of future programmes (whether they continue, and at what scale), will also be a factor in prioritisation. An illustrative sketch of this prioritisation logic follows the table notes.
Table showing risk and uncertainty against budget and profile
| Risk and uncertainty | Budget and profile (low) | Budget and profile (medium) | Budget and profile (high) |
| --- | --- | --- | --- |
| Low | Level 1 | Level 2 | Level 2 |
| Medium | Level 2 | Level 2 | Level 3 |
| High | Level 2 | Level 3 | Level 3 |
Table notes
Description of levels:
- level 1: light-touch, could be internal evaluation, emphasis on monitoring
- level 2: consider commissioning externally, with appropriate budget allocation, especially if it helps the overall evaluation of the relevant UK Space Agency priority area
- level 3: comprehensive, externally commissioned evaluation with appropriate budget
Budget thresholds:
- low: <£5 million
- medium: £5 million to £10 million
- high: £10 million and over
Budget and profile level descriptions:
- high: large programme with significant budget, and/or high profile with media and public interest, and potentially high impact
- medium: medium-sized programme with moderate budget, and/or some media and public interest, expected to have a sizeable impact
- low: small budget and/or limited public or media interest, with relatively low impact
Risk and uncertainty descriptions:
- high: complex programme design, and/or significant risk and uncertainty around programme outcomes
- medium: programme not especially complex or risky, but some uncertainty around outcomes
- low: straightforward, low-risk programme with low uncertainty around the outcomes
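To make the matrix concrete, the sketch below is one possible encoding of the level assignments and budget thresholds above. It is illustrative only: in particular, using budget alone as a stand-in for the qualitative ‘budget and profile’ dimension is a simplifying assumption.

```python
# Level assignments from the matrix above: (risk, budget and profile) -> level.
LEVELS = {
    ("low",    "low"): 1, ("low",    "medium"): 2, ("low",    "high"): 2,
    ("medium", "low"): 2, ("medium", "medium"): 2, ("medium", "high"): 3,
    ("high",   "low"): 2, ("high",   "medium"): 3, ("high",   "high"): 3,
}

def budget_band(budget_gbp_m: float) -> str:
    """Map a programme budget (GBP millions) to the thresholds in the notes."""
    if budget_gbp_m < 5:
        return "low"
    return "medium" if budget_gbp_m < 10 else "high"

def evaluation_level(risk: str, budget_gbp_m: float) -> int:
    """Look up the evaluation level for a programme's risk and budget band."""
    return LEVELS[(risk, budget_band(budget_gbp_m))]

# A GBP 12m, high-risk programme would warrant a comprehensive,
# externally commissioned evaluation (level 3):
print(evaluation_level("high", 12))
```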
4.6 How we will address the specific space evaluation challenges we have identified
Time lag between programme delivery and impacts being realised
We will work with UK Space Agency colleagues leading work on benefits management to develop a robust approach to long-term monitoring of programmes, beyond the end of funding. Monitoring is key for projects where impacts take a long time to be realised. This will both enable long-term evaluation and give reassurance to funders and other stakeholders that the project is on track to deliver envisaged benefits.
In addition, we will use proxy indicators where this is possible and appropriate. Please see Annex A for more on the relationship between benefits management, realisation and evaluation.
Identifying a counterfactual
We will invest in data where it is lacking (administrative data, and survey/economic data). We will also invest in scoping and feasibility studies to explore the variety of ways to identify a counterfactual.
Through our new project management systems and processes we will influence programme design earlier, to maximise the opportunity to identify counterfactuals (for example, through delaying the start of programmes for cohorts to create a comparison group).
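As a hypothetical illustration of how a delayed-start cohort can serve as a comparison group, the sketch below computes a simple difference-in-differences estimate. The cohorts, revenue figures and outcome measure are invented for illustration, not drawn from any UK Space Agency programme.

```python
import pandas as pd

# Invented monitoring data from a staggered rollout: cohort A is funded
# first; cohort B's start is delayed, so it acts as a comparison group.
df = pd.DataFrame({
    "cohort":  ["A", "A", "B", "B"],
    "period":  ["before", "after", "before", "after"],
    "revenue": [1.0, 1.8, 1.1, 1.4],   # mean revenue per firm, GBP millions
})

means = df.pivot_table(index="cohort", columns="period", values="revenue")
change_funded  = means.loc["A", "after"] - means.loc["A", "before"]   # 0.8
change_delayed = means.loc["B", "after"] - means.loc["B", "before"]   # 0.3

# Netting off the change seen in the delayed cohort strips out the
# sector-wide trend, leaving an estimate of the programme's impact.
did_estimate = change_funded - change_delayed
print(f"Difference-in-differences estimate: GBP {did_estimate:.1f}m per firm")
```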
We are committed to upskilling those with responsibility for designing programmes in different types of evaluation design to maximise the evaluability of programmes.
Data challenges
Working with colleagues across government, we will develop a long-term and strategic approach to improving the quality of data: streamlining our surveys, making them more representative of the sector, and making them more targeted, with the granularity required to support evaluation of our programmes.
Difficulty quantifying outcomes and complexity
Where counterfactual analysis is simply not possible or appropriate, we will focus on doing robust theory-based evaluations, with qualitative approaches to impact evaluation. Through qualitative impact evaluations we can explore the contribution that our programmes make to intended outcomes, especially in the short-to-medium term. Across our portfolio we will also more routinely consider the appropriateness of longer-term re-contact to explore whether benefits and impacts are achieved much later.
We will invest in process, impact and value for money evaluations, recognising that the strongest programme evaluations combine all three. We will conduct research on how to monetise benefits, for meaningful cost-benefit analysis and value for money evaluations.
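As an illustration of the kind of calculation that monetised benefits enable, the sketch below computes a discounted benefit-cost ratio from invented figures. The 3.5% discount rate follows standard Green Book appraisal guidance; everything else is hypothetical.

```python
# HM Treasury Green Book appraisal conventionally discounts at 3.5% per year.
DISCOUNT_RATE = 0.035

def present_value(cashflows: list[float], rate: float = DISCOUNT_RATE) -> float:
    """Discount a stream of annual values (year 0 first) back to today."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(cashflows))

costs    = [10.0, 5.0, 0.0, 0.0, 0.0]   # GBP millions, mostly upfront
benefits = [0.0, 0.0, 2.0, 8.0, 12.0]   # monetised benefits arrive late

bcr = present_value(benefits) / present_value(costs)
print(f"Benefit-cost ratio: {bcr:.2f}")  # above 1 suggests value for money
```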
5. Appendices
5.1 Annex A
Interaction with monitoring, benefits management and realisation
- we have separated monitoring from evaluation for this strategy. We recognise that good evaluation is underpinned by good monitoring
- a lot of monitoring will be informed by our benefits framework; a catalogue of benefits that we will use across UK Space Agency programmes to promote consistency and facilitate comparison between programmes. Some additional monitoring for the purposes of evaluation may be required, which we will review on a project-by-project basis
- we consider benefits management and realisation to be complementary to evaluation, but distinct
- benefits management focusses on direct, easy-to-measure benefits. Its purpose is to ensure benefits are being realised, and to change approach if not. It is rooted in project management
- evaluation, on the other hand, is broader. It seeks to understand ‘what works, for whom, in what circumstances and why?’
- some benefits management/realisation work will be helpful for evaluation – and evaluators should be able to make use of it
- however, benefits management/realisation work will not provide everything an evaluator needs
- specifically, it will not provide insights evaluators need on outcomes and impacts that are less tangible and harder to define and measure. Here there might be additional monitoring requirements
- given the potential for overlap the relevant teams will work together to avoid duplication
- we intend to include a visualisation of the interaction/overlap between these key terms in a future update
We will use our project management processes, business case development and quality assurance processes (discussed in section 4.3) to help us deliver this. ↩