Policy paper

DCMS Monitoring and Evaluation Strategy 2026 to 2031

Published 4 December 2025

Foreword from the Permanent Secretary: Susannah Storey

The Department for Culture, Media and Sport and our 41 arm’s length bodies (ALBs) have a vital role in supporting sectors fundamental to our economy and way of life. Our work is essential for driving growth, creating opportunities for all, and supporting a more socially cohesive country with an inclusive, positive national story.

It is crucial to use high-quality evidence in all our decision-making in order to deliver the most valuable interventions for the taxpayer. Our new Monitoring and Evaluation Strategy (2026 to 2031) is a vital commitment to developing a strong culture of evidence-based decisions, ensuring we monitor the implementation and evaluate the effectiveness of our interventions.

Building on the foundations laid by our previous strategy, our vision is clear: to enhance public value by improving our collective understanding of what works for people, places, and communities, valuing our entire nation. This strategy outlines a clear path to achieve this through three key objectives: improving the use of monitoring and evaluation in policy-making, strengthening the link between appraisal and evaluation, and supporting greater consistency and quality of monitoring and evaluation across our ALBs. 

Crucially, monitoring and evaluation is not a task for one team alone - it is a collective endeavour. A key shift in our new strategy lies in better supporting our ALBs, who deliver a significant proportion of our interventions, to strengthen their monitoring and evaluation. Only by working together, within the department and across our ALBs, can we effectively improve the value and impact of our interventions. My senior leadership team and I are dedicated to taking an active role in realising the ambitions of this strategy.

Foreword from the Director of Analysis and Chief Economist: Urvashi Parashar

Often, when the words ‘monitoring and evaluation’ are mentioned, the first thing that springs to mind might be ‘this is all theoretical and best left to experts’, or ‘it is a burden and we cannot ask people to do more’, or even ‘we need it so we can defend our spending’. These are not unreasonable responses. However, what monitoring and evaluation is really about is learning. It is how any effective organisation creates a virtuous cycle of setting a vision, delivering it and then evaluating its impact in a continuous feedback loop. Ultimately, organisational leaders must devise and own this feedback loop, and demand that the right, credible data is generated to measure performance and hold ourselves to account.

In government, the imperative to get this right is even greater, given that we must demonstrate we are always delivering value for money for taxpayers, and for society. We take this intent seriously, having published our first Monitoring and Evaluation Strategy in 2022 and initiated a whole programme of work to break down barriers to monitoring and evaluation. We have expanded our coverage of evaluations of our highest priority interventions and have introduced innovative methods to do so, embracing a culture of transparency and credibility by utilising the Evaluation Task Force’s Registry. Policy professionals and analysts have worked as one in evaluating significant interventions, such as the Multi-Sport Grassroots Facilities Programme and the Commonwealth Games, using their learning to inform future intervention design and delivery.

We are taking a further decisive step with this new strategy, where our focus will continue to be on our ability to demand, and use, evaluative learning to understand our ‘golden thread’ to impact for our businesses, places and citizens. This will enable our ministers to have a clearer line of sight between priorities and outcomes. In doing so, we are turning our attention to building evaluation capability within our biggest arm’s length bodies, who do the important work of ensuring our funding reaches those most in need. We are also committing ourselves to joining up appraisal and evaluation, so that accountability rests on measuring the extent to which we delivered what we set out to do.

In all of this, we continue to acknowledge the importance of monitoring and evaluation as a learning tool, not a mere technocratic exercise. This strategy is principles-based and proportionate, collaborative and open, and focused on rigour. This is balanced with creating the right incentives and conditions for us, our ALBs and our stakeholders to learn from what works in designing and delivering impactful policy.

Champions of the strategy

Head of Profession for Social Research: Liz Jones

Robustly evaluating our policies and programmes is essential for iteratively improving our interventions and ensuring we are providing public value. This M&E strategy will ensure that we and our ALBs are doing this consistently, and that we are using what we learn in developing future policies. There are great examples of robust M&E in the department, and this strategy encourages us to build on this through strengthening and sharing our skills and experience. I welcome the encouragement for us to work collaboratively, across professions and organisations, to achieve the strategy’s objectives.

Head of Profession for Economics: Rory Constable

Strengthening the link between appraisal and evaluation is crucial for ensuring we maximise the value delivered from public funds. This strategy signifies our serious intent to use evidence to drive informed spending decisions. Crucially, we recognise that our arm’s length bodies are integral partners who deliver a significant proportion of our interventions, and their success is our success. This strategy is therefore designed as a collective endeavour, focused on providing high-quality capacity building, such as the Evaluation Academy, to help the department and our ALBs improve our monitoring and evaluation capability in a supportive environment. This collaboration is about building learning into delivery to enhance public value, not about imposing new layers or rigid audit requirements.

Deputy Director of Youth Services: Kirby Swales

As policymakers, we know that robust evidence is the cornerstone for making effective, high-impact decisions. Monitoring and evaluation is important for ensuring our interventions are generating real public value. Whilst there are operational challenges of embedding monitoring and evaluation into policymaking, I am genuinely delighted by the improvements set out in this new strategy. Crucially, this strategy emphasises how we must use monitoring data and evidence during delivery of interventions to adapt and course-correct where needed. I call on all of our policy teams and analysts to collaborate on monitoring and evaluation to ensure that evidence truly informs every big decision we make.

Head of Public Bodies Team: Richard Bice

I support this new strategy because it offers a strong commitment to collaborating closely with our arm’s length bodies, reflecting the critical role they play in delivering our overarching objectives. Many arm’s length bodies already conduct invaluable monitoring and evaluation activities as part of delivering our interventions, and strengthened partnership working is crucial to understanding and effectively improving the value and impact of our initiatives. This strategy is built on the principle that by working together, we can leverage our collective expertise, resources and insights to maximise our positive outcomes. To foster this enhanced collaboration, it sets out pragmatic steps designed to facilitate mutual learning and the widespread sharing of best practice. This collaborative approach will ensure that our initiatives are not only well executed, but also continuously refined and improved based on real-world evidence and shared expertise.

Summary

The Department for Culture, Media and Sport (DCMS) is a UK ministerial department that supports culture, arts, media, tourism, youth and civil society. We work with 41 arm’s length bodies (ALBs) to deliver activities within sectors that are vital to the UK’s economy, way of life and reputation around the world.

Our previous strategy, which ran from 2022 to 2025, focused on laying strong foundations for monitoring and evaluation. We made significant progress on embedding an evaluation culture at our core and building and maintaining a robust evidence base.

Our new strategy aims to enhance public value by improving our collective understanding of what works for people, places, and communities. We will achieve this by working collaboratively and closely with our ALBs and ensuring interventions are designed and adapted using monitoring and evaluation evidence.

Through our new strategy, we aim to:

  1. Improve the use of monitoring and evaluation in policy making

  2. Strengthen the link between appraisal and evaluation

  3. Enhance collaboration with our ALBs to learn and share best practice across the entire departmental portfolio

This document reports progress on our previous strategy, the challenges that remain, and our priorities for the future.

1. Introduction

DCMS is a UK ministerial department that supports culture, arts, media, tourism, youth and civil society. We work with 41 ALBs to deliver activities within sectors that are vital to the UK’s economy, way of life and reputation around the world. Learning from monitoring and evaluation is essential to ensuring our interventions are effective and deliver the best outcomes.

We are driven by our 3 departmental objectives: 

  1. Growth and good jobs in every place. Directly supporting the government’s mission to kickstart economic growth. For instance, through increasing business investment in the Creative Industries from £17 billion to £31 billion by 2035, as published in our 10-year Creative Industries Sector Plan.

  2. Richer lives with choices and opportunities for all. Contributing significantly to the Opportunity mission and the Health mission. In the 2024/25 financial year, £125 million was invested in grassroots facilities, and in June 2025 £900 million was secured for grassroots and major events.

  3. A more socially cohesive country with an inclusive national story. We have a vital role to play in the government’s Safer Streets and Social Cohesion missions. The Dormant Assets Scheme, which channels unclaimed money from dormant financial accounts into government initiatives, unlocked £440 million to be used for social causes in the civil society and youth sector. Furthermore, the BBC Charter Review and Local Media Strategy will create a high-quality UK media ecosystem that is trusted by and accountable to the public.

We are committed to putting communities and places first, with local voices shaping an inclusive national story. We must use the best evidence available to inform the implementation and delivery of our interventions. Whether it is the UK’s theatres, sports complexes or charitable organisations, our analysis-driven approach is what creates a globally recognised offer for our citizens.

2. Purpose of the strategy

Our strategy will guide how we approach monitoring and evaluation as a department and with our ALBs. It will be of interest to our stakeholders and the wider public. This document reports progress on our previous strategy, the challenges that remain, and our priorities for the future.

Recognising the importance of collaboration, we have developed this strategy through extensive engagement with over 40 stakeholders. This includes analysts, policy professionals, ALB sponsors and the Evaluation Task Force (ETF). These conversations not only informed our objectives and actions, but also created a sense of shared ownership within the department.

3. Importance of monitoring and evaluation in supporting our departmental objectives 

Together, monitoring and evaluation help us understand what works and why. As per the ROAMEF cycle (Rationale, Objectives, Appraisal, Monitoring, Evaluation and Feedback), which outlines the policy development process, effective monitoring and evaluation helps us systematically plan, implement, review, and use feedback loops to refine our interventions. It is crucial in holding us accountable for our performance against our departmental objectives. It helps us adapt the delivery of our interventions to improve impacts, ensure spending decisions are informed by strong evidence, and be accountable to the taxpayer. Ultimately, effective monitoring and evaluation allows us to deliver the best outcomes for people, places and communities.

Appraisal

Appraisal is fundamentally connected to monitoring and evaluation and is first undertaken when exploring options for government interventions. The Green Book provides guidance on appraisal in government and defines it as:

The process of assessing the costs, benefits and risks of alternative ways to meet government objectives. It helps decision makers to understand the potential effects, trade-offs and overall impact of options by providing an objective evidence base for decision making

(Green Book, HM Treasury)

Monitoring data

The Evaluation Task Force defines monitoring data as data:

[…] collected throughout an intervention to provide answers to a number of policy, research and performance questions. This data typically covers all aspects of an intervention’s operation and is generally used to help track progress of an intervention’s delivery

(Evaluation Academy, The Evaluation Task Force)

Evaluation

The Magenta Book provides guidance on evaluation in government and defines evaluation as:

A systematic assessment of the design, implementation and outcomes of an intervention. It involves understanding how an intervention is being, or has been, implemented and what effects it has, for whom and why. It identifies what can be improved and estimates its overall impacts and cost-effectiveness.

(Magenta Book, HM Treasury)

It is because of robust monitoring and evaluation practice that we can demonstrate the impact and value of our interventions, which subsequently informs the design, delivery, and effectiveness of future interventions. For example, the Multi-Sport Grassroots Facilities Programme evaluation showed that a higher proportion of funded facilities (50%) reported an increase in sustained participation since April 2021 than unfunded sites (39%). Case studies have also shown significant uplifts in participation, particularly from younger people and women and girls, and presented numerous improvements in wider impacts and outcomes such as ‘pride in place’. Evaluative evidence is at the core of our work through spending reviews, where we reprioritise spend to find efficiencies and drive value from our interventions.

Delivering strong monitoring and evaluation relies on meaningful collaboration between stakeholders. From the earliest stages of designing an intervention through to dissemination of findings, building our evidence base is a shared responsibility across the department and our ALBs.

4. How we monitor and evaluate our progress against departmental objectives

Monitoring and evaluation is crucial for assessing our performance against departmental objectives, as set out in our Impact Framework. This is our departmental-level theory of change which outlines how our interventions are intended to lead to our outcomes and objectives. The Impact Framework includes associated metrics, largely drawing upon our portfolio of official statistics and surveys, for measuring and reporting success.

The fundamental building blocks for understanding our success start with monitoring and evaluation of our interventions. We must ensure we have the right data to monitor and evaluate our impact and, importantly, use this data to adapt our interventions based on evidence of what works. Alongside this, we are prioritising boosting our data infrastructure. We have launched our new Place Analysis Tool, which enables us to undertake instant place-based analysis with sub-regional data related to our funding, assets and outcomes. By combining these data sources, we ensure we are best able to draw insights that support monitoring of progress against our objectives.

5. Our approach to prioritisation 

Monitoring and evaluation activity should be proportionate. We should prioritise resources and effort towards the questions that matter most and where evidence is weaker. 

We are committed to robustly monitoring and evaluating our Government Major Projects (GMPs); these are automatically prioritised. The Government Major Projects Portfolio (GMPP) is a collection of the government’s most complex and strategically significant projects. All of our GMPs have a monitoring and evaluation plan or are in the process of developing one, as recognised by the Evaluation Task Force’s review (2025). Some of our GMPs are led by our ALBs on our behalf. When available, monitoring and evaluation plans and reports are published on the Evaluation Registry. Our GMPs with current or upcoming proportionate monitoring and evaluation plans are listed as follows:

  • Natural History Museum Unlocked
  • Youth Investment Fund
  • British Museum Energy Centre Programme
  • Euro 2028
  • 4th National Lottery Licence Competition 

For other interventions, we use different criteria to guide our decisions on monitoring and evaluation priorities, including overall intervention cost, whether the intervention is innovative or novel, the strength of the existing evidence base, cost of the evaluation activity, and the intervention’s importance. 

6. Our commitment to best practice

To ensure our approach to monitoring and evaluation follows best practice, we are committed to following His Majesty’s Treasury’s (HMT) Magenta Book guidance on evaluation and HMT’s Green Book guidance on appraisal and evaluation.

In line with the Magenta and Green Books, we are improving our monitoring and evaluation capabilities for place-based interventions and putting specific emphasis on spatial considerations in evaluating the impact of our programmes. We will also upload all of our evaluations onto the Evaluation Registry and will encourage our ALBs to do so, in line with official guidance. The Evaluation Registry is an online record of planned, ongoing and completed government evaluations from all departments, owned by the Cabinet Office and HMT.

7. Our engagement with monitoring and evaluation experts

We will continue to draw on the expertise of external monitoring and evaluation professionals through our Evaluation Methods Seminar Series, our College of Experts and the Evaluation Task Force’s Evaluation and Trial Advice Panel. In particular, we are engaging place-based evaluation experts to improve monitoring and evaluation of our place-based interventions. External experts bring valuable technical skills that strengthen the quality of our evidence. 

8. Our progress against monitoring and evaluation challenges 

Our progress 

Our previous strategy, which ran from 2022 to 2025, focused on developing strong foundations for monitoring and evaluation. It set out a vision for embedding an evaluation culture at our core and building and maintaining a robust evidence base, ensuring:

  • monitoring and evaluation evidence produced is timely, proportionate and needs driven
  • monitoring and evaluation is consistently, reliably and confidently used in decision making 
  • monitoring and evaluation is firmly embedded across the organisation 

We have successfully implemented many initiatives, including the creation of a new in-house evidence log, regular reviews of monitoring and evaluation risks, and annual reviews of our highest priority evaluations. All of our GMPs have plans for monitoring and evaluation. Our progress is summarised in Table 1. We have also been exploring opportunities to improve our data infrastructure through data linkage and to identify ethical and appropriate applications of AI within our evaluation methods.

Table 1: DCMS Monitoring and Evaluation Strategy (2022) progress update

1. Prioritisation and proportionality

Aim: Evaluation activity should be proportionate. There is a need to balance good-quality M&E against other priorities to ensure that value for money is delivered for the taxpayer. Priority evaluations are identified early and teams have adequate resources and capability to deliver these.

Progress: Our internal monitoring and evaluation tracker improves department-wide evaluation delivery monitoring, enabling us to identify risks. We undertake bi-monthly meetings with analytical teams to track progress and address risks. We have formal and informal routes to provide monitoring and evaluation support. The Evaluation Community remains a successful initiative for overcoming challenges to monitoring and evaluation.

2. Operationalising monitoring and evaluation

Aim: Monitoring and evaluation should be incentivised. M&E will be embedded into corporate governance and approvals so it is part of the key decision-making processes. Any deviations from the requirement to consider M&E are escalated.

Progress: We have set requirements for all business cases and impact assessments to include a Theory of Change and comprehensive SMART objectives to define success and identify risks early. There are requirements for Post-Implementation Review findings to be shared. All priority evaluation projects are overseen by an advisory or steering group, with standard evaluations encouraged to have a proportionate steering group.

3. Capability building

Aim: There will be a department-wide culture of learning, including advancing evaluation techniques. Evaluation Community members will share expertise. We will develop a learning curriculum and a competency framework.

Progress: We have successfully delivered the Evaluation Task Force’s Evaluation Academy training to analysts and policy professionals. The competency framework will be refined to incorporate evolving methods, such as AI. Teams now access sufficient evaluation expertise internally, through our College of Experts and the Evaluation Task Force’s Evaluation and Trial Advice Panel.

4. Evidence building and feedback

Aim: To build feedback and learning into the core of departmental activities to achieve a stronger M&E culture.

Progress: The Evaluation Registry is our key communication tool for sharing our evaluations. There is an internal evidence log which summarises key findings. We learnt that seminars are a more engaging way of presenting findings than newsletters. Priority evaluations receive a debrief within one month of their final report’s publication.

The importance of further progress in monitoring and evaluation in our department

Our work is not over yet. Through extensive engagement, we gathered valuable insights into the key barriers to and facilitators of effective and proportionate monitoring and evaluation. Overall, these challenges are not uncommon and reflect the complexity of evaluating government interventions.

1. Embedding monitoring and evaluation consistently into policy development and delivery

It is important that monitoring and evaluation activity in the department is consistently and effectively embedded throughout the policy lifecycle. In time-pressured environments, there is a risk that monitoring and evaluation is seen as an ‘add on’ rather than an integral part of policy design and delivery. This may also mean we focus only on evidence directly related to our individual interventions, missing opportunities to review and learn from wider evaluative evidence in related sectors or internationally.

Focusing on monitoring and evaluation early and using tools like theories of change and logic models ensures policy is designed in tandem with robust monitoring and evaluation plans. In particular, improving how monitoring data is set up and used can enable more agile delivery, helping teams adapt interventions in real time. We will also make more use of generalisable findings of what works from similar interventions in other sectors or internationally.

2. The importance of feedback loops and aligning appraisal and evaluation

There is an opportunity to create a more coherent approach across the policy cycle by improving alignment between appraisal and evaluation. When undertaking appraisal, we need to create opportunities to make full use of existing evaluation evidence when designing new interventions. We should also ensure those leading evaluations are familiar with the initial appraisal that was undertaken. Without this join-up, we miss opportunities to consistently test whether the estimated value of an intervention is realised.

As a result, we do not always compare the results of our intervention directly against what we originally expected it to deliver. Encouraging greater collaboration between these functions will improve our ability to compare expected and actual outcomes.

3. Greater collaboration with our ALBs to learn and share best practice

Our ALBs are engaged in valuable monitoring and evaluation that is essential for understanding the collective impact we have on our sectors. To enhance this, we will work more closely with our ALBs, providing more structured support on monitoring and evaluation and creating opportunities to learn from each other and share best practice.

Consistent with the challenges we face in the department, approaches and quality can vary. In some cases, monitoring and evaluation is viewed as a compliance or audit function. We want to work with our ALBs to change this perception and to emphasise the importance of monitoring and evaluation as a learning tool for guiding delivery and improving outcomes. Our ALBs are integral partners, vital to our departmental outcomes and the success of this strategy. With approximately three quarters of our total spend being delivered by our ALBs, we must work closely together to drive up monitoring and evaluation standards and ensure we are maximising the value of every pound across our entire portfolio.

Currently, the support we offer our ALBs on monitoring and evaluation is informal; we want to change this. There is an opportunity to be more collaborative, offering clearer expectations, dedicated support, and sharing of best practice to build our collective evidence base. Given the diversity of our ALBs, it is important that we tailor support to individual needs and create opportunities for learning from those delivering good monitoring and evaluation.

9. Our Monitoring and Evaluation vision

Our new strategy aims to enhance public value by improving our collective understanding of what works for people, places, and communities. We will achieve this by working collaboratively and closely with our ALBs and ensuring interventions are designed and adapted using monitoring and evaluation evidence. We want to evolve as a department. Collaboration between stakeholders is at the heart of successful monitoring and evaluation: it is a shared responsibility, requiring active engagement from policy teams, ALBs, and wider stakeholders.

We are also committed to getting better at monitoring our interventions in flight. High-quality, timely monitoring data is fundamental to informing policy decisions and conducting robust evaluations. We must ensure we have the essential data infrastructure in place to monitor impact quickly and effectively.

Our strategic objectives and activities

We will assess success against 3 objectives:

  1. Embedding monitoring and evaluation more consistently into policy development and delivery.

  2. Strengthening the link between appraisal and evaluation.

  3. Enhancing collaboration with our ALBs to learn and share best practice across the entire departmental portfolio.

Each objective is supported by actions, developed in response to challenges raised during our engagement sessions with stakeholders. For each action, we have outlined a summary of the challenge(s) it aims to address and the intended outcomes. We have produced a logic model, including alternative text to support accessibility, which visually maps how our actions will deliver the intended outcomes (see Figure 1).

Figure 1: DCMS Monitoring and Evaluation Strategy logic model


Figure 1: full text description

Figure 1 is an illustration of a logic model, depicting the causal links between our strategy’s activities and intended outcomes. This outlines what we will implement and how this will lead to our intended outcomes. It details the strategy’s activities (what we will do with the resources committed), the outputs (the direct results of implementing the activities), the intermediate outcomes (the mid-term effects of the outputs, typically changes in learning and actions) and the long-term outcomes (changes in conditions that are the ultimate goal of our strategy).

To deliver against Objective 1 (embedding monitoring and evaluation more consistently into policy development and delivery) we will:

  • co-produce training to support policy professionals
  • create guidance and tools to support the development of monitoring frameworks
  • develop Evidence-to-Action workshop tools

This will lead to the following intermediate outcomes:

  • improved M&E capabilities across our department and our ALBs
  • improved understanding of roles and responsibilities
  • improved use and understanding of monitoring and evaluation evidence

Finally, this will contribute towards the following long-term outcomes:

  • improved M&E team engagement with stakeholders
  • the department and our ALBs producing more robust evaluations 
  • improved accessibility and dissemination of evaluation evidence
  • evaluation evidence continually shaping policy design and decisions
  • more proportionate monitoring and evaluation activity

To deliver against Objective 2 (strengthening the link between appraisal and evaluation) we will:

  • pilot the Value for Investment (VfI) approach
  • better integrate appraisal and evaluation functions in decision making, through adapted impact assessment and business case templates and by ensuring appraisal information is shared with evaluation suppliers

This will lead to the following intermediate outcomes: 

  • improved M&E capabilities across the department and our ALBs
  • improved harmonisation of appraisal and VfM evaluation practices

The final long-term outcomes include:

  • improved M&E team engagement with stakeholders
  • the department and our ALBs producing more robust evaluations
  • more proportionate monitoring and evaluation activity
  • evaluation evidence continually shaping policy design and decisions
  • improved accessibility and dissemination of evaluation evidence
  • a strengthened link between appraisal and evaluation

To deliver against Objective 3 (enhancing collaboration with our ALBs to learn and share best practice across the entire departmental portfolio) we will:

  • pilot the Evaluation Academy with our ALBs
  • introduce a quarterly ALB forum

This will lead to the following intermediate outcomes:

  • improved use and understanding of M&E evidence
  • improved harmonisation of appraisal and VfM evaluation practices
  • improved understanding of ALB M&E activity

The final long-term outcomes include:

  • improved M&E team engagement with stakeholders
  • the department and our ALBs producing more robust evaluations
  • evaluation evidence produced is more aligned with the department’s strategic priorities
  • M&E activity is more proportionate
  • improved accessibility and dissemination of evaluation evidence
  • evaluation evidence continually shaping policy design and decisions
  • the link between evaluation and appraisal is strengthened

Objective 1: Embedding monitoring and evaluation more consistently into policy development and delivery

To fundamentally shift our approach to monitoring and evaluation, we must ensure monitoring and evaluation is prioritised early in the policy cycle. We will work with senior champions in the department to embed this principle. By securing senior sponsorship, we will reinforce the message that learning from previous evaluative evidence and designing robust monitoring and evaluation activities is an integral part of high-quality policy design.

Action 1a: Enhance our evidence-led policy culture and co-develop training with policy professionals to support understanding of the importance of monitoring and evaluation in driving outcomes (by March 2027).

We will co-develop and deliver dedicated training and tools to policy professionals, drawing on the ETF’s materials. Co-developing these resources is important for generating a mutual understanding of where monitoring and evaluation can most effectively support policy design and delivery. This initiative aims to improve our monitoring and evaluation capability, understanding of roles and responsibilities, and use and understanding of evidence throughout the policy cycle. The training will equip us with practical tools, including how to develop robust theories of change and effective monitoring frameworks, particularly for place-based interventions. This approach ensures we enhance our collective understanding of where evaluative learning can improve policy design and delivery to maximise outcomes for people, places and communities.

Action 1b: Produce monitoring and dissemination resources and utilise our governance processes to enable more agile, iterative delivery (by May 2026).

To embed a culture where monitoring and evaluation is valued and actively used, we will focus on developing practical tools and ensuring evidence reaches the right people at the right time to inform policy decisions. 

We will produce practical resources and utilise existing governance processes to help us and our ALBs design effective monitoring frameworks and use evidence to inform in-flight decision making. This aims to improve monitoring and evaluation capability across the department and our ALBs, understanding of roles and responsibilities, and the use and understanding of evidence.

This will include:

  • Practical guidance for design and use of monitoring frameworks: ensuring monitoring data is sufficiently captured and used to inform decision making. This will also include specific guidance on collecting monitoring data for place-based evaluations.
  • Setting clear expectations for monitoring: refreshing our monitoring requirements documentation for business cases and impact assessments to scrutinise monitoring plans more consistently and ensure accountability for active use of evidence in policy development.
  • Sharing best practice: providing examples of what good monitoring frameworks look like in practice, including monitoring frameworks for place-based interventions.
  • Facilitating evidence application: creating tools, such as ‘Evidence-to-Action’ workshops, which aim to facilitate the application of monitoring and evaluation evidence for informing in-flight delivery decisions.

Objective 2: Strengthening the link between appraisal and evaluation

Join-up between appraisal and evaluation is crucial for continuous learning. Better integrating these two processes ensures we capture and learn from any differences between the estimated and actual value of our interventions. It will create more opportunities to use evaluation evidence to improve appraisals when designing new interventions and allow us to use information from appraisals to maximise what we learn from monitoring and evaluation. In doing this, we should ensure we are aligned with existing guidance, including the Green Book, Magenta Book, and the Culture and Heritage Capital (CHC) Programme. In particular, the CHC Framework is used within our appraisals and evaluations to value the costs and benefits of our interventions in the creative, culture, and heritage sectors.

Action 2a: Link appraisal and evaluation within our core decision-making processes (by May 2026).

We will identify opportunities to facilitate the link between appraisal and evaluation within our core decision making processes. To deliver this, we will: 

  • build capability across our multi-disciplinary analysis function: ensuring those responsible for writing economic cases are also well versed in evaluative approaches, and vice versa.
  • encourage collaborative working: at the appraisal stage of an intervention, we will actively encourage cross-discipline working. This early collaboration will foster shared ownership of the intervention’s success and ensure the monitoring and evaluation plan directly measures whether the estimated costs and benefits have been realised.
  • share appraisals with those undertaking evaluations: outputs of appraisals, showing estimated benefits of an intervention, will be shared with evaluation suppliers for our externally commissioned evaluations. This will help to integrate the strategic objectives and estimated benefits from the original business case and impact assessment into monitoring and evaluation plans. 
  • embed evidence into decision-making: our internal evidence log and the Evaluation Registry will be promoted to encourage us to better embed evaluative evidence in our early-stage policy development. We will also explore opportunities to improve our evidence log through the use of AI to identify and synthesise evidence across our sectors.

Action 2b: Pilot the Value for Investment approach within our evaluations, including opportunities to build into our business case and impact assessment processes (by December 2026).

We will pilot the OPM/King Value for Investment (VfI) approach within a small number of our evaluations and, if successful, build the VfI principles into our business case and impact assessment processes. By collaboratively defining success at the appraisal stage, VfI directly supports our goal of harmonising appraisal and evaluation: subsequent monitoring and evaluation activities measure what stakeholders and decision makers agreed as the intervention’s value proposition. This strengthens shared ownership and consistency between appraisal and evaluation.

This approach is not a method, but a framework for understanding the broader value of interventions. Its core strength is the use of participatory approaches with stakeholders to more clearly define success at the start of intervention design. This is typically undertaken using rubrics, a scoring tool which outlines the criteria for success and the standards the intervention is expected to meet. Mixed-methods evidence collected through evaluation is then applied to the rubrics framework to assess an intervention’s value holistically. It does not replace standard economic metrics, but provides a framework for interpreting them, alongside clearer standards for success across other aspects of the intervention.
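
To make the rubric concept concrete, the short sketch below shows one way a rubric-based assessment could be represented. It is a minimal, hypothetical illustration in Python: the criteria, standards and the ‘weakest criterion’ aggregation rule are invented for demonstration purposes and are not drawn from the VfI guidance or any DCMS rubric.

    # Hypothetical sketch of a VfI-style rubric (illustrative only; the
    # criteria, standards and aggregation rule below are invented and do
    # not reflect any actual DCMS or VfI rubric).
    from dataclasses import dataclass

    STANDARDS = ["poor", "adequate", "good", "excellent"]  # ordered low to high

    @dataclass
    class Criterion:
        name: str    # aspect of the intervention's agreed value proposition
        rating: str  # judgement reached by applying mixed-methods evidence

    def overall_judgement(criteria: list[Criterion]) -> str:
        """Aggregate per-criterion ratings into a holistic judgement.

        One simple convention: cap the overall rating at the weakest
        criterion, so poor performance against an agreed success criterion
        is not averaged away by stronger results elsewhere.
        """
        return min(criteria, key=lambda c: STANDARDS.index(c.rating)).rating

    rubric = [
        Criterion("Reach among target groups", "good"),
        Criterion("Sustained participation", "excellent"),
        Criterion("Cost relative to appraisal estimate", "adequate"),
    ]
    print(overall_judgement(rubric))  # prints "adequate"

In practice, rubric judgements are reached deliberatively with stakeholders rather than computed; the sketch simply illustrates how criteria, standards and a holistic judgement relate to one another.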

As VfI is a more novel approach to evaluating the value of our interventions, the pilot is an opportunity to test, learn and build internal expertise. Depending on the success of the pilot, this will involve:

  • reviewing business case and impact assessment templates to embed VfI principles
  • building upon existing resources to develop clear guidance on what VfI is and how to apply it in practice
  • identifying opportunities for monitoring and evaluation plans to include a VfI approach

Objective 3: Enhancing collaboration with our ALBs to learn and share best practice across the entire departmental portfolio

ALBs have an important role in delivering our shared objectives. Many of our ALBs undertake strong monitoring and evaluation activity. We would like to build upon this existing expertise, working with and supporting our ALBs in sharing best practice. In addition, we will be partnering with a few ALBs to help them develop and publish their own monitoring and evaluation strategies. Guidance and support will be available to all other ALBs who would like to develop a strategy too. By working together, we can generate strong evidence to maximise positive outcomes for people, places and communities.

Action 3a: Adapt and pilot the Evaluation Academy with a sample of our ALBs (by December 2026).

The Evaluation Academy is a high-quality training offer launched by the ETF and is tailored by departments across the UK government. It is currently delivered within the department and covers introductory topics such as scoping an evaluation, evaluation methods, and communicating evidence to inform decision-making. 

The Evaluation Academy will be adapted and piloted with a sample of our largest ALBs. Extending this training to our ALBs aims to improve their evaluation capabilities, enhance understanding and use of evaluative evidence, and clarify roles and responsibilities in monitoring and evaluation. We hope this also improves our understanding of monitoring and evaluation activities undertaken by ALBs and the challenges they face so that we can better support them.

Action 3b: Initiate a quarterly ALB M&E forum to communicate evaluation standards, share guidance and learning opportunities, and share evidence (by May 2026).

We will launch a quarterly forum for those within ALBs working on monitoring and evaluation. This forum will be a supportive and collaborative development space, focused on strengthening capabilities, overcoming challenges, and sharing good practice. The objectives of the forum will be to:

  • share central monitoring and evaluation guidance 
  • promote best practice in monitoring and evaluation across ALBs 
  • support monitoring and evaluation learning and capacity building within ALBs 

Through this forum, we aim to strengthen evaluation capabilities, clarify roles and responsibilities in monitoring and evaluation, and work together to improve our collective understanding of the impact of our interventions in the sectors we jointly support.

Our governance and resourcing for monitoring and evaluation

The majority of our monitoring and evaluation activities are commissioned and undertaken independently by external experts. Our monitoring and evaluation activity is governed by several advisory and assurance processes. These are detailed below.

When designing and commissioning monitoring and evaluation projects:

  • we offer methodological advisory support across the department via our internal panel of analytical advisors. In some cases, we engage with external experts via the ETF’s Evaluation and Trial Advice Panel and our College of Experts
  • analytical leaders review and approve monitoring and evaluation plans prior to procurement
  • all of our monitoring and evaluation plans are entered into our internal monitoring and evaluation tracker and uploaded onto the Evaluation Registry. Priority evaluations are identified early and subject to additional assurance.

During project delivery:

  • our Monitoring and Evaluation Team reviews the progress of monitoring and evaluation projects with teams via bi-monthly meetings. Risks are identified and solutions are discussed together.
  • quality assurance processes are followed to ensure draft outputs are methodologically robust and fit for purpose
  • for our high priority projects, a steering and/or advisory group is created and provides expert technical advice on the monitoring and evaluation project throughout delivery

When publishing monitoring and evaluation reports:

  • monitoring and evaluation reports are reviewed by analysts, policy professionals, and senior leaders to assure the quality of our reports and promote transparency in our methods. For our high priority projects, our Director of Analysis provides final assurance prior to publication. 
  • all of our monitoring and evaluation projects adhere to the Government Social Research Publication Protocol, which sets out five principles for the publication and release of all government social research products.

Published monitoring and evaluation reports are uploaded to the Evaluation Registry and are actively disseminated.

10. Assessing our progress and success

The implementation and monitoring of the strategy are the responsibility of our Director of Analysis (see Table 2). We will publish an interim progress update and will measure success through a combination of quantitative and qualitative data (see Table 3).

The likelihood of achieving the intended outcomes is reliant on collaboration between multiple stakeholders. Table 2 details core governance responsibilities of our stakeholders. There will be two main governance mechanisms:

1. Quarterly monitoring and challenge panel: We have assigned a senior group of champions who will attend a quarterly monitoring and challenge panel. This group will receive progress updates, steer our activities, and support us in overcoming implementation barriers. 

2. Annual performance meeting: We will present annually to the department on whether we are achieving our intended outputs and outcomes.

These two mechanisms will be our main check-ins with senior leaders and the department to discuss the implementation and success of our strategy.

Table 2: Core governance responsibilities of our stakeholders

Director of Analysis
  • Overall responsibility for the implementation of the strategy.

Senior Strategy Champions
  • Advocate and champion the strategy with the department and our ALBs, promoting the value of robust monitoring and evaluation.
  • Encourage their teams and ALBs to undertake learning and development on monitoring and evaluation.
  • Promote collaboration between policy professionals and analysts to design robust monitoring and evaluation.
  • Provide support in overcoming challenges to implementing the strategy.

Central Analysis Team
  • Promote the evaluation strategy with the department and our ALBs, working collaboratively with teams.
  • Monitor the implementation and impact of this strategy, using feedback and data to adapt delivery where required.
  • Ensure that the department and our ALBs are equipped with the right skills, knowledge and behaviours to successfully implement the strategy.
  • Monitor and report risks to implementing the strategy to the Analytical Leadership Team to identify mitigations.
  • Engage with other government departments, including the Evaluation Task Force, to overcome challenges to implementation and share lessons learnt.

Evaluation Community and Analysts
  • Build monitoring and evaluation expertise throughout the department and our ALBs.
  • Advocate for the robust use of monitoring and evaluation methods and promote best practice within the department and our ALBs.
  • Provide monitoring and evaluation advice.
  • Engage policy professionals to ensure monitoring and evaluation is considered throughout intervention design and delivery.

Policy Professionals
  • Develop an awareness of the strategy and understand the importance of timely and robust monitoring and evaluation.
  • Understand how to develop a theory of change and/or logic model.
  • Engage with their analytical colleagues in a timely manner to consider monitoring and evaluation at the start of intervention design.
  • Support the co-production of training to improve the policy profession’s knowledge of monitoring and evaluation.
  • Participate in ‘Evidence-to-Action’ workshops to use monitoring and evaluation to inform future intervention delivery.

ALB Sponsors
  • Promote and support the implementation of the Evaluation Academy and ALB Monitoring and Evaluation Forum.
  • Advocate for the use and delivery of robust monitoring and evaluation within ALBs.

Table 3: Summary of our approach to monitoring the success of our strategy

Long-term outcome: Improved Monitoring and Evaluation Team engagement with stakeholders
Indicators:
  • Annual monitoring and evaluation attitudes and perceptions survey, including response rates from a wide range of stakeholders across the department and our ALBs. This will include questions focusing on perceptions of engagement by the Monitoring and Evaluation Team and the department.
  • Quality of monitoring and evaluation plans, assessed using a quality assessment tool.
  • Output indicators of success: attendance and (ad hoc and formal) feedback from the Evaluation Academy, the ALB Monitoring and Evaluation Forum, and policy training sessions.

Long-term outcome: DCMS and ALBs produce more robust evaluations
Indicators:
  • Annual monitoring and evaluation attitudes and perceptions survey. This will include questions focusing on internal and wider capability, and roles and responsibilities within monitoring and evaluation.
  • Quality of monitoring and evaluation plans, assessed using a quality assessment tool.

Long-term outcome: Evaluation evidence produced is more aligned with DCMS strategic priorities
Indicators:
  • Annual monitoring and evaluation attitudes and perceptions survey. This will include questions focusing on addressing evidence gaps and the department’s strategic priorities.

Long-term outcome: M&E activity is more proportionate
Indicators:
  • Annual monitoring and evaluation attitudes and perceptions survey. This will include questions focusing on how proportionate monitoring and evaluation activity is.
  • Output indicators of success: monitoring framework and ‘Evidence-to-Action’ workshop resources developed and shared.

Long-term outcome: Improved accessibility and dissemination of evaluation evidence
Indicators:
  • Output indicators of success: ‘Evidence-to-Action’ workshop materials and number of sessions.
  • Increase in the department’s and our ALBs’ uploads to the Evaluation Registry, including tracking specific increases in approaches and methodologies.
  • Annual monitoring and evaluation attitudes and perceptions survey. This will include questions focusing on monitoring and evaluation outputs and the dissemination of monitoring and evaluation activity in the department, across our ALBs and to the general public.

Long-term outcome: More robust evaluation evidence is used in decision making
Indicators:
  • Annual monitoring and evaluation attitudes and perceptions survey. This will include questions focusing on the use of robust evaluation evidence in the department and our ALBs.
  • Attendance and pre- and post-session feedback from policy training and Evaluation Academy sessions on skills and confidence in monitoring and evaluation.

Long-term outcome: The link between ex-ante appraisal and ex-post evaluation is strengthened
Indicators:
  • Annual monitoring and evaluation attitudes and perceptions survey. This will include questions focusing on appraisal and evaluation, the ROAMEF cycle and cross-profession working.
  • Feedback from those participating in the VfI pilot through internal advisory and dissemination activity.
  • Output indicators of success: revision of business case and impact assessment templates, VfI pilots and training, and the implementation of a formal system for ensuring retention of appraisal approaches throughout an intervention’s lifecycle.