GCRF value for money assessment: executive summary (Year 5)
Updated 15 September 2025
This report presents the findings from the second and final Value for Money (VfM) assessment of the Global Challenges Research Fund (GCRF). Assessing VfM provides insight into how resources are used and whether the resulting outcomes and impacts justify the resources invested. This report provides a summative assessment of VfM for the Fund, ensuring accountability for the investments made, and advances VfM assessment methodologies by generating lessons for future application in official development assistance (ODA) research and innovation (R&I) funds. The assessment found that 89% of the sampled GCRF awards demonstrated adequate, good or excellent performance, meeting, and in many cases exceeding, GCRF’s VfM standards. The assessment concluded that the results produced by the Fund represented good value for the investment. The sampled awards were diverse and showed different strengths but, as a whole, generated substantial value through a balanced set of outcomes in line with the Fund’s strategic aims.
The Global Challenges Research Fund
Launched in 2016, GCRF was a £1.5 billion R&I fund, managed by the United Kingdom’s (UK’s) Department for Science, Innovation and Technology (DSIT). It was established to support the United Nations Sustainable Development Goals (SDGs) and aimed to enhance research excellence, international research partnerships – particularly with low- and middle-income countries (LMICs) – and impact-driven research.
This report was produced in 2024. The GCRF has now closed. Since then, the government has taken the difficult decision to temporarily reduce Official Development Assistance (ODA) to the equivalent of 0.3% of GNI by 2027 to fund an increase in defence spending. The government remains committed to international development and to returning ODA to 0.7% of GNI when fiscal conditions allow.
Assessing VfM in GCRF
This report presents a summative assessment of VfM in GCRF, drawing on five years of evaluation activities and evidence, and comprises two analyses. To support accountability for the investments made, the first is a summative analysis of VfM in GCRF, aggregating quantitative data from two samples of GCRF awards assessed over two years (2024[footnote 1] and 2025). The second analysis is a standalone assessment of 31 awards conducted in the final year of the evaluation (2025). This analysis draws on output and outcome data collected in previous stages of the evaluation to provide a stronger evidence base for assessment, expanding our understanding of how VfM is realised in GCRF as the Fund matures. As part of the 2025 VfM assessment, we also present a qualitative analysis of the drivers of VfM in GCRF, including the enablers of and barriers to value generation, building the knowledge base on how VfM is realised in ODA R&I funds.
To evaluate VfM in GCRF, we developed an innovative rubric-based approach in collaboration with Partner Organisations (POs) and DSIT. A highly quantitative approach to assessing VfM is not suitable for a fund such as GCRF, where many of the intended outcomes are intangible, non-monetisable and not easily quantifiable. The rubric-based approach allowed for flexibility by defining a series of VfM performance dimensions and subdimensions covering the main ‘value’ propositions of GCRF.
The rubric-based approach is centred on the 4Es framework developed by FCDO[footnote 2] and tailored to align with GCRF’s intended value proposition. In line with the evaluation’s theory-based approach, the rubric maps onto the GCRF Theory of Change (ToC), assessing the value generated at each stage. The rubric includes four dimensions, corresponding to the 4Es, with a fifth ‘E’ – Equity – integrated across dimensions. ‘Economy’ assesses inputs; ‘Efficiency’ assesses how inputs are converted to outputs through award activities and processes; ‘Effectiveness’ assesses outputs and outcomes; ‘Cost-effectiveness’ compares costs at the input stage to monetary benefits at the output and outcome stages; and ‘Equity’ is assessed across all stages of the ToC.
Figure 1: Overview of the approach to VfM assessment in GCRF
The four dimensions of the GCRF VfM rubric – corresponding to the 4Es – are underpinned by 14 subdimensions (SDs). The subdimensions were developed based on evidence collected over four years of the GCRF evaluation and on a broader understanding of the key factors that drive value in ODA R&I; they articulate the value that should be invested or generated within each dimension. These align with GCRF’s value proposition (the value the Fund intended to create) and its strategic aims (the overall impact it sought to achieve). The framework is illustrated in Table 1.
Table 1: Dimensions and subdimensions of the VfM rubric
| GCRF VfM rubric |
| --- |
| Dimension 1: Investments in foundations for development impact (Economy) |
| SD1.1 Research innovation/originality |
| SD1.2 Investment in interdisciplinary cross-sectoral research in design |
| SD1.3 Investment in equality, diversity and inclusion processes (Equity) |
| SD1.4 Investment in equitable partnerships and collaborations in design (Equity) |
| Dimension 2: Engagement and willingness to invest in outputs (Efficiency) |
| SD2.1 Investment in LMIC capacity building (Equity) |
| SD2.2 Equitable balance of research funding between UK and LMIC partners (Equity) |
| SD2.3 Investment in strategies to position research for use (e.g. comms) |
| Dimension 3: Investments to act on results to deliver outcomes (Effectiveness) |
| SD3.1 High-quality research and innovation, positioned for use |
| SD3.2 Sustainable, equitable partnerships (Equity) |
| SD3.3 Enhanced challenge-oriented capabilities[footnote 3] |
| SD3.4 User-side stakeholder networks established |
| Dimension 4: Compares short-term monetary benefits to costs (Cost-effectiveness) |
| SD4.1 Leverage of investment from non-GCRF sources per £1 GCRF |
| SD4.2 LMIC Principal Investigators (PIs) secure further research funding, per £1 of GCRF funding (Equity) |
| SD4.3 Matched funding achieved by a subset of innovation, market-facing awards per £1 of GCRF funding |
The GCRF VfM rubric establishes tailored performance standards for each rubric subdimension, assessed using a five-point rating scale. Each award is assessed against these standards and rated ‘unacceptable (0)’, ‘poor (1)’, ‘adequate (2)’, ‘good (3)’, ‘excellent (4)’, ‘not applicable’, or ‘insufficient evidence’. Ratings are qualitatively defined for each subdimension, providing a clear guide for award assessment. Overall:

- Unacceptable performance describes awards that have failed to generate value, as defined by GCRF’s value proposition.
- Poor performance describes awards that have generated slightly less GCRF-relevant value than the resource invested.
- Adequate performance describes awards that have generated value, as defined by GCRF’s value proposition, that meets the level of resource invested.
- Good performance describes awards that have generated more GCRF-relevant value than the resource invested.
- Excellent performance describes awards that have generated substantially more GCRF-relevant value than the resource invested.

In other words, ‘adequate’ awards have done what they were expected to do, ‘good’ awards have done more than expected, and ‘excellent’ awards have done substantially more than expected.
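To illustrate how the qualitative ratings described above map to the numeric scores used in the averages and figures that follow, the minimal Python sketch below encodes the five-point scale and the treatment of non-scored ratings. The award data and structure are invented for illustration; this is not the evaluation team’s actual tooling.

```python
# Illustrative sketch only: a minimal representation of the five-point rubric
# scale and one hypothetical award's subdimension ratings.

RATING_SCALE = {
    "unacceptable": 0,  # failed to generate GCRF-relevant value
    "poor": 1,          # slightly less value than the resource invested
    "adequate": 2,      # value meets the level of resource invested
    "good": 3,          # more value than the resource invested
    "excellent": 4,     # substantially more value than the resource invested
}

# 'Not applicable' and 'insufficient evidence' carry no numeric score and are
# excluded from any averaging.
NON_SCORED = {"not applicable", "insufficient evidence"}


def score(rating: str) -> int | None:
    """Map a qualitative rubric rating to its numeric score (None if non-scored)."""
    return None if rating in NON_SCORED else RATING_SCALE[rating]


# Example: a hypothetical award rated against three Economy subdimensions.
award_ratings = {"SD1.1": "good", "SD1.3": "poor", "SD1.4": "not applicable"}
print({sd: score(r) for sd, r in award_ratings.items()})
# -> {'SD1.1': 3, 'SD1.3': 1, 'SD1.4': None}
```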
Given the diversity of awards funded through GCRF, we classified awards into a typology, enabling VfM assessment through comparison of awards with similar characteristics. Box 1 provides a brief introduction to the award types featured in this report.
Box 1: Overview of GCRF award types featured in this report[footnote 4]:
Thematic research grant programme-funded projects were led by a UK-based principal investigator (PI) in response to a specific thematic call.
Strategic investments funded one-off projects or activities. All such awards within this sample focused on secondary data analysis (i.e. they were desk-based work focusing on analysis of existing data sets).
Applied innovation grants were more applied in nature, involving collaborations with industrial partners to work on later stages of research.
Network awards provided funding to build sustained engagement and collaboration on emerging or challenging research areas. Often these awards also included activities such as workshops, events and communications to establish new relationships.
Early and mid-career awards were research grants directed to researchers in the early or middle stages of their careers. Our sample included early career awards from two distinct programmes that should be considered separately:
- Springboard awards provided funding to support early career biomedical scientists based in eligible higher education institutes within the UK.
- The Future Leaders – African Independent Research (FLAIR) programme provided postdoctoral fellowships for African early career researchers at sub-Saharan African institutions. It is distinct from other GCRF programmes in awarding funding directly to African fellows and their host institutions, and so was among the few GCRF investments led from the Global South.
Findings from the quantitative summative analysis
In this section, we present findings from the summative analysis of 2024 and 2025 VfM assessments covering 81 GCRF awards.[footnote 5]
Evidence from the summative sample suggests that GCRF met, and in some cases exceeded, VfM performance standards, with 89% of awards rated as having adequate, good or excellent performance across Economy, Efficiency and Effectiveness. Performance for each dimension is shown in Figure 2. Insufficient evidence for Cost-effectiveness (not pictured in the figure) precluded a dimension-level summary. This indicates that, overall, the sample generated value that met, and in some cases exceeded, the level of resource invested. This provides assurance that the assessed awards deliver VfM.
In assessing Cost-effectiveness, we found that total investment from non-GCRF sources was, on average, 4.0 times the GCRF investment; however, this figure is sensitive to sampling effects and should be interpreted cautiously. A sensitivity analysis found that the additional investment secured by the sample ranged from 1.0 to 5.1 times the initial GCRF investment in the portfolio, indicating how much this figure varies depending on the sample of awards chosen.
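The kind of leverage calculation and sensitivity check described above can be sketched as follows. The per-award figures and the resampling approach are hypothetical; the report does not specify the exact sensitivity method used, so this is an illustration rather than a reproduction of the analysis.

```python
# Illustrative sketch with made-up figures; not the evaluation's actual data or method.
import random

# Hypothetical per-award figures (£): GCRF investment and non-GCRF investment secured.
awards = [
    {"gcrf": 250_000, "non_gcrf": 900_000},
    {"gcrf": 500_000, "non_gcrf": 2_600_000},
    {"gcrf": 120_000, "non_gcrf": 150_000},
    {"gcrf": 800_000, "non_gcrf": 3_100_000},
]


def leverage_ratio(sample):
    """Portfolio-level leverage: total non-GCRF investment per £1 of GCRF funding."""
    total_gcrf = sum(a["gcrf"] for a in sample)
    total_non_gcrf = sum(a["non_gcrf"] for a in sample)
    return total_non_gcrf / total_gcrf


print(f"Point estimate: {leverage_ratio(awards):.1f}x")

# One possible sensitivity check: resample the portfolio with replacement and
# observe how much the ratio moves when the mix of awards changes.
random.seed(0)
ratios = [leverage_ratio(random.choices(awards, k=len(awards))) for _ in range(1000)]
print(f"Sensitivity range: {min(ratios):.1f}x to {max(ratios):.1f}x")
```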
Figure 2: VfM performance of 81 awards across Economy, Efficiency and Effectiveness[footnote 6]
The average performance at the subdimension level across all awards is adequate (2), except for investment in equality, diversity and inclusion (EDI) processes (SD1.3) – a subdimension of Economy – where the average score in the summative sample was poor. This relatively consistent performance across rubric subdimensions indicates that, overall, GCRF awards have successfully leveraged inputs and investments and transformed them into activities, outputs and outcomes consistent with GCRF’s aims.
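As a sketch of how per-subdimension averages of this kind can be computed while excluding non-scored ratings, the snippet below uses hypothetical scores; it illustrates the aggregation described here rather than the evaluation’s actual analysis code.

```python
# Illustrative sketch with hypothetical scores; ratings of 'not applicable' or
# 'insufficient evidence' are represented as None and excluded from averages.
from statistics import mean

portfolio = {
    "award_A": {"SD1.1": 3, "SD1.3": 1, "SD3.1": 4},
    "award_B": {"SD1.1": 2, "SD1.3": None, "SD3.1": 3},
    "award_C": {"SD1.1": 3, "SD1.3": 1, "SD3.1": 2},
}

for sd in sorted({sd for scores in portfolio.values() for sd in scores}):
    scored = [s[sd] for s in portfolio.values() if s.get(sd) is not None]
    avg = mean(scored)
    note = " (below the 'adequate' standard of 2)" if avg < 2 else ""
    print(f"{sd}: average {avg:.1f}{note}")
```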
Variation in performance by award type indicates that awards have strengths and weaknesses across VfM subdimensions.
Across the portfolio, we see that research excellence and positioning for use (SD1.1, SD2.3 and SD3.1) are strengths of GCRF awards, while investment in EDI processes (SD1.3) is a weakness. The variation at the award level, as shown in Figure 3, could reflect the differing objectives of different award types. Because research innovation, originality, and positioning for use were key GCRF objectives, strong performance in related subdimensions (SD1.1, SD2.3, and SD3.1) suggests these aims were effectively embedded in awards and contributed to value at the outcome stage. In contrast, weaker performance on EDI indicates a lack of mechanisms at the Fund and commissioning level to embed EDI in project design and delivery.
Figure 3: Average scores of 81 awards across subdimensions of Economy, Efficiency and Effectiveness[footnote 7]
A diversified funding portfolio supports fund-wide VfM.
Awards show variable performance across subdimensions while maintaining adequate performance overall. Within the summative sample, network awards perform particularly well, outperforming other award types across most subdimensions. Performance by subdimension and award type is presented in Figure 4.
Having a variety of award types allows different awards to focus on generating a range of GCRF-relevant value and supports a portfolio that, overall, generates value in line with GCRF’s value proposition. This highlights the value of a portfolio approach, ensuring that diverse award types contribute to a balanced set of outcomes that align with the Fund’s strategic aims. It also underscores the importance of aligning funding mechanisms with intended objectives.
Award types demonstrate strengths and weaknesses across VfM subdimensions. Network awards consistently perform better across most VfM subdimensions, while early/mid-career awards tend to score lower, particularly in areas related to Economy and Efficiency. A score of 2 corresponds to an ‘adequate’ rubric rating. Subdimensions are defined in Table 1.
Figure 4: Average scores of 81 awards across Economy, Efficiency and Effectiveness subdimensions by award type.[footnote 8]
Drivers of VfM in GCRF
In this section, we present findings from a qualitative analysis conducted as part of the 2025 VfM assessment of 31 awards, which enabled us to identify drivers of good VfM in GCRF.
Stakeholder engagement strategies and activities enable excellent research that is well positioned for use
Awards with stronger investment in research innovation and originality (SD1.1) often generated research positioned for uptake and wider use by intended audiences and stakeholders (SD3.1). Early stakeholder engagement, interdisciplinarity and cross-sectoral research enabled research relevance. Awards that integrated LMIC expertise in defining challenges and solutions tended to produce outputs that were more applicable and widely used. Awards with dedicated communication plans, engagement strategies and higher funding often performed better in translating research into outputs usable by wider audiences. This underscores the need to provide dedicated resources for an inclusive and equitable approach to setting research agendas through effective stakeholder engagement.
In addition, awards that focus on networking activities, including non-network awards which prioritise stakeholder engagement through co-design, consultation exercises, workshops and other engagement strategies, typically have strong performance across the VfM rubric. Notably, network awards outperform other award types in many subdimensions, underscoring the value generated by an explicit focus on stakeholder engagement. Such awards also appear to offer good VfM at lower funding levels, emphasising their value as a useful complement to other award types within a portfolio approach.
Early investment in equitable partnerships can lead to more sustainable partnerships
Sustainable partnerships (SD3.2) were linked to early investment in equitable collaboration (SD1.4, SD2.2), including co-design, shared decision making and equitable responsibilities across award activities. Awards with strong post-award partnerships also showed these early investments, fostering long-term collaboration. Longer award duration and higher funding did not consistently enable sustainable partnerships, although network awards were an exception where increased funding supported more equitable collaboration.
Larger and longer awards are better placed to improve challenge-oriented capabilities
There is no clear link between investment in LMIC capacity building (SD2.1) and performance in challenge-oriented capabilities (SD3.3). Higher ratings were more common in well-funded, long-duration innovation grants and early and mid-career awards, suggesting that time and resources are key enablers of capacity building. Larger-scale and network-based awards also performed well, benefiting from shared resources, flexible funding and longer project durations. However, some awards with lower capacity-building investment (SD2.1) still performed well in challenge-oriented capabilities (SD3.3), suggesting that strategic partnerships and external funding also play a role.
Investment in equality, diversity and inclusion (EDI) processes can support more equitable practices and sustainable partnerships
While investment in EDI was considered a weakness across the portfolio of awards included in the VfM assessment, we found that awards with clear EDI strategies (SD1.3), equitable funding arrangements (SD2.2) and inclusive decision-making and capacity-building efforts (SD1.4) demonstrated better performance in EDI-related subdimensions. Investment in EDI was correlated with stronger performance across the VfM rubric, including positioning for use, challenge-oriented capabilities and stakeholder networks. Higher funding supported better EDI outcomes, and longer projects tended to build more sustainable partnerships, while shorter, lower-funded projects struggled to sustain EDI efforts. Early EDI investment often led to more equitable outcomes, but overall, EDI was not a strong initial focus in most GCRF projects.
Clear structures that enable cross-sectoral collaboration play a role in supporting sustainable partnerships
Investment in interdisciplinarity and cross-sectoral research (SD1.2) alone did not ensure enhanced stakeholder networks (SD3.4). High-performing awards focused on cross-sectoral engagement and structured networking; weaker ones remained limited to academia, with little external collaboration. No clear link emerged between investment in interdisciplinarity and cross-sectoral research (SD1.2) and sustainable partnerships (SD3.2), but high-scoring awards shared strong management structures, cross-sectoral collaboration and proactive engagement, reinforcing the role of structured processes in sustaining partnerships.
Learnings from GCRF VfM assessments to date
Learnings from GCRF VfM assessments are intended to support UK government funders of ODA R&I, as well as wider communities of practice, in evidencing the VfM of research for development investments and in advancing VfM assessment approaches.
Performance in Effectiveness, representing the value generated by the Fund’s outputs and early outcomes, is expected to improve as the Fund matures.
The 2025 sample shows improved performance in Effectiveness compared to Year 4. Differences in the timing of the assessments are one factor, with the 2025 assessment capturing maturing outcomes, in line with expectations that GCRF awards will generate value over time. The 2025 assessment also drew on an enhanced evidence base, with supplemental outcome evidence from evaluation activities. This richer evidence base illuminated the drivers of improved performance in Effectiveness but also highlighted a broader issue: such comprehensive data is unlikely to be captured through existing reporting systems. To systematically assess the Effectiveness of awards, future funds should invest in enhanced data collection systems that provide a more complete and ongoing record of research uptake and outcomes.
In addition to the limited data available to assess Effectiveness, there is little evidence to support the assessment of Cost-effectiveness.
Owing to differences in endline award reporting processes across POs, evidence of follow-on and co-funding was patchy. The main source of information to assess Cost-effectiveness was Gateway to Research, a platform where UKRI-funded projects report on performance, including quantitative evidence of follow-on funding at the award level. However, Gateway to Research has several limitations. First, it only covers UKRI awards and is self-reported, typically by UK-based PIs as part of the ResearchFish return, likely limiting data quality and comprehensiveness. Second, self-reporting also makes it difficult to attribute follow-on funding directly to specific awards. Importantly, because Gateway to Research is a UK-focused system, it was a poor evidence source for follow-on funding secured by LMIC PIs and co-investigators (Co-Is), severely limiting assessment of the extent to which LMIC researchers leveraged GCRF funding to support future research.
Better VfM in ODA R&I funds relies on building a culture of VfM in UK R&I funders.
The findings from this study, and those from similar studies, provide important evidence of the VfM currently delivered by ODA R&I funds. These studies provide valuable learnings on how greater VfM can be generated and how R&I ecosystems can be better equipped to assess VfM in future funds. Measuring VfM in ODA R&I is complex due to the diverse nature of the intended outcomes and impacts from these investments; the fact that many of these are not easily quantified; and the time lags between investment and realisation. The approach used here, which brings together evidence from the evaluation with expert insights to reach a shared understanding and assessment of VfM that does not oversimplify the complexity of the investment and its outcomes, sets a precedent that could be relevant to future ODA R&I investments. Evidence from this GCRF evaluation VfM module indicates that, overall, the Fund delivers VfM, but generating greater VfM in ODA R&I funds depends on bringing insights about the drivers of VfM into commissioning, ensuring that the foundations for generating value are resourced through implementation, and then gathering consistent data on results, outcomes and follow-on investments.
Conclusions and recommendations
Overall, evidence across GCRF VfM assessments provides assurance that the portfolio of awards delivers VfM, with different types of awards contributing different kinds of value to the Fund. While an adequate performance average provides assurance that the Fund has met VfM performance standards, it also indicates scope for future funds to drive value generation that, on average, exceeds the investment made. We also note scope for improvement – particularly around EDI practices – and identify key drivers of good VfM, such as a focus on stakeholder engagement. Therefore, as part of the learning function of this assessment, we have identified recommendations for driving VfM in future ODA funds.
The summative analysis showed the value of a strategic portfolio approach to funding in ensuring fund-wide VfM.
Recommendation: DSIT should ensure that future ODA research investments align award types with specific Fund objectives, considering the ways in which these award types may be complementary (for example, network awards for fostering collaboration and innovation grants for translational research). This requires a portfolio analysis and strategy to maintain a diverse portfolio and address the breadth of the Fund’s strategic aims.
Qualitative analysis highlighted that early and sustained stakeholder engagement is key to translating research into high-quality, relevant outputs and broader outcomes.
Recommendation: To facilitate early stakeholder engagement in ODA research funds, DSIT should consider implementing small grants to ‘spin up’ projects, providing resources at the stage of problem definition. Larger award sizes may also be considered to sustain stakeholder engagement throughout the research life cycle, providing resources to employ dedicated engagement specialists, host events and expand research networks.
Improvements in EDI practices such as investment in co-design and equitable balance of research funding would support better VfM performance.
Recommendation: DSIT should ensure that future ODA research funds make EDI strategic priorities clear in funding calls, providing support to award holders in integrating EDI considerations into their work and requiring justification for any funding allocation where less than 50% is directed to LMIC partners.
Enhancing challenge-oriented capabilities requires dedicated but flexible resources and sufficient time to realise benefits.
Recommendation: Where capacity building is a key objective, DSIT should consider longer funding durations for ODA research and dedicated, flexible resources, such as ringfenced funds for capacity building activities, that can be tailored to the emerging needs of the research team and local research ecosystem. Strengthening mechanisms for follow-on funding and institutional support will improve the sustainability of challenge-oriented capabilities in LMICs.
Awards that focus on networking and stakeholder engagement appear to offer particularly good VfM, in line with GCRF’s value proposition.
Recommendation: DSIT should ensure that future ODA research funds consider dedicated networking awards within the funding portfolio, as well as allocating additional resources for networking and stakeholder engagement activities within other award types.
Current reporting systems do not capture the comprehensive data needed to make a proper VfM assessment.
Recommendation: DSIT and POs should consider strengthening systematic data collection efforts and end-of-award reporting to ensure comprehensive tracking of the outputs and outcomes of ODA-funded R&I projects. This should include enhanced tracking mechanisms for follow-on funding to support more comprehensive collection of funding secured by researchers, including LMIC PIs and Co-Is.
Enhancing VfM in ODA R&I funds requires embedding a culture of VfM within UK R&I funders.
Recommendation: DSIT and ODA R&I funders should aim to develop an understanding of the drivers of VfM in R&I funding in general, as well as of the specific drivers for each programme based on its value proposition. This would help embed the recognition that VfM extends beyond the monetisable outcomes of a programme.
Footnotes

1. Standalone findings from the 2024 VfM assessment are available from: DSIT. ‘Global Challenges Research Fund: value for money assessment’ 2025 (viewed on 22 May 2025)

2. DFID. ‘DFID’s approach to Value for Money (VfM)’ GOV.UK 2011 (viewed on 14 February 2025)

3. Challenge-oriented capabilities: the ability to design, implement and manage research and innovation projects focused on addressing real-world challenges.

4. Academy of Medical Sciences. ‘Springboard’ 2024 (viewed on 14 February 2025); The Royal Society. ‘FLAIR Fellowships’ 2024 (viewed on 14 February 2025)

5. Adaptations in the rubric across VfM assessments limit the summative analysis to Years 4 and 5.

6. Subdimensions of Equity are integrated throughout Economy, Efficiency and Effectiveness (see Table 1) and are therefore not analysed as a standalone dimension of the rubric. Overall performance in Cost-effectiveness could not be determined due to limited evidence.

7. 0-1 = Poor; 1-2 = Adequate; 2-3 = Good; 3-4 = Excellent

8. 0-1 = Poor; 1-2 = Adequate; 2-3 = Good; 3-4 = Excellent