A year in metascience (2025): executive summary
Published 30 June 2025
We are the UK Metascience Unit. We are a small team of policymakers, analysts, and funding delivery specialists spanning the central government Department for Science, Innovation and Technology (DSIT) and the UK's largest public R&D funding body, UK Research and Innovation (UKRI).
Our work starts from a simple idea: that the scientific method should be applied to the systems, policies and processes of science itself, so that we can improve them.
We aspire to conduct bold experiments on UKRI’s processes; cultivate a thriving UK metascience research community; advance global metascience research in key areas through large, coordinated multi-actor research projects; and contribute to an analytically enriched UKRI and DSIT.
In our first year of existence, we have set many things in motion:
- We are funding 23 UK-led international collaborative projects on a broad range of metascience topics through a £5 million research grant call, co-funded with Open Philanthropy.
- We are funding 18 early career fellows to look at how AI is changing science, alongside a parallel international cohort funded by the Alfred P. Sloan Foundation and the Social Sciences and Humanities Research Council of Canada.
- We opened up UKRI award data to external researchers at scale for the first time, funding 5 multi-university collaborations, co-designed with UKRI, that are aimed at learning more about our funding system and how UKRI can better target its portfolio.
- We are running a global competition to find and validate new AI-driven indicators of scientific novelty.
- We are running a major study observing the cohort of UKRI's Cross-Council Responsive Mode scheme, to understand what works (and what doesn't) in interdisciplinary research.
We already have some powerful insights to share from this year's experiments focused on improving the performance of the peer review system:
- We successfully trialled 'Distributed Peer Review' – a radical new way of running peer review assessment whereby applicants review each other. We found that it shortened the assessment and decision-making process by around 3 months (53–65% faster than typical UKRI calls) and significantly reduced the burden on UKRI staff. Of applicants surveyed, 84% agreed that the process expanded their knowledge of the field, and 88% said it improved their grant-writing skills.
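The core mechanic of distributed peer review – every applicant reviews several other proposals, never their own, and every proposal receives the same number of reviews – can be sketched with a simple circular-shift assignment. This is an illustrative construction, not UKRI's actual allocation algorithm; the function name and the parameter `k` (reviews per applicant) are assumptions for the sketch.

```python
import random

def assign_reviews(applicants, k, seed=0):
    """Illustrative distributed-peer-review assignment: each applicant
    reviews k other proposals, never their own, and each proposal
    receives exactly k reviews, via a shuffled circular shift."""
    rng = random.Random(seed)
    order = applicants[:]
    rng.shuffle(order)  # randomise who ends up adjacent to whom
    n = len(order)
    assert 0 < k < n, "need more applicants than reviews per person"
    assignments = {a: [] for a in order}
    # Shift by 1..k positions around the circle; since shift < n,
    # no one is ever assigned their own proposal.
    for shift in range(1, k + 1):
        for i, reviewer in enumerate(order):
            assignments[reviewer].append(order[(i + shift) % n])
    return assignments
```

A real deployment would add conflict-of-interest screening and expertise matching on top of this skeleton, but the balanced load (k reviews given, k reviews received) is what drives the time savings reported above.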
- The unique dataset created by running distributed peer review gave a rare opportunity to study reviewer consistency. We found that adding reviewers significantly increases the consistency of scoring. Using an Intraclass Correlation Coefficient measure (0–1 range, where higher values indicate greater agreement), scores moved from 0.4 with 3 reviewers per application to 0.7 with 9.
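As a back-of-envelope check (not part of the report's analysis), the reported figures are roughly consistent with the classical Spearman–Brown prophecy formula, which predicts the reliability of a mean of k reviewers from the single-reviewer ICC:

```python
def spearman_brown(icc_single, k):
    """Reliability of the mean score of k reviewers, given the
    single-reviewer intraclass correlation (Spearman-Brown formula)."""
    return k * icc_single / (1 + (k - 1) * icc_single)

# Back out the implied single-reviewer ICC from the 3-reviewer value:
#   0.4 = 3r / (1 + 2r)  =>  r = 0.4 / (3 - 0.8) ~ 0.18
r = 0.4 / (3 - 2 * 0.4)
print(round(r, 2))                      # 0.18
print(round(spearman_brown(r, 9), 2))   # 0.67, close to the observed 0.7
```

In other words, an individual reviewer agrees only weakly with their peers (ICC of roughly 0.18), and the observed gain from 3 to 9 reviewers is about what classical reliability theory would predict from averaging more noisy judges.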
- We ran simulations and trials of 'Partial Randomisation' – a tweak to the grant peer review process whereby 'fundable' applications near the budget cut-off are subject to a lottery. We argue that the current weight of evidence is insufficient to suggest that partial randomisation is a highly impactful and effective tool in funding policy.
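The mechanism being tested can be sketched as follows. This is an illustrative toy, not UKRI's trial design: the function name, the symmetric `band_width` around the cut-off, and the higher-score-is-better convention are all assumptions made for the sketch.

```python
import random

def partial_randomisation(apps, budget, band_width, seed=None):
    """Illustrative partial randomisation: fund applications clearly
    above the cut-off outright, then fill the remaining budget by
    lottery among 'fundable' applications near the cut-off.
    `apps` is a list of (app_id, score) pairs, higher score = better."""
    rng = random.Random(seed)
    ranked = sorted(apps, key=lambda a: a[1], reverse=True)
    # Score of the last application that strict ranking would fund.
    cutoff_score = ranked[budget - 1][1]
    # Outright awards: clearly above the lottery band.
    funded = [a for a in ranked if a[1] > cutoff_score + band_width]
    # Lottery band: fundable applications within band_width of the cut-off.
    band = [a for a in ranked
            if a not in funded and a[1] >= cutoff_score - band_width]
    rng.shuffle(band)
    funded += band[: budget - len(funded)]
    return funded
```

The policy question the simulations probe is whether randomising within the band (where reviewer scores cannot reliably distinguish applications anyway, given the ICC figures above) changes outcomes enough to justify the change in process.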
- We ran an internal study looking at whether reviewers can be convinced to participate in grant assessment processes using motivational nudges.