Rapid evidence assessment of valuation methods for civil society
Published 11 July 2025
Executive Summary
This rapid evidence assessment (REA) reviewed 56 studies that applied valuation techniques to civil society interventions. The most common methodologies were social return on investment (SROI) and cost-benefit analysis (CBA), followed by cost analysis, contingent valuation, wellbeing valuation, QALY and travel cost. These methods were applied across civil society subsectors including adult learning, advice services, domestic violence, employment and training, faith, food support, fuel poverty, housing and homelessness, health, loneliness, nature, sport, innovation, youth services and youth work.
The REA found that estimated values for services, activities, outputs, and outcomes in civil society varied significantly across subsectors and methodologies. CBA and SROI studies produced diverse benefit estimates, while wellbeing and contingent valuation studies provided insights into individuals’ willingness to pay for services. Despite their usefulness, these methods faced limitations, including data constraints, reliance on assumptions, and challenges in capturing long-term effects. The findings underscore the importance of selecting appropriate valuation methodologies and ensuring robust data collection to improve accuracy and comparability. While many studies demonstrated methodological transparency, some lacked replicability due to undisclosed financial proxies or insufficient data documentation. Robustness varied across subsectors, with housing, homelessness, employment, and health interventions often featuring sensitivity analyses to test assumptions, emphasising the need for methodological rigour to ensure that valuation estimates can reliably inform policy and funding decisions.
The research team also examined the methodologies used to understand the drivers behind estimated values. CBA and SROI studies typically followed a four-step process: mapping outcomes based on stakeholder input, identifying appropriate indicators, surveying participants to establish impact, and monetising outcomes using financial proxies. However, these studies often focused on benefits that were easier to quantify, such as avoided costs in public services, and sometimes omitted benefits lacking robust evidence. While many studies used a top-down approach to estimate costs based on population characteristics, a bottom-up approach was more common for specific service areas like housing support. Other methodologies, such as wellbeing valuation and contingent valuation, relied on statistical analysis of longitudinal datasets or user willingness-to-pay surveys but often lacked detailed exploration of the factors influencing values. Some studies did not disclose financial proxies, raising concerns about replicability.
The quality of studies applying valuation methodologies varied, with 41% of studies reviewed by the research team classified as high quality, 38% as medium quality, and 21% as low quality. SROI studies were generally of higher quality, with differentiation mainly based on the use of ex-ante and ex-post observations and sensitivity analysis. Many studies lacked comparator groups, particularly those with retrospective commissioning. CBA studies often fell short of best practices, with gaps in discounting, distributional analysis, and sensitivity testing. Higher-quality contingent valuation studies clearly stated regression models, conducted pre-experiment testing, and included sensitivity analysis. Sample size was not formally assessed as a quality criterion, but four studies with fewer than 100 respondents raised concerns about representativeness.
Valuation techniques in civil society face several limitations. CBA and SROI struggle to accurately capture non-market benefits, such as wellbeing and social cohesion, often relying on financial proxies that may not fully reflect their true value. While CBA uses standardised economic valuations, SROI is more stakeholder-driven, making cross-project comparisons challenging. Both methods also rely on avoided cost methodologies, which can underestimate societal benefits. Contingent valuation and wellbeing valuation, while effective in capturing user preferences and non-market benefits, are prone to biases. Contingent valuation is susceptible to hypothetical, anchoring, and information biases, while wellbeing valuation faces issues of scaling, salience, and selection bias. Travel cost methodology faces challenges when applied to the civil society sector due to data collection difficulties and recall bias. Each technique has inherent trade-offs, and its applicability depends on the specific context and benefits being assessed.
The most appropriate valuation technique depends on the specific service, available resources, and intended use of results. CBA and SROI are suitable when project resources are limited or when a strong evidence base allows for reliable benefit transfers. SROI is particularly useful for CSOs focusing on wellbeing and social outcomes, while wellbeing valuation is best used alongside CBA or SROI for measuring personal impacts. Primary research methods, such as revealed and stated preference techniques, are preferable when existing evidence is insufficient and additionality is difficult to establish. While revealed preference methods are theoretically superior, they are rarely feasible in the civil society sector due to data constraints. Stated preference methods like contingent valuation offer flexibility in capturing user valuations. Travel cost methodology is most appropriate for services provided for free, where direct willingness-to-pay surveys may generate high protest responses. The selection of a valuation technique should also consider comparability with previous research to ensure the findings remain useful for decision-makers.
This report also presents three case studies, each outlining a proposed approach for calculating value estimates for three distinct civil society services: food banks, job training provision, and financial advice services. For food banks, the travel cost method is recommended, valuing societal impact based on user behaviour, such as travel distance and costs. The job training provision case study proposes CBA to quantify economic and social benefits, including wage progression, employment rates, and public spending savings. Lastly, for advice services, social CBA is suggested to assess the broader social and economic benefits of financial and debt advice services, such as improved mental health and reduced reliance on welfare. Each case study highlights specific challenges and proposes methods to enhance the societal value of these services.
Background and Approach
Introduction
The Department for Culture, Media and Sport (DCMS) commissioned research to enhance understanding of valuation methods for services and activities delivered by civil society organisations (CSOs). The project involved conducting a Rapid Evidence Assessment (REA) to critically appraise existing studies and develop recommendations for future valuation work aimed at developing or improving monetary estimates of civil society services, outputs or outcomes. The findings from the REA, presented in this report, aim to inform more accurate and consistent valuations, thereby strengthening the economic appraisal of CSO contributions to social and economic outcomes.
Rationale and objectives
The research team had two main objectives: (i) to identify, collate, and critically appraise valuation methodologies applied to the services, activities, and outcomes delivered by CSOs, and (ii) to provide actionable recommendations for developing or improving monetary estimates for a subset of priority CSO services or outcomes. To meet these objectives, the REA synthesised the literature on valuation methods such as contingent valuation, wellbeing valuation and hedonic pricing, among others, to understand their applicability and limitations within the civil society context.
The REA addresses a critical gap in the evidence base surrounding the valuation of civil society activities, many of which deliver substantial benefits that are not fully captured through traditional economic measures. Unlike market-driven sectors, the value of services provided by CSOs, ranging from homelessness support to mental health interventions, is often difficult to monetise due to their subsidised or free nature. Consequently, the true economic and social contributions of these organisations risk being undervalued, leading to suboptimal decisions in funding and resource allocation. By improving the accuracy and consistency of valuation estimates, the findings from the REA will help ensure that the unique contributions of civil society are better recognised and reflected in decision-making processes, enabling more effective investment in this vital sector.
Methods
The research team selected a rapid evidence assessment as the methodology for the evidence review to maximise the relevance of findings under the agreed time and resource constraints. The search strategy, which spanned both academic and grey literature, allowed the research team to prioritise research from a variety of sources using a transparent and well-defined protocol. Appendix 3 shows the PRISMA diagram, which outlines the study selection process, including identification, screening, and inclusion of the studies in the review.
This section sets out the research questions, search strategy and inclusion/exclusion criteria that were used to decide whether retrieved studies fit into the evidence review. The REA search protocol, detailed in Appendix 1, was used to develop a “longlist” of relevant studies, whose titles and abstracts were then screened against the inclusion and exclusion criteria. The screening process was used to create a “shortlist” of studies read in detail, with relevant information from each study recorded in a Research Extraction Sheet (RES). The information recorded was then synthesised, producing a summary of the evidence related to the research questions.
Research questions
This research aimed to answer the following questions:
- What different valuation techniques have been applied to services, activities, outputs, and outcomes typically delivered through civil society?
- What are the values estimated for these services, activities, outputs and outcomes? How robust are these values?
- How have the studies attempted to understand what is driving these values?
- What is the quality of individual studies that have developed/applied these techniques?
- What are the limitations of the different types of valuation techniques applied to the civil society sector?
- Within this context, are there certain circumstances that make a valuation technique preferable over others? If this is the case, what are these circumstances, and what is the preferred technique?
- What are the options for calculating and calibrating estimates of value for a selection of services or outcome case studies? What (primary or secondary) data will be required to undertake this analysis? What are the opportunities, challenges and limitations of these approaches?
Detailed Findings
The following section describes the key insights from the reviewed literature, organised by research question.
1. What different valuation techniques have been applied to services, activities, outputs, and outcomes typically delivered through civil society?
The rapid evidence assessment (REA) included 56 studies that implemented a range of valuation methodologies to measure outcomes across different subsectors, programmes, organisations, and interventions within civil society. A breakdown of these studies by methodology and civil society subsector is presented in Tables 1 and 2.
Table 1: Studies included in the REA by methodology[footnote 1]
Methodology | Number of studies
---|---
Social return on investment | 23
Cost-benefit analysis | 18
Cost analysis | 5
Contingent valuation | 4
Wellbeing valuation | 4
QALY | 1
Travel cost | 1
Table 2: Studies included in the REA by civil society subsector
Civil society subsector | Number of studies
---|---
Physical and mental health | 10
Housing and homelessness support | 9
Sport and recreation | 9
Employment and training | 6
Domestic violence | 4
Youth work | 3
Food support | 3
Advice services | 2
Loneliness | 2
Fuel poverty | 2
Other | 6
Social return on investment (SROI)
SROI is a framework used to measure and evaluate the economic, social and environmental value created by an activity, intervention or organisation (Davies et al. 2018). Widely used across the UK, SROI translates social outcomes into monetary terms, allowing organisations and governments to understand and quantify the impact of their activities, which supports informed decision-making and more effective spending (Envoy Partnership 2014). This valuation methodology considers factors such as additionality, deadweight, and displacement to provide a comprehensive and accurate assessment of social value, making it a valuable tool for improving resource allocation and driving positive social change (YouthLink Scotland 2016; Baraki and Lupton-Paez 2021; Hartfiel et al. 2023).
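The adjustment factors described above compound before the ratio is taken, which can be sketched as a short calculation. The function and all figures below are hypothetical and not drawn from any reviewed study:

```python
def sroi_ratio(gross_value, deadweight, displacement, attribution, investment):
    """Adjust a gross monetised outcome for deadweight (what would have
    happened anyway), displacement (benefits shifted from elsewhere) and
    attribution (the share owed to other actors), then divide by the
    investment to obtain social value created per pound invested."""
    net_value = gross_value
    net_value *= (1 - deadweight)    # outcomes that would have occurred anyway
    net_value *= (1 - displacement)  # benefits displaced from elsewhere
    net_value *= (1 - attribution)   # share attributable to other actors
    return net_value / investment

# Hypothetical programme: £500,000 of gross monetised outcomes, £120,000 invested
ratio = sroi_ratio(500_000, deadweight=0.25, displacement=0.10,
                   attribution=0.15, investment=120_000)
print(f"SROI: £{ratio:.2f} of social value per £1 invested")
```

Real SROI studies map many outcomes to financial proxies and apply drop-off over several years; the sketch shows only how the standard adjustment factors interact.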
In the REA, a total of 23 studies applied SROI to measure the net impacts of interventions, organisations and broader civil society sectors. Five of these focused on sport and recreation:
- Baker et al. (2017) evaluated the Active Together programme, which aimed to increase participation in sport and physical activity across Gloucestershire County Council.
- Davies et al. (2018) measured the social impact of sport and physical activity in England for the period 2017-2018.
- Davies et al. (2021) assessed the impact of twelve community sports and leisure centres in Sheffield.
- Lawlor and Neitzert (2021) conducted the evaluation of the Healthy Club programme, which sought to promote health and wellbeing in Ireland.
- Koning et al. (2022) measured the value of sport and physical activity in the Netherlands for the year 2020.
Five studies focused on physical and mental health:
- Arvidson et al. (2014) examined the net benefits generated by the Community Befriending programme, which provides services to families affected by post-natal depression in Birmingham.
- Goodspeed (2014) evaluated Turning Point’s Substance Misuse Services in Wakefield, which supports individuals recovering from addictions.
- Envoy Partnership (2014) assessed the benefits of the Community Champions programme, a volunteer network promoting health and wellbeing practices within local communities in London.
- Bertotti and Temirov (2020) analysed the City and Hackney Social Prescribing scheme, in which healthcare professionals in London refer patients to social prescribing link workers to assess non-medical needs and connect them with community-based services.
- Hartfiel et al. (2023) assessed the impact of nature-based interventions on adults with mental health challenges in Wales.
Four studies focused on employment and training:
- Goodspeed (2009) forecasted the social return of Workwise activities, a social enterprise in Suffolk providing employment training for adults with long-term mental health conditions.
- Walk et al. (2015) evaluated a job and skills training programme in Toronto, Canada, targeted at an unemployed, predominantly female population.
- Lawlor et al. (2019) focused on analysing the Tinder Foundation’s Future Digital Inclusion Programme, which promotes digital inclusion across the UK.
- Hatch Regeneris (2020) evaluated the WeMindTheGap programme, which supports young people aged 18–30 in North Wales and the North West with a holistic initiative combining work experience, skills training, new experiences, and mental and emotional support.
Two studies focused on domestic violence support services:
- Baraki and Lupton-Paez (2021) assessed the value generated by Refuge, a UK charity providing services for survivors of domestic abuse, with a primary focus on the impact achieved through survivor interventions.
- Dowrick et al. (2022) evaluated the IRIS programme by measuring the value of investing in primary health services in five UK sites as part of a pathway for identifying and supporting patients affected by abuse.
The remaining studies focused on other subsectors, including youth work, advice services, food support, community and fuel poverty:
- Youth work – YouthLink Scotland (2016) analysed the social and economic value of the youth work sector across Scotland, examining financial savings and social benefits.
- Advice services – Granger et al. (2024) evaluated the Citizens Advice on Prescription (CAP), a Liverpool-based service which provides welfare advice and link worker social prescribing support for individuals experiencing or at risk of financial and social hardship.
- Food support – Courtney (2014) assessed the Local Food programme, a UK-wide funding programme which distributes Big Lottery Fund grants to food-related projects aimed at increasing access to locally grown, affordable food.
- Food support – Roussos et al. (2024) assessed the impact of the first two community supermarkets in Essex.
- Community – Courtney (2018) examined the impact of four social purpose organisations (SPOs) in Gloucestershire, focusing on the Gloucester City Centre Community Partnership’s Fielding and Platt project, which aimed to bring together former employees of the Fielding and Platt foundries and their families.
- Fuel poverty – Nolden et al. (2021) captured the value of South East London Community Energy (SELCE), a non-profit social enterprise providing fuel poverty alleviation services and community-owned renewable energy schemes.
- Fuel poverty – Oxford Economics (2024) evaluated the operations of the British Gas Energy Trust, a charitable trust funded by British Gas that aims to combat fuel poverty through direct support to households and organisational grants to charitable groups.
Cost-benefit analysis (CBA)
Cost-benefit analysis (CBA) is a methodology used to compare the costs and benefits of a project or intervention over time, aggregating and discounting them to present value terms to determine whether a programme or project generates a net return (Indecon 2012). Costs are often measured directly (based on analysis of expenditures), while benefits can be measured through equivalent market prices, generic prices, revealed preference, willingness to pay, willingness to accept, wellbeing or estimation of a central reference value and range (HM Treasury 2022).
By quantifying both costs and benefits in monetary terms, CBA enables the assessment of whether the benefits of a specific policy or programme outweigh the costs, thus supporting decision-making and evaluating the overall value for money. If the CBA methodology is extended to cover the full spectrum of costs and benefits to welfare, including social and environmental impacts, this is known as social cost-benefit analysis. Social CBA is commonly applied in the civil society sector, providing a systematic approach for making informed, cost-effective decisions in these sectors (HM Treasury 2022).
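The discounting and aggregation step can be sketched in a few lines. The 3.5% rate is the Green Book's central discount rate, but the programme, flows and horizon below are illustrative assumptions:

```python
def present_value(flows, rate=0.035):
    """Discount annual flows (year 0 first) to present value terms.
    0.035 is the Green Book central discount rate."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

# Hypothetical programme: £200,000 cost up front, then £60,000 of
# benefits a year for five years.
costs = [200_000, 0, 0, 0, 0, 0]
benefits = [0, 60_000, 60_000, 60_000, 60_000, 60_000]

pv_costs = present_value(costs)
pv_benefits = present_value(benefits)
npv = pv_benefits - pv_costs   # net present value
bcr = pv_benefits / pv_costs   # benefit-cost ratio
```

A social CBA would extend `benefits` to include monetised social and environmental impacts alongside fiscal ones.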
A total of 18 studies across various civil society subsectors applied cost-benefit analysis as their primary evaluation method. Of these, seven focused on housing and homelessness support services:
- Oxera (2013) assessed the impact of Centrepoint, a charity supporting young people (between 16 and 25) facing homelessness through housing assistance and tailored support services in London and the North East.
- Bretherton and Pleace (2015) assessed the effectiveness of the Housing First intervention in reducing homelessness across nine services in England.
- PwC (2018) analysed the costs and benefits of Crisis’ Plan to End Homelessness, which set out recommended solutions to address and prevent homelessness in Great Britain.
- Standing Together and Solace Women’s Aid (2022) conducted a second-year evaluation of the Westminster Violence Against Women and Girls (VAWG) Housing First project, which provides independent housing for women who have experienced long-term homelessness and violence.
- Johnsen et al. (2023) assessed the Housing First Pathfinder programme in Scotland, designed to support individuals with complex needs and histories of homelessness.
- MHCLG (2024) evaluated the three Housing First pilot programmes in Greater Manchester, Liverpool, and the West Midlands, running from 2018 to 2023.
- Pleace and Culhane (2016) explored the human and financial costs of homelessness and how enhancing preventative measures can lead to decreased public expenditure.
Two studies focused on youth work:
- Indecon (2012) conducted an economic assessment of the youth work sector in Ireland.
- Frontier Economics and UK Youth (2022) estimated the economic value of youth work in England.
Two studies focused on mental health:
- McDaid et al. (2017) analysed eight interventions with evidence of reducing the risk and incidence of mental health problems across different age groups and promoting overall mental wellbeing.
- Knapp et al. (2011) analysed fifteen interventions across schools, workplaces, and healthcare settings aimed at promoting mental health.
Other studies included:
- Domestic violence prevention – Rogers et al. (2018) evaluated the Change Up programme in Salford, which focused on early prevention for young people associated with, involved in, or at risk of domestic violence and abuse.
- Employment and training – Gasper et al. (2020) assessed the economic value of investing in job training services in New York City.
- Sport – Ecorys (2017) evaluated GoodGym, a community-based volunteering and physical activity programme where runners provide social support for isolated older adults and assist with manual labour for community projects.
- Youth services – Eisenberg and Hutton (2016) estimated the economic value of services provided by Boys and Girls Clubs, a national non-profit organisation based in the US focusing on education, health, and citizenship.
- Faith sector – Cnaan (2009) estimated the value of economic contributions of urban American local religious congregations in Philadelphia and Wilmington.
- Advice services – Europe Economics (2018) assessed the economic impact of the personal debt advice sector in the UK.
One study spanned multiple civil society subsectors:
- Pritchard et al. (2021) evaluated the Coronavirus Community Support Fund (CCSF), which provided emergency funding to voluntary, community, and social enterprise (VCSE) organisations during the COVID-19 pandemic, assessing its cost-effectiveness in supporting vulnerable groups, sustaining community services, and retaining staff and volunteers.
Cost analysis
Cost analysis is a methodology focused on identifying, measuring, and evaluating all costs (i.e. direct, indirect and opportunity costs) associated with social issues. The estimated costs can then be compared to other cost estimates, budgets, financial reports, or potential savings. The studies referenced below provide valuable insights into the financial burden of societal issues and help justify investment in preventative interventions by quantifying potential cost savings.
This method differs from cost-benefit analysis and social return on investment as it does not compare intervention costs with a direct valuation of benefits, making it particularly useful when the benefits of a specific intervention are hard to measure (Drummond et al. 2015). Cost analysis often captures only a portion of the full economic impact. Although estimating the costs of a social problem can inform decision-making, this does not necessarily mean that addressing those costs fully represents the benefits of an intervention. Benefits extend beyond costs averted, encompassing wider social and economic gains, which are considered in SROI and CBA but not in cost analysis. For instance, in the Housing First Scotland programme, cost reductions were observed, but these were only a small component of the broader benefits generated by the intervention (Johnsen, Blenkinsopp, and Rayment 2023).
Furthermore, cost analysis is distinct from CBA because the latter classifies avoided costs (or cost savings to the exchequer) as benefits, which are not the same as total costs estimated in a cost analysis. Therefore, cost analysis estimates should not be interpreted as equivalent to the benefits estimated in a CBA. Additionally, CBA applies a structured methodology to account for deadweight, displacement, and attribution, ensuring that benefits are appropriately adjusted for what would have occurred without the intervention. Cost analysis, by contrast, does not typically incorporate these adjustments, making it less suited for assessing the net impact of an intervention. This distinction is crucial for policymakers and stakeholders to ensure that cost estimates are not misinterpreted as full economic valuations of programme effectiveness.
A total of five studies from the REA applied this methodology for economic valuation. Two studies focused on physical and mental health:
- The Sainsbury Centre for Mental Health (2007) assessed the financial burden of mental health issues on UK employers.
- Cardoso and McHayle (2022) estimated the economic and social costs of mental ill health in England.
This methodology was also applied in other civil society subsectors:
- Domestic abuse – Oliver et al. (2019) calculated the societal cost of domestic abuse in England and Wales.
- Loneliness – New Economics Foundation, The Co-op, and Jo Cox National Commission on Loneliness (2017) estimated the economic impact of loneliness on UK employers.
- Sport and recreation – CEBR (2014) assessed the financial cost of physical inactivity among young people in the UK.
Stated and revealed preference methods
As an alternative to the three synthesis methods discussed above (SROI, CBA and cost analysis), stated and revealed preference techniques can be used to directly estimate the value of non-market goods and services that lack explicit market prices (HM Treasury 2022).
Stated preference methods rely on survey-based techniques in which individuals express their willingness to pay for a good or service (or willingness to accept a loss) in hypothetical scenarios; they are widely used in the environmental, cultural and heritage economics literatures. Examples of stated preference techniques include contingent valuation and choice modelling. The advantage of stated preference methods is their ability to estimate values for goods and services that do not have market prices attached or are difficult to observe in markets.
Revealed preference methods infer values from actual behaviour and choices observed in real markets (i.e. by analysing real market prices or exchanges). Examples of revealed preference techniques include travel cost and hedonic pricing. One advantage of revealed preference methods is their reliance on real-world data, which makes them less prone to hypothetical bias, whereby individuals overestimate how much they value a specific service or outcome, making it difficult to accurately compare the impact of different policy options (Fifer, Rose and Greaves 2014). As a result, the Green Book recommends that stated preference methodologies be used only in the absence of robust revealed preference data (HM Treasury 2022).
Contingent valuation is a survey-based, stated preference method used to estimate the value of non-market goods or services. It focuses on asking individuals how much they would be willing to pay (WTP) for a specific outcome or willingness to accept (WTA) the loss of the outcome, providing insight into the value they place on goods that do not have a market price (Champ, Boyle, and Brown 2017; Crisp and Dayson 2011). Contingent valuation is particularly useful when no market data are available, as it helps assess decision utility by asking hypothetical questions about individuals’ preferences (Dolan and Fujiwara 2012).
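One common way to convert dichotomous-choice responses ("would you pay £X per year?") into a conservative mean WTP is the nonparametric Turnbull lower-bound estimator. The bids and acceptance rates below are hypothetical:

```python
def turnbull_lower_bound(bids, accept_rates):
    """Lower-bound mean WTP from single-bounded dichotomous-choice data.
    `bids` must be ascending and `accept_rates` (the share answering 'yes')
    non-increasing; probability mass between two bids is valued at the
    lower bid, and mass above the top bid at the top bid."""
    rates = list(accept_rates) + [0.0]
    return sum(bid * (rates[j] - rates[j + 1]) for j, bid in enumerate(bids))

bids = [5, 10, 20, 40]             # £ per year asked of each subsample
accept = [0.80, 0.55, 0.30, 0.10]  # share willing to pay each bid
mean_wtp_lb = turnbull_lower_bound(bids, accept)  # £11.75 per person per year
```

If observed acceptance rates are not monotonically decreasing, adjacent bid cells are pooled before applying the formula.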
A total of four studies from the REA applied this methodology for economic valuation.
- Sport and recreation – Frew et al. (2014) conducted a cost-effectiveness study of BeActive, a community-based physical activity programme for adults in Birmingham, using contingent valuation to quantify all potential benefits (both health and non-health related) of the programme.
- Adult learning – Dolan and Fujiwara (2012) used contingent valuation to assess the benefits of adult learning in the UK.
- Housing and homelessness – Loubière et al. (2020) measured willingness to pay for ending homelessness via the Housing First model, using a telephone survey in eight European countries.
- Multiple civil society subsectors – Crisp and Dayson (2011) evaluated the impact of the Middlesbrough Voluntary Development Agency in supporting the voluntary and community sector through engagement and volunteering initiatives.
Travel cost is a methodology used to estimate the economic value of goods that require travel to a site. It operates on the assumption that the time and money individuals spend travelling to a site reflect their willingness to pay for access, serving as a proxy for the value they assign to it. This willingness to pay is revealed through the number of trips made and the sites chosen (Champ, Boyle, and Brown 2017). While primarily used to value environmental goods and recreational sites, travel cost methodology is now also applied to assess outcomes in civil society.
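As a sketch of how trip data become a value estimate: under a semi-log trip demand curve, ln(trips) = a + b × cost with b < 0, a standard result in the travel cost literature is that consumer surplus per trip equals -1/b. All figures below are hypothetical:

```python
def total_consumer_surplus(total_trips, cost_coefficient):
    """Total annual consumer surplus implied by a semi-log trip demand
    model, where surplus per trip is -1 / cost_coefficient."""
    return total_trips * (-1.0 / cost_coefficient)

# Suppose a user survey yields an estimated b of -0.08 per £ of travel
# cost and 12,000 trips per year in total (both figures hypothetical).
surplus = total_consumer_surplus(12_000, -0.08)  # about £150,000 per year
```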
In the REA, one study utilised the travel cost methodology: Byrne and Just (2023) estimated the welfare effects of food pantry services provided by a private, non-profit organisation in Colorado, United States.
Wellbeing valuation approach
Wellbeing valuation is a method used to estimate the economic value of policies, interventions, organisations, or sectors of society by measuring their impact on individual wellbeing (HM Treasury 2021). Increasingly used in policy appraisal, particularly in areas such as community cohesion and children and families, wellbeing valuation rests on the assumption that subjective wellbeing is influenced by factors such as health, income, and education (Dolan and Fujiwara 2012). Subjective wellbeing is typically measured through surveys in which individuals report their level of wellbeing, often in terms of life satisfaction and happiness. The underlying principle is that changes in wellbeing, such as improvements in health or reductions in poverty, can be monetised by determining how much people would be willing to pay or accept for those changes, based on their effect on reported wellbeing.
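The monetisation step can be illustrated with the standard compensating-surplus formula from this literature: given a life-satisfaction regression with coefficient gamma on the outcome and beta on log income, the annual value of the outcome is income × (1 − e^(−gamma/beta)). The coefficients and income figure below are hypothetical:

```python
import math

def wellbeing_value(gamma, beta, income):
    """Income a person would give up for the outcome while holding life
    satisfaction constant, from LS = a + beta*ln(income) + gamma*outcome."""
    return income * (1 - math.exp(-gamma / beta))

# Hypothetical estimates: the outcome raises life satisfaction by 0.2 points,
# the log-income coefficient is 0.5, and median income is £25,000.
value = wellbeing_value(gamma=0.2, beta=0.5, income=25_000)  # roughly £8,240 a year
```

In practice the income coefficient is often estimated with instrumental variables to address the salience and selection issues noted later in this report.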
In the REA, four studies applied various wellbeing valuation techniques to estimate the economic value of different interventions and programmes:
- Employment and training – Fujiwara (2013) applied the Three-Stage Wellbeing Valuation (3S-WV) to value unemployment in the UK.
- Housing and homelessness – Fujiwara and Vine (2015) evaluated the impact of tackling homelessness by analysing how changes in housing status affect life satisfaction and the role of support services in improving housing stability, drawing on the Journeys Home survey collected by the University of Melbourne (Australia).
- Loneliness – Peytrignet et al. (2020) estimated the annual cost of loneliness per individual using wellbeing valuation, assessing its impact on subjective wellbeing, health, and productivity.
- Sport and recreation – Orlowski and Wicker (2018) used wellbeing valuation to estimate the monetary value of healthy behaviours, such as participation in sport and physical activity, in Germany.
QALY
A quality-adjusted life year (QALY) represents a full year of perfect health and is used to assess how an intervention, treatment, or programme improves a person’s health and wellbeing. Initially developed to combine the length and quality of life into a single index (Prieto and Sacristán, 2003), QALYs alone cannot determine the economic value of an intervention. Instead, they are a key component of cost-effectiveness analysis, where the cost of an intervention is compared to the QALYs gained (NHS Scotland 1998).
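In a cost-effectiveness analysis this comparison is summarised by the incremental cost-effectiveness ratio (ICER), the extra cost per QALY gained. A minimal sketch with hypothetical figures:

```python
def icer(cost_new, cost_usual, qalys_new, qalys_usual):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_usual) / (qalys_new - qalys_usual)

# Hypothetical comparison: the intervention costs £45,000 and yields 6.0 QALYs,
# versus £30,000 and 5.4 QALYs under usual care.
cost_per_qaly = icer(45_000, 30_000, 6.0, 5.4)  # roughly £25,000 per QALY gained
```

The resulting ratio is then compared against a willingness-to-pay threshold to judge value for money.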
Relatedly, a disability-adjusted life year (DALY) represents a year of healthy life lost due to illness or disability. It measures the overall disease burden by combining years of life lost with years lived with disability (World Health Organization, 2020) and is used to assess how diseases and medical conditions affect both lifespan and quality of life. In this context, interventions are evaluated based on DALYs averted.
One study in the REA applied DALYs as the only valuation methodology: Nguyen et al. (2024) conducted an economic evaluation of the Peninsula Dental Social Enterprise, a community dental care model in Plymouth providing routine and urgent care services for people experiencing homelessness.
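The cost-effectiveness calculation itself is a simple ratio of programme cost to health gain. As a worked illustration, the headline figures reported for Nguyen et al. (2024) in Table 7 (£57,118 total cost and 5.4 DALYs averted over 18 months) imply:

```python
def cost_per_daly_averted(total_cost, dalys_averted):
    """Incremental cost-effectiveness ratio: programme cost per DALY averted."""
    return total_cost / dalys_averted

# Headline figures reported for the Peninsula Dental Social Enterprise
# evaluation: £57,118 total cost and 5.4 DALYs averted over 18 months.
ratio = cost_per_daly_averted(57_118, 5.4)
print(f"£{ratio:,.0f} per DALY averted")  # prints "£10,577 per DALY averted"
```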
2. What are the values estimated for these services, activities, outputs and outcomes? How robust are these values?
This section presents the estimated values across studies included in the REA for each civil society subsector. Because study quality is discussed in more detail in the section covering Research Question 4, this question takes a narrower approach to defining robustness. Estimated values are defined as robust if two criteria are met:
- The underlying study conducts additional validation or sensitivity analyses to strengthen the case for the estimated value (and ensure the estimate is not dependent on strict assumptions).
- The underlying study is transparent, clearly setting out the methodology and data sources in a way that would enable replication by future researchers. Note that this does not refer to whether the underlying data sources are publicly available (only that the methodology would be replicable if researchers had access to the input datasets).
Values from cost analysis studies are presented in a separate section, as these studies are not directly comparable with other studies in the REA.
Overall, just over half of the studies in the REA (29 of 56) met both robustness criteria. A further 19 studies met one of the two criteria (primarily the second criterion, around replicability), and 8 studies met neither.
The tables below present the costs and benefits estimated in each paper, along with the duration of the top-level benefit estimate. While most studies in the REA considered benefits that extended beyond the duration of the CSO-provided programme or service, the specific duration of benefits was calculated differently in each study, with varying timeframes applied to different benefits. Additionally, if benefits were valid for only one year, no discounting was applied. Where benefits were calculated over multiple years, but no discount rate is listed, this is because it was not mentioned in the study. Similarly, if the table does not include a price year, it is because none was explicitly stated, and it is reasonable to assume the price year corresponds to the publication year.
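Where studies discount multi-year benefits, the calculation typically follows the standard present-value formula. The sketch below assumes a constant annual benefit accruing at the end of each year; the figures are illustrative, using the 3.5% rate applied by several of the UK studies.

```python
def present_value(annual_benefit, years, discount_rate):
    """Discount a constant annual benefit stream to present value,
    assuming benefits accrue at the end of each year. (Conventions
    vary: some studies leave first-year benefits undiscounted.)"""
    return sum(annual_benefit / (1 + discount_rate) ** t
               for t in range(1, years + 1))

# Illustrative: £1,000 per year for 15 years at a 3.5% discount rate.
pv = present_value(1_000, 15, 0.035)
print(f"£{pv:,.0f}")  # prints "£11,517"
```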
Housing and homelessness
Table 3 presents per-person estimates of costs and benefits reported in studies focused on housing and homelessness. Table 4 presents per-person estimates of costs (under a Business as Usual scenario) and cost savings for studies that did not consider a broader range of benefits. All costs presented below represent average annual costs.
Table 3: Cost and benefits reported for CBA studies focused on housing and homelessness
Study | Country | Cost | Benefit | Duration of top-level benefit estimate | Sensitivity analysis? | Replicable? |
---|---|---|---|---|---|---|
Oxera (2013) | UK | £14,241 (per client) | £34,165 (per client) | 15 years; discount rate: 3.5%; 2010/2011 prices | Yes | Yes |
Fujiwara and Vine (2015) | Australia | Not reported | £8,019-£16,448 (per person) | Not stated | No | Yes |
PwC (2018) | UK | £34,460 (per person) | £96,488 (per person) | Benefits accrue for the duration of the intervention; discount rate: 3.5%; 2017 prices | Yes | Yes |
Standing Together and Solace Women’s Aid (2022) | UK | £9,625 (per client) | £56,127 (per client) | 1 year | No | No |
MHCLG (2024) | UK | £10,915 (per person housed) | £26,689 (per person on the programme) | 1 year; 2022 prices | Yes | Yes |
Table 4: Costs and cost savings reported for CBA studies focused on housing and homelessness[footnote 2][footnote 3]
Study | Country | Cost (BAU) | Cost savings | Duration of top-level benefit estimate | Sensitivity analysis? | Replicable? |
---|---|---|---|---|---|---|
Bretherton and Pleace (2015) | UK | £24,612 (per client) | £17,702-£19,886 (per client) | 1 year | Yes | Yes |
Pleace and Culhane (2016) | England | £34,518 (per person) | £9,266 (per person) | 1 year | No | Yes |
Johnsen et al. (2023) | Scotland | £5,632 (per client) | £2,454 (per client) | 1 year | No | Yes |
In addition, Loubière et al. (2020) reported an average WTP of €33-93 (in annual higher taxes) to implement the Housing First model. This study tested several different specifications for the multivariate analysis and has a clearly-stated methodology, and therefore meets both criteria in the research team’s framework for robustness.
Sport and recreation
Table 5 presents per-person estimates of costs and benefits reported in studies focused on sport and recreation programmes. Table 6 presents valuation estimates from studies examining the entire sector.
Table 5: Cost and benefits reported for CBA and SROI studies focused on sport and recreation programmes
Study | Country | Cost | Benefit | Duration of top-level benefit estimate | Sensitivity analysis? | Replicable? |
---|---|---|---|---|---|---|
Ecorys (2017) | UK | £25,000 (cumulative cost) | £69,546 (cumulative benefit = 2 QALYs) | Lifetime | Yes | Yes |
Frew et al. (2014) | UK | £75 (per participant) | £240 (average max ex-post WTP) | 1 year; discount rate: 3.5%; 2009/2010 prices | Yes | Yes |
Baker et al. (2017) | UK | £2.3 million (total investment) | £16.7 million (total benefit) | Over 1 year; discount rate: 3.5% | Yes | Yes |
Lawlor and Neitzert (2021) | Ireland | €2.4 million (total input cost) | €50 million (total programme value) | 1 year | Yes | Yes |
Davies et al. (2021) | UK | £18.0 million | £21.7 million | Lifetime | Yes | No |
Table 6: Valuation estimates for overall sport and recreation sector
Study | Country | Cost | Benefit | Duration of top-level benefit estimate | Sensitivity analysis? | Replicable? |
---|---|---|---|---|---|---|
Koning et al. (2022) | Netherlands | €9.6 billion | €25.8 billion | 1 year; discount rate: 2.25% | Yes | Yes |
Davies et al. (2018) | England | £21.9 billion | £71.6 billion | 1 year | Yes | No |
Both Davies et al. (2018) and Davies et al. (2021) have “No” assigned for the replicability criterion as they do not present data sources for their financial proxies (and therefore their specific methodology cannot be replicated by other researchers). However, these two studies likely use similar sources to Davies, Ramchandani and Kung (2024) and Fawcett et al. (2024): all four studies are published by Sport England and draw on data from the Active Lives Adults Survey.
In addition, Orlowski and Wicker (2018) reported that individuals are willing to forgo €185-840 (monthly net income) to participate in sports and physical activity several times per year. This study tested several different model specifications (generalised ordered probit and two-stage residual inclusion with different sets of control variables) and has a clearly-stated methodology, and therefore meets both criteria in the research team’s framework for robustness.
Physical and mental health
Table 7 below presents per-person estimates of costs and benefits reported in studies focused on programmes supporting physical and mental health.
Table 7: Cost and benefits reported for CBA and SROI studies focused on programmes supporting physical and mental health
Study | Country | Programme | Cost | Benefit | Duration of top-level benefit estimate | Sensitivity analysis? | Replicable? |
---|---|---|---|---|---|---|---|
Envoy Partnership (2014) | UK | Community Champions | £550,000 (total investment) | £2.8 million | 3 years; discount rate: 3.5% | Yes | Yes |
Bertotti and Temirov (2020) | UK | Social prescribing | £23,100 (total delivery cost) | £81,133 | 1 year | Yes | Yes |
Goodspeed (2014) | UK | Substance misuse | £3.4 million (total treatment cost) | £29.9 million | 1 year | Yes | Yes |
Nguyen et al. (2024) | UK | Community dental care | £57,118 | £163,910 (5.4 DALYs averted) | 18 months; 2020 prices | Yes | Yes |
Arvidson et al. (2014) | UK | Community befriending | £71,000 (funding costs) | £461,500 | 30 years | No | No |
Hartfiel et al. (2023) | Wales | Nature-based activities | £255-£821 (per participant, across 3 programmes) | £1,998 (per participant) | Not stated | No | No |
Employment and training
Table 8 presents programme-wide estimates of costs and benefits reported in studies focused on programmes providing employment support and job training.
Table 8: Cost and benefits reported for CBA and SROI studies focused on programmes providing employment support and job training
Study | Country | Cost | Benefit | Duration of top-level benefit estimate | Sensitivity analysis? | Replicable? |
---|---|---|---|---|---|---|
Lawlor et al. (2019) | UK | £3.5 million | £17.5 million | 6 months – 2 years; discount rate: 3.5%; 2014/2015 prices | Yes | Yes |
Gasper et al. (2020) | US | $17 million | $340 million | 5 years – 10 years; discount rate: 1.5%; 2014/2015 prices | Yes | Yes |
Goodspeed (2009) | UK | £490,456 | £1.5 million | 5 years; discount rate: 3.5% | Yes | Yes |
Walk et al. (2015) | Canada | $5,004 (per client per year) | $10,384 (per client per year) | 1 year; discount rate: 3.5% | Yes | Yes |
Gasper et al. (2020) have “Yes” assigned for the replicability criterion as the study clearly sets out its methodology, though full replicability of findings is not possible as the study relies on administrative data provided by the New York State government.
Food support
Table 9 presents programme-wide estimates of costs and benefits reported in studies focused on programmes providing support services related to food.
Table 9: Cost and benefits reported for CBA and SROI studies focused on programmes providing support services related to food
Study | Country | Cost | Benefit | Duration of top-level benefit estimate | Sensitivity analysis? | Replicable? |
---|---|---|---|---|---|---|
Courtney (2014) | UK | £1.7 million (across 3 case-study projects) | £11.8 million | 1 year; discount rate: 3.5% | Yes | Yes |
Roussos et al. (2024) | UK | £376,184 | £5.9 million | 1 year | No | Yes |
Finally, Byrne and Just (2023) estimated that the average annual value of food pantry access to households is between $600 and $1,000, implying a total collective value of $19-28 billion of food pantry access across client households annually. This study tested several different model specifications (to control for endogeneity) and has a clearly-stated methodology, and therefore meets both criteria in the research team’s framework for robustness.
Domestic abuse prevention and support
Table 10 presents programme-wide estimates of costs and benefits reported in studies focused on programmes providing domestic abuse prevention and support services.
Table 10: Cost and benefits reported for CBA and SROI studies focused on programmes providing domestic abuse prevention and support services
Study | Country | Cost | Benefit | Duration of top-level benefit estimate | Sensitivity analysis? | Replicable? |
---|---|---|---|---|---|---|
Hatch Regeneris (2020) | UK | £1.7 million | £7.9 million | 1 year | No | No |
Dowrick et al. (2022) | UK | £97,926 | £1.0 million | 1 year | Yes | Yes |
Rogers et al. (2018) | UK | £36,980 | £269,632 | 3 years | No | Yes |
Baraki and Lupton-Paez (2021) | UK | £13.1 million | £108 million | 1 year | Yes | Yes |
Youth work
Table 11 presents country-specific estimates of the value of the youth work sector in Ireland, Scotland and England.
Table 11: Country-specific valuation estimates of the youth work sector
Study | Country | Cost | Benefit | Duration of top-level benefit estimate | Sensitivity analysis? | Replicable? |
---|---|---|---|---|---|---|
YouthLink (2016) | Scotland | £218 million | £656 million | 1 year | No | No |
Frontier Economics and UK Youth (2022) | England | N/A | £8.9 billion | Lifetime | Yes | Yes |
Indecon (2012) | Ireland | €993 million (total public funding over 10 years) | €2.2 billion | 10 years | Yes | No |
Advice services
Table 12 presents estimates from two studies focusing on advice services: Europe Economics (2018), which develops a valuation of the debt advice sector in the UK, and Granger et al. (2024), which estimates the costs and benefits of a specific advice programme.
Table 12: Valuation estimates for studies focused on advice programmes
Study | Country | Cost | Benefit | Duration of top-level benefit estimate | Sensitivity analysis? | Replicable? |
---|---|---|---|---|---|---|
Granger et al. (2024) | UK | £79,981 | £375,264 | 1 year | Yes | Yes |
Europe Economics (2018) | UK | N/A | £301-568 million | 1 year | Yes | Yes |
Fuel poverty
Table 13 presents programme-wide estimates of costs and benefits reported in studies evaluating programmes targeting fuel poverty.
Table 13: Cost and benefits reported for SROI studies focused on programmes targeting fuel poverty
Study | Country | Cost | Benefit | Duration of top-level benefit estimate | Sensitivity analysis? | Replicable? |
---|---|---|---|---|---|---|
Nolden et al. (2021) | UK | £30,682 (total staff costs) | £308,636 | 5 years; discount rate: 3.5% | No | Yes |
Oxford Economics (2024) | UK | £22 million (total spending) | £71 million (across 1 year) | 1 year; 2023/2024 prices | No | No |
Cost analyses
Table 14 presents valuation estimates from cost analyses, which focus on valuing the cost of specific social challenges. Studies valuing the costs of physical inactivity, domestic abuse, mental ill health, loneliness and unemployment were included in the REA.
Table 14: Estimates of the costs of specific social challenges in cost analysis studies
Study | Country | Category | Cost | Duration of costs | Sensitivity analysis? | Replicable? |
---|---|---|---|---|---|---|
CEBR (2014) | UK | Physical inactivity | £53.3 billion | Lifetime | No | No |
Oliver et al. (2019) | UK | Domestic abuse | £66.2 billion | 1 year | Yes | Yes |
The Sainsbury Centre for Mental Health (2007) | UK | Mental health (workforce) | £25.9 billion | 1 year; 2006 prices | No | Yes |
Cardoso and McHayle (2022) | England | Mental ill health | £300 billion | 1 year | Yes | Yes |
New Economics Foundation (2017) | UK | Loneliness | £2.5 billion | 1 year | No | Yes |
Peytrignet et al. (2020) | UK | Loneliness | £9,900 (per person) | 1 year; 2019 prices | No | Yes |
Fujiwara (2013) | UK | Unemployment | £10,700 (per person) | 1 year | Yes | Yes |
Other studies
Valuation estimates for the remaining studies (that did not fit into one of the above categories) are presented below:
- Dolan and Fujiwara (2012) estimated that the value of undertaking a part-time course for work is £754 (applying wellbeing valuation to data from the British Household Panel Survey) and £947 (using a survey sample of adult learners with contingent valuation questions). The research team has assessed this study as robust as sensitivity analyses (around model specification) are conducted and the methodology is clearly stated.
- Eisenberg and Hutton (2016) estimated that the lifetime value of youth services and programs provided by Boys and Girls Clubs is $13.8 billion, compared to $1.4 billion in operating costs, driven by parental earnings, improved grades and reduced alcohol use. The research team has assessed this study as robust as sensitivity analyses (around estimates of financial impacts and the impact of participation on outcomes) are conducted and the methodology is clearly stated.
- Pritchard et al. (2021) estimated the value of the Coronavirus Community Support Fund at £402 million across grant holder staff, volunteers, beneficiaries and government (through reduced use of public services). The research team has assessed this study as robust as sensitivity analyses are conducted (to present high, medium and low estimates) and the methodology is clearly stated.
- Crisp and Dayson (2011) estimated the value of the Middlesbrough Voluntary Development Agency based on willingness to pay, with an average response of £868 for development support, £1,217 for support for engagement and £742 for support for volunteers. The research team has assessed this study as not robust as no sensitivity analyses were conducted and the methodology could not be followed by the team (due to the absence of the full text of the contingent valuation survey questionnaire in the report).
- Courtney (2018) estimated the value of the Fielding and Platt project in Gloucestershire at £149,147 over five years, driven by outcomes related to personal wellbeing (such as helping to build more cohesive and trusting communities). The research team has assessed this study as not robust as no sensitivity analyses were conducted and the methodology did not include a justification of the financial proxies used for each outcome.
- Cnaan (2009) estimated the value of the average urban religious congregation at $476,663 to the local social economy, primarily driven by provision of congregation-based schools and social services (both of which reduce government spending). The research team has assessed this study as not robust as no sensitivity analyses were conducted, though the methodology was clearly set out and included a justification of the financial proxies used for each outcome.
3. How have the studies attempted to understand what is driving these values?
Across studies included in the review, the underlying drivers behind estimated values tended to vary by methodology.
Cost-benefit analysis and SROI
In general, most studies based on cost-benefit analysis or SROI followed a four-step process:
- Developing a mapping of key outcomes or benefits, starting with the priorities and objectives initially set by the CSO, before validating these through stakeholder surveys and interviews.
- Identifying appropriate indicators to evidence each outcome (by measuring whether a change had taken place in the outcome).
- Establishing impact by conducting a survey of programme participants covering the selected indicators. For some studies, this included two waves of surveys: baseline and follow-up.
- Monetising outcomes through the use of financial proxies.
This typical approach is illustrated by Courtney (2014). The study authors conducted three storyboard workshops over a six-month period, in which participants worked to complete a Journey of Change diagram to identify a chain of events linking programme activities to short- and medium-term outcomes. After developing the outcomes mapping, the study authors then disseminated a survey (both online and in-person) to programme participants. The survey was designed to produce consistent measures across outcomes through use of Likert-scale questions. Findings from the survey were then used to monetise outcomes through four different valuation methods (equivalent cost, potential cost saving, revealed preference, stated preference).
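The final monetisation step can be sketched as follows, using the standard SROI adjustments for deadweight and attribution. All outcome figures below are hypothetical and do not correspond to Courtney (2014) or any other study in the REA.

```python
def sroi_ratio(outcomes, total_input_cost):
    """SROI ratio: total adjusted social value divided by input cost.
    Each outcome gives the number of people affected, a financial proxy
    per person, deadweight (the share of change that would have happened
    anyway) and attribution (the share attributable to other actors)."""
    total_value = sum(
        o["people"] * o["proxy"] * (1 - o["deadweight"]) * (1 - o["attribution"])
        for o in outcomes
    )
    return total_value / total_input_cost

# Hypothetical programme with two monetised outcomes (all figures illustrative).
outcomes = [
    {"people": 120, "proxy": 2_500, "deadweight": 0.25, "attribution": 0.20},
    {"people": 80, "proxy": 1_200, "deadweight": 0.15, "attribution": 0.10},
]
print(round(sroi_ratio(outcomes, total_input_cost=100_000), 2))  # prints 2.53
```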
One common stated limitation of studies in the REA was the consideration of only a subsample of potential benefits. Estimates of value from cost-benefit analysis or SROI were often restricted to individual benefits that were more straightforward or robust to quantify.
- Benefits that are more straightforward refer to benefits with entries in databases such as the UK Social Value Bank (published by HACT) or the CBA Excel tool published by the Greater Manchester Combined Authority. These databases have been designed for ease of use and provide a wide range of proxy variables based on academic and UK government research. In addition, all cost-benefit analyses and SROIs reviewed in the REA included estimates of avoided costs (savings due to reduced use of public services), as the unit costs of these services can be found in publications such as the Personal Social Services Research Unit’s Unit Costs of Health and Social Care or the Home Office’s report on the economic and social costs of domestic abuse.
- Benefits that are more robust refer to benefits for which the benefit transfer method is unlikely to introduce measurement error (for example, if the financial proxy was adopted from a highly similar intervention or estimated from a wide range of existing studies, allowing for uncertainty to be captured) or if additionality can be clearly assessed.[footnote 4] A small number of studies in the REA, such as Pritchard et al. (2021), assigned high, medium and low certainty ratings to specific benefits to explicitly capture the authors’ assessments of robustness.
For example, when considering lifetime economic benefits of Boys & Girls Clubs, Eisenberg and Hutton (2016) restrict the study to only consider education, health (substance use and physical activity), juvenile justice and parental earnings. Benefits on teenage pregnancy, financial literacy, leadership and citizenship were not considered due to the absence of existing studies linking these measures to long-term economic outcomes and the absence of evidence on additionality (how investment by Boys & Girls Clubs differs from other types of spending).
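Where a financial proxy is transferred from a different context, one common guard against generalisation error (shown here purely as an illustration, not as a technique applied by the studies above) is to rescale the unit value for income differences between the study and policy populations:

```python
def transfer_value(study_value, study_income, policy_income, elasticity=1.0):
    """Unit value transfer with an income adjustment: rescale a financial
    proxy estimated in the original study context to the policy context,
    using an assumed income elasticity of willingness to pay."""
    return study_value * (policy_income / study_income) ** elasticity

# Illustrative: a £500 proxy estimated where average income is £30,000,
# transferred to a population with average income £24,000.
print(round(transfer_value(500, 30_000, 24_000), 2))  # prints 400.0
```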
Overall, half of the studies in the REA included benefits to physical and mental health in their valuation estimates. Most of these benefits were quantified through a top-down approach, multiplying the expected prevalence of physical and mental health conditions (based on underlying population characteristics) by data on the unit cost of services. This top-down approach was also widely used by studies including outcomes related to economic output (16 studies), cost savings from reduced use of criminal justice services (mentioned in 13 studies), cost savings from reduced substance misuse (7 studies) and cost savings from wider government benefits (6 studies).[footnote 5] On the other hand, a bottom-up approach, with changes in outcomes measured directly through stakeholder surveys, was most common to value cost savings from reduced use of housing and homelessness support services (6 studies). Out of the 24 studies that included a valuation of wellbeing, eight used life satisfaction questions, seven used mental wellbeing questions (following WEMWBS), two used wellbeing valuation and the remaining studies reflected a range of approaches that did not map to a standard methodology (emotional wellbeing, resilience, self-esteem and self-confidence).
Finally, in four studies, the research team was not able to identify specific sources for the financial proxies used in analysis. These included Walk et al. (2015), YouthLink Scotland (2016), Indecon (2012) and Davies et al. (2021). While most cost-benefit analyses and SROI analyses included an appendix table detailing specific outcomes and financial proxies (with some SROI analyses including the entire SROI spreadsheet as an appendix), none of these four studies did so, making the research team less confident that the valuation estimates reported in these studies could be reproduced.
Other methodologies
Studies reviewed in the REA that did not use cost-benefit analysis or SROI generally did not explore in detail the specific factors driving values.
- Studies applying the wellbeing valuation methodology, such as Fujiwara and Vine (2015), used longitudinal datasets for multivariate analysis, without direct engagement with stakeholders or service users.
- Some studies applying the contingent valuation methodology to measure user willingness to pay, such as Loubière et al. (2020) and Dolan and Fujiwara (2012), included regression analysis examining how stated WTP varied by individual characteristics. However, this was not true of all studies: for example, Frew et al. (2012) included a single WTP question in the baseline and follow-up survey without conducting multivariate analysis or asking respondents to explain why they had chosen a specific value. In addition, Crisp and Dayson (2011) mentioned the use of a stakeholder survey with questions that enabled contingent value estimates without including the survey questionnaire as an appendix or discussing how survey results were interpreted and analysed.
4. What is the quality of individual studies that have developed/applied these techniques?
The research team applied four different frameworks to assess the quality of studies in the REA, depending on the methodology used in the study. These frameworks were used to assign an overall score to each study, ranging from 0% to 100%. Studies scoring above 70% were classified as high quality, 40% to 70% as medium quality, and below 40% as low quality. A more detailed description of the quality assessment frameworks used can be found in Appendix 2.
Note that sample size is not included in quality assessment frameworks, as no clear standard exists on appropriate minimum sample sizes for the methodologies considered in the REA. Out of the 56 studies, all but 4 included a sample size of at least 100 respondents.
- Standing Together and Solace Women’s Aid (2022) conducted a cost-benefit analysis based on 2 interviewed participants (though these were explicitly presented as case studies).
- Dowrick et al. (2022) used a survey of 16 respondents to inform the SROI.
- Arvidson et al. (2014) gathered data from 39 participants from exit evaluation forms, qualitative interviews and existing programme data.
- Nguyen et al. (2024) included all 89 patients experiencing homelessness who accessed dental services as part of the community dental care model in the economic evaluation.
Table 15 presents a breakdown of studies into one of three quality categories, while Table 16 presents a breakdown by methodology and quality.
Table 15: REA studies categorised by quality
Quality category | Number of studies (% of all studies) |
---|---|
High quality | 23 (41%) |
Medium quality | 21 (38%) |
Low quality | 12 (21%) |
Table 16: REA studies categorised by methodology and quality
Methodology | High quality studies | Medium quality studies | Low quality studies |
---|---|---|---|
SROI | 13 | 10 | 0 |
CBA and cost analysis | 3 | 9 | 11 |
Contingent valuation | 2 | 1 | 1 |
Other methodologies | 5 | 1 | 0 |
For SROI studies, the main factors differentiating high- and medium-quality studies were 1) use of ex-ante and ex-post observations, and 2) sensitivity analysis. While many SROI studies used baseline and follow-up surveys to track outcomes over time, very few studies sought to measure outcomes before participation in the programme/service started (or otherwise establish a comparator or control population). This may have been due to data collection challenges, as SROI studies were often commissioned after a programme had already begun (or had been in place for many years), and study authors were therefore dependent on programme-collected data to establish additionality. In addition, high-quality SROI studies included a sensitivity analysis section where changes to deadweight, attribution and stakeholder population were examined to understand the impact these changes had on the overall SROI ratio.
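The kind of sensitivity analysis described above can be sketched by recomputing the SROI ratio across a grid of deadweight and attribution assumptions. All figures here are hypothetical.

```python
from itertools import product

def sroi(total_value, cost, deadweight, attribution):
    """SROI ratio after applying deadweight and attribution discounts."""
    return total_value * (1 - deadweight) * (1 - attribution) / cost

# Hypothetical figures: £500,000 gross value against £100,000 of inputs,
# with deadweight and attribution each varied between 10% and 30%.
ratios = [sroi(500_000, 100_000, d, a)
          for d, a in product([0.1, 0.2, 0.3], [0.1, 0.2, 0.3])]
print(f"SROI ratio range: {min(ratios):.2f} to {max(ratios):.2f}")
# prints "SROI ratio range: 2.45 to 4.05"
```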
For cost-benefit analyses, fewer than half of the studies in the REA fully adhered to Green Book best practice in evaluation by applying discounting (11 of 23 studies) or by accounting for deadweight, displacement, leakage or substitution (6 of 23). Studies were more likely to include sections discussing risk and uncertainty (17 of 23), but did not always explicitly consider the impact of alternative assumptions (12 of 23) or conduct sensitivity testing (11 of 23). Finally, only 7 of 23 studies explicitly factored in drop-off rates when estimating the value of benefits over time.
For contingent valuation, high-quality studies were distinguished by clearly stating the WTP regression model, pre-experiment user testing to evaluate the quality of contingent valuation questions and sensitivity analysis (around testing different model specifications or control groups).
5. What are the limitations of the different types of valuation techniques applied to the civil society sector?
Cost-benefit analysis and SROI
Both cost-benefit analysis and SROI are widely used for evaluating the impact of services provided by civil society organisations. CBA is the dominant evaluation method in public policy, used to determine whether the benefits of an intervention outweigh its costs. It provides a clear, standardised framework, making it useful for comparing policy options.
While SROI builds on CBA, it has a different focus, being particularly useful for organisations that deliver social services or operate in sectors where wellbeing, inclusion, and non-market benefits are central concerns and go beyond financial metrics. It allows these organisations to demonstrate the social value of their interventions in financial terms, facilitating engagement with funders, commissioners, and policymakers (Cabinet Office 2012). The methodology starts by identifying key stakeholders, defining expected outcomes, and monetising these outcomes using a combination of established economic techniques and sector-specific proxies.
One challenge of CBA is its reliance on market-based proxies for non-market benefits, such as community wellbeing, social cohesion, and mental health improvements, as CBA requires that all costs and benefits (whether tangible or intangible) be expressed in monetary units. Gains in wellbeing and confidence are particularly difficult to gauge via CBA (Arvidson et al. 2013), yet these gains may be particularly significant in sectors such as homelessness and sport.
On the other hand, SROI's reliance on diverse stakeholder-driven indicators can make direct comparisons across projects more challenging. While both methods use proxies to value intangible benefits, the cost-benefit analyses in our review were more likely to draw on standardised economic valuations, while the SROI analyses relied on estimates from previous SROI analyses covering similar interventions or on sector-specific proxy databases. Moreover, SROI tends to rely on small sample sizes, and it is often difficult to test the statistical significance of its results (Royal Institution of Chartered Surveyors (RICS) 2020). This makes SROI ratios difficult to compare and interpret across projects within the same subsector, and even more so across projects of different scale or in different subsectors.
Both cost-benefit analysis and SROI studies in the REA included an avoided cost methodology to quantify some societal benefits. These generally included cost savings due to reduced usage of NHS services or lower likelihood of entering the criminal justice system. Avoided costs (and similar cost-based methodologies such as substitute or replacement costs) tend to be easy to measure, but may underestimate the true value of a specific service. For example, not all individuals facing mental health challenges will use mental health services (Lubian et al. 2016), and GP visits may cover a wide range of health issues not directly tied to the absence of the service. In addition, it may be very difficult to establish a causal relationship between the CSO-provided service and specific physical or mental health outcomes, with studies quantifying the cost of a specific issue (such as homelessness) in the REA frequently relying on extensive simplifying assumptions across the entire UK population.
More generally, the reliance of both cost-benefit analysis and SROI on estimates of value from previous research leads to potential issues with measurement error and generalisation error (Rosenberger and Stanley 2006). Measurement errors relate to the quality of the original research (almost no cost-benefit analyses or SROI studies in the REA discussed the quality of cited studies, though it is likely that researchers sought to select the most relevant or highest-quality study on which to base unit costs and benefits), while generalisation errors relate to validity (differences between the original and current study contexts). Function transfers, which apply an estimated function from the original study rather than using summary statistics directly, are one common way of addressing generalisation error, though no cost-benefit analyses or SROI studies in the REA followed this approach (Rolfe et al. 2015).
Contingent valuation and wellbeing valuation
In contrast to the bottom-up approaches focused on individual benefits (used by both cost-benefit analysis and SROI), contingent valuation and wellbeing valuation can also be used to value non-market goods by capturing the total value that users place on a specific service. The advantage of these more holistic approaches is that they can capture previously under-reported benefits which lack reasonable financial proxies or are more difficult to collect data on (through participant surveys).
Previous research has found that valuations produced by wellbeing approaches can be substantially higher (up to eight times higher) than the WTP identified through contingent valuation, due to differences in study design and participant biases (Dolan and Metcalfe 2008). More sophisticated research designs such as iterative bidding have been shown to narrow the gap between valuations produced by the two methodologies (Sayman and Öncüler 2005). However, little research exists on the appropriateness of each methodology for a specific sector of interest, and the choice is likely to be driven by practical considerations such as sample size.
While evidence reviews of contingent valuation have found that the methodology holds both content validity and construct validity (Bishop and Boyle 2019), this is dependent on effective research design.[footnote 6] One common critique of contingent valuation is that it produces results that are driven by swings in individual decision-making: it is difficult to place a value on a specific service solely based on how individuals respond to a specific set of choices or their stated preferences (Fujiwara et al. 2017). More specifically, potential biases can include:
- Hypothetical bias: individuals may respond to contingent valuation questions in a survey differently to how they would act in real life (Ajzen et al. 2004).
- Anchoring bias: individuals may respond to contingent valuation questions based on specific numerical prompts or reference points (for example, asking a Yes/No question based on agreeing or disagreeing with a specific valuation estimate prior to an open-ended WTP question) (Green et al. 1998).
- Information bias: individuals may be influenced by information provided in the contingent valuation survey: for example, through the specific way a programme or service is described (Ajzen 1996).
- Protest responses: some individuals may report a WTP of zero, potentially due to specific attitudes or beliefs leading them to not want to pay for a programme or service even if they place a positive value on it (Frey and Pirscher 2019).
While these biases do not completely invalidate findings from contingent valuation studies, addressing the biases requires careful research design, including following best practices such as pre-testing survey questions through qualitative research, designing scenarios to be incentive-compatible and applying appropriate econometric modelling. In addition, choosing a design for contingent valuation questionnaires may require difficult trade-offs. For example, research designs that reduce cognitive burden, such as single-bounded dichotomous choice, may be more susceptible to anchoring bias (Pearce and Özdemiroglu 2002). Finally, even if a study is well-designed, the resulting valuation estimates may not be applicable to other programmes or services due to generalisation error linked to benefit transfer (discussed above) (Lawton et al. 2018).
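As a small illustration of how protest responses affect headline results, consider a mean WTP calculation from open-ended responses in which protest zeros (identified through a follow-up question) are excluded. Whether to exclude protests or retain them as true zeros is itself a contested design choice; the data below are invented for illustration:

```python
def mean_wtp(responses, exclude_protests=True):
    """Mean willingness to pay from open-ended survey responses.

    responses: (wtp, is_protest) pairs, where is_protest marks a zero
    bid identified as a protest response via a follow-up question.
    Excluding protests, rather than treating them as true zeros, is a
    common but debated way of handling them.
    """
    kept = [wtp for wtp, is_protest in responses
            if not (exclude_protests and is_protest)]
    return sum(kept) / len(kept)


# Illustrative responses (£): two genuine zeros, one protest zero,
# and three positive bids.
data = [(0, False), (5, False), (10, False), (0, True), (15, False), (0, False)]
headline = mean_wtp(data)
```

In this toy sample the headline mean rises from £5 to £6 once the single protest zero is excluded, showing why the treatment of protests should always be reported alongside the estimate.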
The alternative approach taken by wellbeing valuation (specifically subjective wellbeing, or SWB) centres around asking research participants directly how they feel, with these responses (intended to capture subjective wellbeing) captured through the Warwick-Edinburgh Mental Wellbeing Scale and other similar approaches (HM Treasury 2021). This approach is intended to link the value of a specific service to the impact it has on wellbeing (or quality of life), and a broad range of national surveys (such as the Annual Population Survey and Time Use Survey) currently include wellbeing questions.[footnote 7] Some have criticised wellbeing valuation as insufficient on construct validity grounds, as the correlation between money and wellbeing may be weak for some individuals (Corry 2018). In addition, questions asking about wellbeing potentially face issues around salience (depending on the time frame of assessment), scaling (how interpretations of wellbeing scales may change between respondents and over time) and selection (due to bias in who chooses to complete surveys) (Dolan, Layard, and Metcalfe 2011). Finally, it is more difficult to apply lower values of wellbeing scales to policy evaluation: for example, previous research has found that changes in lower levels of the SWEMWBS scale do not have a statistically significant impact on life satisfaction (Fujiwara et al. 2021).
Travel cost
The travel cost methodology is primarily used in environmental economics to value recreation services based on collecting respondent data on site visits and applying standard travel costs to these. The key challenges faced by the travel cost method include data collection (travel cost data is not routinely collected), recall bias (where the standard deviation of reported participation increases with longer recall periods), a lack of consideration for intertemporal substitution (where individuals may visit their preferred site in the future rather than a substitute site) and differences in underlying preferences between people with high and low travel costs. Note that this last challenge is likely to be less relevant for services provided by civil society: most people do not choose where to live based on the presence of nearby services (with one potential exception being services provided by faith-based organisations).
6. Within this context, are there certain circumstances that make a valuation technique preferable over others? If this is the case, what are these circumstances, and what is the preferred technique?
This section builds on findings from the REA as well as UK Government guidance (the Green Book and supplementary guidance), textbooks covering non-market valuation and a broader review of studies using valuation techniques outside of the civil society sector.
Cost-benefit analysis and SROI
Historically, cost-benefit analysis focused primarily on tangible costs and benefits, as these were more easily translatable into monetary values (New Economics Foundation 2013). By turning costs and benefits of policies into monetary values, these policies become more easily comparable (and can be prioritised based on benefit-cost ratios if desired). Many cost-benefit analyses in recent years have been extended to cover broader societal and environmental impacts (where relevant); this is known as social cost-benefit analysis. Almost all cost-benefit analyses in the REA could be classified as social cost-benefit analysis (except for exploratory studies such as Pleace and Culhane (2015), which only consider reductions in service use as potential benefits due to data availability).
Social cost-benefit analyses and SROI are often very similar in practice, with SROI typically including a broader range of specific outcomes related to wellbeing (for example, the costs of reduced self-esteem and personal resilience as a result of unemployment) and wider social benefits (such as stronger relationships, social capital or civic engagement). Therefore, SROI may be more appropriate for specific services provided by CSOs that are likely to have an outsized impact on wellbeing (and therefore improvements in wellbeing may comprise a larger proportion of the value produced by these services), as well as services that reach a wider range of stakeholders (allowing benefits accruing to individual stakeholders as well as across stakeholder networks to be quantified). This should be informed by stakeholder engagement and development of a Theory of Change, which clearly sets out potential outcomes.
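At its core, an SROI ratio of the kind discussed here is the present value of adjusted benefits divided by the investment. A minimal sketch, using the Green Book's standard 3.5% discount rate but with deadweight and attribution adjustments that are illustrative placeholders a real study would need to evidence:

```python
def sroi_ratio(annual_benefits, investment, deadweight=0.2,
               attribution=0.2, discount_rate=0.035):
    """Illustrative SROI ratio: discounted, adjusted benefits over investment.

    annual_benefits: monetised benefits for year 1 onwards.
    deadweight: share of outcomes that would have occurred anyway.
    attribution: share of outcomes attributable to other actors.
    The 3.5% discount rate follows Green Book guidance; the default
    adjustment factors are placeholders, not evidenced values.
    """
    adjustment = (1 - deadweight) * (1 - attribution)
    present_value = sum(
        benefit * adjustment / (1 + discount_rate) ** year
        for year, benefit in enumerate(annual_benefits, start=1)
    )
    return present_value / investment


# Illustrative: £50,000 of monetised benefits a year for three years
# against a £60,000 investment.
ratio = sroi_ratio([50_000, 50_000, 50_000], 60_000)
```

The sensitivity of the ratio to the deadweight and attribution assumptions is one reason, noted above, why SROI ratios are difficult to compare across projects.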
Stated and revealed preference methodologies
Due to the potential limitations of benefit transfer, if project resources allow, then primary research methods such as contingent valuation, wellbeing valuation or travel costs should be considered. This is particularly true if potential available studies for benefit transfer do not align on project scope, project outcomes, geography, size of population or underlying population characteristics, or if the quality of the methodology cannot be assessed due to a lack of detail.
As discussed in the previous section, revealed preference methodologies are generally preferable to stated preference methodologies as they avoid potential biases introduced when asking about real-world decision-making in the context of a survey. However, as shown through the REA, revealed preference techniques are rarely used to produce valuation estimates within the civil society sector due to a lack of relevant data (on travel costs or market prices).[footnote 8] In practice, given difficulties around data collection, stated preference methodologies tend to be the only realistic way of producing valuation estimates, thanks to their flexibility and the extensive evidence base around appropriate research design.
Wellbeing valuation
In terms of measuring wellbeing, the two most common options are mental wellbeing metrics (such as WEMWBS) and life satisfaction metrics (such as the ONS-4). While WEMWBS (and its shorter variant SWEMWBS) has been widely validated across different populations, it may not effectively detect changes in mental wellbeing if respondents have very poor mental health: lower ends of the scale do not have a statistically significant relationship with life satisfaction (Fujiwara et al. 2017). On the other hand, while questions on life satisfaction are easy for respondents to understand and closely correlated with other dimensions of wellbeing such as employment and good health, one potential issue is that they may lead to double-counting of benefits (Frijters and Krekel 2021). In addition, a single service provided by a CSO may not lead to significant changes in overall life satisfaction, making it difficult to develop a robust estimate of value unless a large sample size is used.
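Where life satisfaction changes are monetised, HM Treasury's 2021 wellbeing guidance uses the WELLBY: a one-point change in life satisfaction (on a 0 to 10 scale) for one person for one year, with a central value of around £13,000 in 2019 prices. A minimal sketch with illustrative inputs (check current guidance for updated values before using figures like these in appraisal):

```python
def wellby_value(delta_life_satisfaction, n_people, years=1.0,
                 wellby_gbp=13_000):
    """Monetise a wellbeing change using the WELLBY approach.

    One WELLBY is a one-point change in life satisfaction (0-10 scale)
    for one person for one year. The 13,000 default reflects the
    central value in HM Treasury's 2021 wellbeing guidance (2019
    prices); current guidance should be checked for updated figures.
    """
    return delta_life_satisfaction * n_people * years * wellby_gbp


# Illustrative: an average 0.1-point uplift for 200 service users,
# sustained for one year.
value = wellby_value(0.1, 200)
```

The arithmetic is deliberately simple; the hard part in practice, as the paragraph above notes, is detecting a small life satisfaction change with sufficient statistical power.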
Potential framework
One potential framework for selecting valuation techniques, based on Pearce and Özdemiroglu (2002), HM Treasury (2021) and Saraev et al. (2021), is as follows:
- Cost-benefit analysis and SROI are the most appropriate valuation methodologies if:
- Project resources are limited (as a desk-based review with benefit transfer can be used instead of primary research).
- There is a robust evidence base of evaluations of similar programmes (this means that benefit transfers are more likely to meet criteria for reliable transfers).[footnote 9]
- Valuation estimates are required for individual benefits (such as improved physical health or avoided public service costs).
- Wellbeing valuation is most often incorporated as part of a wider social CBA or SROI.
- For example, PwC (2018) includes both avoided costs of public services (drug and alcohol, mental health, NHS) as well as wellbeing impacts to individuals sleeping rough moving to secure housing.
- Life satisfaction questions should be used for larger, more permanent shifts in wellbeing, while mental wellbeing questions should be used for transitory changes that may not have an impact on overall life satisfaction.
- Revealed and stated preference methodologies should be selected over cost-benefit analysis and SROI if:
- Project resources are sufficient to support primary research (with a representative sample of programme participants or service users).
- The evidence base of evaluations of similar programmes is relatively sparse.
- The service or outcome has very small or very complex changes.
- Additionality is difficult to establish (cost-benefit analysis and SROI will often include separate waves of baseline and follow-up surveys to measure outcomes, but this is generally not sufficient to establish causality).
- Travel cost methodology is most appropriate for services typically provided for free (which could potentially lead to a high rate of protest responses if conducting a contingent valuation survey).[footnote 10]
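As a rough illustration only, the criteria above could be encoded as a simple decision helper. The function name, flags and precedence are our own simplification of the framework and no substitute for stakeholder engagement or a Theory of Change:

```python
def suggest_method(resources_limited, strong_evidence_base,
                   service_is_free=False):
    """Rough decision helper mirroring the framework above.

    Returns a suggested family of valuation techniques. The flags and
    their precedence are a deliberate simplification of the criteria
    in the text, intended only to illustrate the decision logic.
    """
    # Limited resources or a robust evidence base both point towards
    # desk-based CBA/SROI using benefit transfer.
    if resources_limited or strong_evidence_base:
        return "CBA/SROI with benefit transfer"
    # Where primary research is feasible, free services may suit the
    # travel cost method (avoiding protest responses in WTP surveys).
    if service_is_free:
        return "travel cost"
    return "stated preference (contingent or wellbeing valuation)"
```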
Finally, the end use of estimates of value should be considered, as it may be more valuable to use a valuation methodology previously used in the evidence base to allow for direct comparisons. In these cases, a sense of scale (i.e. relative magnitude) based on a common benchmark metric may be more important than selecting the most robust methodology.
7. What are the options for calculating and calibrating estimates of value for a selection of services or outcome case studies? What (primary or secondary) data will be required to undertake this analysis? What are the opportunities, challenges and limitations of these approaches?
The research team selected three case studies to calculate and calibrate estimates of value, based on the civil society subsectors least commonly covered in the existing evidence base. These include training provision, food banks and advice services. For each case study, we present a list of options for estimating value, including the data required for the analysis and potential challenges/limitations.
Case study 1: Food banks
Food banks are crucial in mitigating food insecurity and hunger, while also offering essential support to vulnerable populations (Byrne and Just 2023). Nonetheless, assessing their overall societal value remains challenging and requires robust methodological approaches. This case study aims to propose the most appropriate valuation method for measuring the economic and social net benefits of food banks in the UK.
A variety of valuation techniques can be employed to measure the impact of food banks, including cost-benefit analysis, social return on investment, contingent valuation and willingness to pay. While these methods can offer insights into the social and economic benefits of food banks, they often rely on subjective information or self-reported data, which may introduce respondent bias (Byrne and Just 2023). For example, as food banks distribute food for free, respondents may struggle to determine their willingness to pay, leading to inconsistencies in valuation.[footnote 11] Relatedly, conducting a wellbeing valuation can be challenging, as individuals may find it difficult to quantify improvements in life satisfaction, health, and social inclusion resulting from food bank use (Dolan, Peasgood, and White 2008).
Additionally, conducting CBA or SROI assessments for food banks presents challenges due to limited data on individual costs and benefits. Moreover, attributing causality is complex, given the broad impact of food banks and the diverse range of stakeholders involved, including volunteers, donors, and recipients. Therefore, more sophisticated primary data collection is needed. In this sense, even though there is evidence linking food insecurity to adverse physical and mental health outcomes, assessing the specific impact of food banks in the UK requires detailed data on users’ overall diets and other food sources, which can be obtained with revealed preferences methods (Byrne and Just 2023).
To address these limitations, we propose a non-market valuation approach using the travel cost method. Commonly applied to environmental and recreational goods and services, this methodology is particularly suited to valuing food banks, as it estimates their societal impact based on actual user behaviour (Byrne and Just 2023).
The travel cost method offers several advantages. First, as it captures revealed preferences, the valuations are derived from observed actions of food bank users, rather than from hypothetical survey responses, thereby reducing social desirability bias. Second, it reflects users’ costs by considering travel time and transport expenses to access food banks, providing a tangible measure of their value. Third, this method allows for demand estimation by analysing variations in frequency of visits and travel distance, allowing for nuanced understanding of user engagement.
According to the United Kingdom Food Security Report 2024 (Department for Environment, Food & Rural Affairs 2024), 3.3% of households accessed food banks in FYE 2023. However, these figures were higher for households with ‘low’ and ‘very low’ food security, at 14% and 31% respectively over the same period. This link between food insecurity and food bank use suggests that food banks are more heavily relied upon, and more spatially concentrated, in lower-income areas. Consequently, the travel cost method is particularly useful, as it facilitates spatial analysis and comparison of service accessibility while complementing qualitative assessments with a transparent and quantifiable economic measure of food banks’ impact.
The relevant cost variables for this methodology include travel distance, travel time, monetary travel costs, and opportunity costs. Travel distance refers to how far users need to travel to access food banks; this information can be obtained from the administrative data of collaborating food banks and from the Google Maps API. The cost of travel time refers to the time spent accessing food banks; this can be valued using the Department for Transport’s Transport analysis guidance (TAG), which provides values of travel time and vehicle operating costs for commuting and other journey purposes.
Monetary travel costs involve the transport expenses that food bank users incur, such as public transport fares, fuel costs, and parking fees. These can be gathered using local transport data, fuel cost indices from ONS, and parking charge rates. For instance, vehicle costs can be estimated at 25p per km based on Day and Smith (2018). Another data source is the National Travel Survey (NTS), which provides data on personal travel behaviours in the UK. The opportunity cost is the value of time spent travelling to the food bank instead of working, which can be measured using the National Minimum Wage and National Living Wage rates as a proxy. The time spent travelling to food banks, along with the wage rate, can help calculate the monetary value of this time.
Additionally, following the approach of Byrne and Just (2023), several key variables should be considered. The number of annual trips to food banks per individual can be obtained from food banks’ administrative records. Demographic and household characteristics can also be obtained from these records, as well as from national surveys conducted by the Office for National Statistics (ONS). However, only data from the specific region where the food banks operate should be used as a proxy. Finally, food bank users’ addresses can be acquired through food banks, either from their administrative data or by facilitating direct contact with users.
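The components above can be combined into a simple per-user annual travel cost. A minimal sketch, using the 25p/km figure cited above and, as an assumption of ours, the 2024 National Living Wage (£11.44/hour) as the value of time in place of TAG values:

```python
def annual_travel_cost(one_way_km, one_way_hours, trips_per_year,
                       vehicle_cost_per_km=0.25, value_of_time=11.44):
    """Per-user annual cost of accessing a food bank (round trips).

    Combines a distance-based vehicle cost (25p/km, following Day and
    Smith 2018 as cited above) with travel time valued at an hourly
    rate. The 11.44 default is the 2024 National Living Wage, used
    here as a stand-in for TAG values of time; both figures would need
    updating for a real exercise.
    """
    per_round_trip = 2 * (one_way_km * vehicle_cost_per_km
                          + one_way_hours * value_of_time)
    return per_round_trip * trips_per_year


# Illustrative: 3 km and 15 minutes each way, 24 visits a year.
cost = annual_travel_cost(3.0, 0.25, 24)
```

Aggregating these per-user costs across visit frequencies and distances is what allows the demand estimation described above.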
The estimated time required for this valuation exercise is 40–50 days. Mapping food bank locations across the UK will take approximately 5 days. Survey design and piloting will also take 5 days. Data collection, the most resource-intensive phase, is expected to take 15–20 days and involves gathering travel cost and socioeconomic data. Cleaning and processing this data will require an additional 5 days. Modelling and the travel cost method calculations will take 10 days, including demand estimation based on travel behaviour and opportunity costs. Finally, interpretation of results, report writing, and validation will take 5–10 days to ensure findings align with policy objectives and funding considerations.
Several risks may impact the valuation process. Data availability is a key concern, as food banks operate independently and may have incomplete records on user visits, requiring alternative estimation methods. The extent to which this information is recorded varies, as data collection practices differ across food banks. Additionally, organisations may be understandably reluctant to share sensitive user data for research purposes, further complicating data access and accuracy. Transport cost variability across regions could also affect calculations, necessitating adjustments for regional fare differences and fuel prices. Moreover, broader economic conditions influence household food security, making it difficult to isolate the direct impact of food banks. Sensitivity analyses will be needed to account for these uncertainties. Delays in accessing government datasets on food insecurity and household spending could further slow down data compilation, requiring contingency planning to ensure timely completion.
Case study 2: Job training provision
Civil society organisations play a vital role in human capital development and lifelong learning by providing job training, particularly for marginalised groups who face barriers to accessing traditional education and employment services (European Training Foundation 2024). These organisations bridge gaps in public provision by offering tailored programmes, wraparound support services, and advocacy for policy reforms that enhance workforce development, reduce educational disparities, and promote social inclusion. By leveraging community networks and local expertise, civil society organisations ensure training initiatives are accessible, responsive to labour market demands, and aligned with evolving economic and technological changes. Their contributions not only support individual employability but also strengthen broader socio-economic resilience.
Job training provision is a key strategy for enabling low-wage workers to enter and progress within the labour market. Organisations such as Smart Works provide unemployed women with interview training and career coaching to enhance their employment prospects, while Volunteer It Yourself equips young people with trade and building skills, supporting them in gaining vocational accreditation, securing work placements, and accessing apprenticeships. These initiatives demonstrate how civil society organisations play a vital role in equipping individuals with the skills needed to achieve sustainable employment and career progression.
According to Gasper et al. (2020), hard skills training has measurable benefits for participants, yielding higher returns not only for individuals but also for taxpayers and society as a whole. Given these findings, the authors recommended prioritising investment in career advancement, high-quality employment, and economic mobility for New Yorkers, rather than solely focusing on immediate job placement. Our case study seeks to apply a similar valuation methodology to assess the return on investment of job training provision in England.
Following the approach of Gasper et al. (2020), we propose the use of Cost-Benefit Analysis (CBA) as the most appropriate method for evaluating job training provision, particularly within civil society-led initiatives. The authors employed a Return on Investment (ROI) analysis to compare the post-training earnings of participants across six industry-focused training initiatives with those of a matched comparison group who did not receive training.
CBA is the most suitable approach for assessing the value of job training provision in England, as it quantifies both the economic and social benefits of training programmes relative to their costs. This enables a comprehensive evaluation of their value to key stakeholders, including individuals, employers, and society. This method is designed to capture a wide range of benefits, such as increased earnings, higher tax revenues and reduced reliance on social benefits, as well as improved productivity, social mobility, reductions in unemployment-related costs, and overall economic growth. Furthermore, by monetising both costs and benefits, CBA facilitates direct comparisons between training initiatives, providing policymakers with valuable insights for resource allocation.
A further consideration for CBA in job training valuation is additionality: the extent to which observed benefits can be attributed to the intervention rather than external factors. While job training participants may still engage in other activities that influence productivity, earnings, or employability, these effects can often be more clearly isolated than for broader employment or education policies. As demonstrated by Goodspeed (2009) and Walk et al. (2015), this challenge can be addressed by conducting surveys, interviews or focus groups with participants about their employment history and job search efforts before taking part in the training, alongside statistical methods such as difference-in-differences analysis to compare outcomes with non-participants.
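A difference-in-differences comparison of the kind described can be sketched as follows; the earnings figures are illustrative only, not drawn from Gasper et al. (2020) or any study in the REA:

```python
from statistics import mean


def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Simple difference-in-differences estimate of a training effect.

    Compares the change in mean earnings for participants with the
    change for a matched non-participant group, netting out common
    labour market trends. Figures below are illustrative only.
    """
    return ((mean(treat_post) - mean(treat_pre))
            - (mean(ctrl_post) - mean(ctrl_pre)))


# Illustrative annual earnings (£) before and after training.
effect = did_estimate(
    treat_pre=[18_000, 19_000], treat_post=[21_000, 22_500],
    ctrl_pre=[18_500, 19_500], ctrl_post=[19_500, 20_500],
)
```

In practice this would be run as a regression with controls and matched samples; the arithmetic above only illustrates the identifying comparison.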
In England, the availability of comprehensive and reliable data allows for accurate estimation of key inputs for conducting a CBA. These include government funding for job training programmes, the number of programmes and local authorities delivering them, wage progression, employment rates, and public spending savings. Government departments have consistently used CBA to evaluate training and education-related interventions. For example, the Department for Education (2021) conducted a CBA to measure the return on investment of Further Education (FE) qualifications started in 2018–19. Similarly, the Department for Work & Pensions (2022) applied CBA alongside a randomised controlled trial to assess the Group Work programme—a 20-hour job search skills workshop designed to enhance self-efficacy, self-esteem, and social assertiveness among jobseekers.
Given the wide availability of relevant data in this sector, alternative valuation methods would be less appropriate for this subsector. Social Return on Investment (SROI), for example, often considers multiple stakeholders and social benefits, whereas job training provision typically focuses more on individual economic outcomes. This method also often lacks standardisation, making cross-programme comparisons challenging. Given the wide range of interventions aimed at enhancing individual economic outcomes in the labour market, such as subsidised employment programmes, vocational training, small business grants, and soft skills training, ensuring comparability is crucial. Contingent valuation is better suited for assessing non-market goods, such as environmental resources, and is less effective in capturing the true economic impact of job training. Meanwhile, wellbeing valuation does not provide the direct monetary estimates that CBA can offer.
The first step is to identify the job training provision initiatives to be considered for the case study. To achieve this, the Mapping National Employment and Skills Provision tool, developed by the Local Government Association (LGA), can be utilised. This tool offers a detailed overview of job provision programmes across England, outlining their locations, funding commitments, and the respective funding agencies.
Among the costs that could be used in this analysis are the direct costs of the programme to individuals and/or the government, including training delivery expenses such as learning materials, instructor salaries, and participant support costs such as stipends. Data on these costs can be obtained from the wide range of funding streams used by local authorities, as outlined in the LGA Employment and Skills Provision Survey – 2022/23 (Local Government Association 2023). These include apprenticeship funding, the Adult Education Budget, core funding, Multiply, community learning, DWP Flexible Funds, the Community Renewal Fund, Free Courses for Jobs, bootcamps, traineeships and the UK Shared Prosperity Fund (UKSPF). Additionally, opportunity costs should be accounted for, as participants may need to reduce their working hours or temporarily leave employment to undertake training (Gasper et al. 2020). This represents potential lost earnings, which can be estimated using employment and earnings data from the Office for National Statistics (ONS) and HM Revenue and Customs (HMRC).
The benefits of job training programmes span fiscal, economic and social domains (Gasper et al. 2020). Fiscal benefits include increased tax revenues at both national and local levels due to higher post-training wages, which can be estimated using HMRC income tax records. Economic benefits encompass improved employment rates, increased earnings, and reduced public expenditure resulting from reduced reliance on welfare benefits. These can be calculated using longitudinal labour market data from the Labour Force Survey and earnings progression studies conducted by the Institute for Fiscal Studies (IFS). Furthermore, social and wellbeing benefits include enhanced productivity, greater social mobility, improved mental health and job satisfaction. Supporting evidence for these impacts can be sourced from studies by organisations such as the Resolution Foundation, the What Works Centre for Wellbeing and the Joseph Rowntree Foundation.
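Combining the cost and benefit components above, a benefit-cost ratio for a training programme can be sketched as follows. Every input (cost per participant, annual earnings gain, deadweight share) is a placeholder that a real appraisal would estimate from the data sources just listed:

```python
def training_bcr(cost_per_participant, annual_earnings_gain,
                 years=5, deadweight=0.3, discount_rate=0.035):
    """Illustrative benefit-cost ratio for a training programme.

    Discounts an assumed post-training earnings gain over several
    years (Green Book 3.5% rate) and nets off deadweight (gains that
    would have occurred without the programme). All inputs are
    placeholders, not estimates from any cited study.
    """
    pv_benefits = sum(
        annual_earnings_gain * (1 - deadweight) / (1 + discount_rate) ** t
        for t in range(1, years + 1)
    )
    return pv_benefits / cost_per_participant


# Illustrative: £4,000 cost per participant, £1,500 annual earnings gain.
bcr = training_bcr(4_000, 1_500)
```

A ratio above one indicates that discounted, deadweight-adjusted benefits exceed costs; sensitivity analysis over the deadweight and persistence assumptions would be essential before drawing conclusions.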
Mapping the services provided across England is expected to take 5 days. This involves identifying job training programmes delivered by civil society organisations using the Mapping National Employment and Skills Provision tool and reviewing associated funding streams. Survey design and piloting will also require 5 days, as it involves developing survey instruments to collect data on programme participation, costs, and post-training employment outcomes. Piloting is necessary to ensure data validity and reliability.
Data collection, which is one of the most resource-intensive tasks, is estimated to take 15 to 20 days. This includes gathering data from local authorities, civil society training providers, and government sources. Furthermore, most databases do not differentiate between civil society and non-civil society providers, meaning that additional time and resources should be allocated to mapping these distinctions within the project’s budget and timeline. This phase also requires extensive stakeholder engagement, including coordinating with relevant organisations and distributing surveys to participants.
Financial data compilation and cost estimation will require approximately 10 days, involving the collection of expenditure data from service providers and analysis of public financial reports. The economic modelling and analysis of the CBA is expected to take 10 days. Finally, the interpretation of results, report writing, and validation with stakeholders will take an additional 5–10 days. This ensures findings are accurately interpreted, align with policy and funding priorities, and undergo appropriate verification with key stakeholders.
Several risks should be considered as these might affect the proposed timeline. First, some training providers may lack detailed financial records, or the data may be inconsistently reported, leading to gaps that require estimation or alternative data sourcing. Additionally, differences in regional training provision and funding structures may create variability that complicates direct comparisons between programmes. Second, establishing additionality can be challenging, as it is difficult to attribute training outcomes to programme participation accurately. Third, the quality of results depends on the robustness of underlying assumptions, and any miscalculations in deadweight, displacement, or substitution effects could skew findings.
Case study 3: Advice services
Advice services provide crucial support to individuals and communities, particularly those experiencing financial hardship, legal difficulties, or social exclusion and inequality. These services encompass a wide range of support, including legal assistance, financial and debt advice, employment guidance, health and wellbeing support, housing and homelessness assistance, immigration and refugee aid, and citizens’ rights advocacy.
This case study focuses on assessing the economic and social impacts of financial advice services in the UK using Social Cost-Benefit Analysis (SCBA). Financial advice services positively impact individuals by enhancing productivity, improving mental and physical wellbeing, supporting creditor recovery and more efficient debt resolution, and reducing the risk of falling into persistent debt cycles. Their importance is particularly evident given that, as of August 2024, total personal debt in the UK reached £1.86 trillion, with average household debt standing at £65,665 (The Money Charity 2024). These services also contribute to broader societal benefits, including a lower risk of homelessness, reduced desperation-related crime, and stronger family relationships (Europe Economics 2018).
The most suitable valuation method for financial and debt advice services is Social Cost-Benefit Analysis (SCBA), as it captures both economic and social benefits in a structured and quantifiable way. Additionally, the methodology engages relevant stakeholders, such as service users, providers, and policymakers, to identify key benefits, ensuring a more thorough assessment of impact. This is particularly important given that many of the positive outcomes of financial advice services lack direct market prices. For instance, financial advice can lead to better mental and physical health, improved wellbeing and stronger relationships with family and friends (Europe Economics 2018).
SROI is widely recognised as a robust methodology for measuring the value of advice service programmes and interventions in the UK. It was applied to assess the social return of Citizens Advice Direct, a telephone-based service providing debt, employment, welfare and consumer advice in Scotland (Social Value Lab 2014). It was also used to measure the value of StepChange Debt Charity, which helps hundreds of thousands of families in debt every year in the UK (Clifford et al. 2014).
Moreover, Europe Economics (2018) implemented an SROI to measure the economic and social benefits of the Money Advice Service (now known as MoneyHelper). This financial debt advice service is designed to enhance public understanding of financial matters and the UK financial system. Relatedly, a report by ERS Research & Consultancy (2018) evaluated the outcomes of MyBnk Money Works, a financial education programme delivered across England and Wales. This initiative aims to improve young people’s financial knowledge, digital skills, and confidence in managing money.
However, while SROI is frequently referenced in the literature, it is often used interchangeably with SCBA. Given this overlap, SCBA remains the preferred method for evaluating financial advice services, as it captures the same range of benefits while aligning with UK Government Green Book guidelines and ensuring comparability with the existing literature.
Other valuation methods are less suitable for measuring the net benefits generated by financial advice services for several reasons. Wellbeing valuation, for instance, concentrates solely on life satisfaction and lacks a detailed breakdown of both financial and social impacts. Contingent valuation, meanwhile, measures how much individuals are willing to pay for a service, which is not relevant for free financial advice services aimed at vulnerable populations who may struggle to assign a financial value to this support.
The key direct costs include service delivery expenses, such as staff salaries, training, and operational overheads, which can be sourced from organisational financial statements. Operational costs, covering IT systems, outreach, and marketing, are typically available through budget reports. Additionally, indirect costs arise from the administrative burden on creditors and financial institutions engaging with advised individuals. Research suggests that debt collection agencies charge around 15% of recovered amounts as fees, which creditors might save if effective financial advice leads to better-managed repayments (Europe Economics 2018).
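As a brief worked example of the avoided-fee saving described above, the sketch below applies the 15% collection fee cited by Europe Economics (2018) to a hypothetical recovered amount; the repayment figure is an illustrative placeholder, not a value from the study.

```python
# Hypothetical illustration of creditor savings from avoided collection fees.
# The 15% fee rate is taken from Europe Economics (2018); the repayment
# amount below is an illustrative placeholder, not a figure from the study.

collection_fee_rate = 0.15        # typical debt collection agency fee
amount_repaid_directly = 200_000  # debt repaid without agency referral (GBP)

# If advice leads to well-managed repayment without an agency referral,
# creditors avoid paying the collection fee on the recovered amount.
creditor_saving = collection_fee_rate * amount_repaid_directly
print(f"Avoided collection fees: £{creditor_saving:,.0f}")  # £30,000
```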
The benefits of financial advice services extend beyond direct financial improvements, positively impacting individual wellbeing, economic participation, and demand for public services. One significant benefit is the reduction in financial distress, which can be measured through self-reported wellbeing surveys; the source for this is the Office for National Statistics (ONS) and other relevant organisations, such as StepChange Debt Charity. Financial advice also helps individuals participate more effectively in the economy by reducing loan defaults and improving credit scores, with supporting data available from anonymised financial records and national debt statistics. Another key benefit is a reduced reliance on welfare benefits and NHS mental health services, as financial stability lessens the pressure on public resources; the Department for Work and Pensions (DWP) and the National Audit Office (NAO) provide insights into these impacts. Additionally, financial advice contributes to improved employment outcomes by reducing absenteeism and presenteeism, as shown in reports from the Institute for Fiscal Studies (IFS) and the Chartered Institute of Personnel and Development (CIPD).
The estimated time required for this analysis is 40–50 days. The most resource-intensive task will be stakeholder engagement and data collection, which is expected to take 15–20 days. This process involves coordinating with financial advice organisations, conducting interviews, and distributing surveys to service users. Financial data compilation and cost estimation will require approximately 10 days, as it involves gathering expenditure data from service providers and public financial reports. The SROI calculations and sensitivity analysis, including the valuation of social outcomes and adjustments for additionality and deadweight, are estimated to take 10 days. Finally, report writing and validation with stakeholders will take an additional 5–10 days to ensure that findings are accurately interpreted and align with policy and funding priorities.
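To make the calculation step concrete, the sketch below shows how deadweight, displacement and attribution adjustments might feed into an SROI ratio. All outcome values, adjustment rates and the investment figure are hypothetical placeholders, not values drawn from the studies cited here.

```python
# Illustrative SROI calculation with deadweight, displacement and
# attribution adjustments. All figures are hypothetical placeholders.

def net_outcome_value(gross_value, deadweight, displacement, attribution):
    """Adjust a gross monetised outcome for deadweight (would have happened
    anyway), displacement (benefit shifted elsewhere) and attribution
    (share of the change not caused by the service)."""
    return gross_value * (1 - deadweight) * (1 - displacement) * (1 - attribution)

# Hypothetical monetised outcomes for a financial advice service (GBP):
# (gross value, deadweight, displacement, attribution)
outcomes = [
    (120_000, 0.30, 0.05, 0.10),  # reduced reliance on welfare benefits
    (80_000,  0.25, 0.00, 0.15),  # improved wellbeing (proxy value)
    (50_000,  0.40, 0.10, 0.20),  # avoided NHS mental health costs
]

total_benefits = sum(net_outcome_value(*o) for o in outcomes)
investment = 90_000  # hypothetical total programme cost

sroi_ratio = total_benefits / investment
print(f"Net benefits: £{total_benefits:,.0f}; SROI ratio: {sroi_ratio:.2f}")
```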
Works cited
Ajzen, Icek. 1996. ‘The Social Psychology of Decision Making’. In Social Psychology: Handbook of Basic Principles, 297–325. New York, NY, US: The Guilford Press.
Ajzen, Icek, Thomas C. Brown, and Franklin Carvajal. 2004. ‘Explaining the Discrepancy Between Intentions and Actions: The Case of Hypothetical Bias in Contingent Valuation’. Personality and Social Psychology Bulletin. https://doi.org/10.1177/0146167204264079
Arvidson, Malin, Fraser Battye, and David Salisbury. 2014. ‘The Social Return on Investment in Community Befriending’. International Journal of Public Sector Management 27 (3): 225–40. https://doi.org/10.1108/IJPSM-03-2013-0045.
Arvidson, Malin, Fergus Lyon, Stephen McKay, and Domenico Moro. 2013. ‘Valuing the Social? The Nature and Controversies of Measuring Social Return on Investment (SROI)’. Voluntary Sector Review 4 (1): 3–18. https://doi.org/10.1332/204080513X661554.
Baker, Colin, Paul Courtney, Katarina Kubinakova, Liz Ellis, Elizabeth Loughren, and Diane Crone. 2017. ‘Gloucestershire Active Together Evaluation’. https://eprints.glos.ac.uk/4321/3/Gloucestershire%20Active%20Together%20Evaluation%20Final%20Report.pdf.
Baraki, Beti, and Manuela Lupton-Paez. 2021. ‘Refuge Social Return on Investment: Updated Model Findings’. https://refuge.org.uk/wp-content/uploads/2022/05/NEF-Consulting-Refuge-updated-SROI-Model-Report-FINAL_16.03.21.pdf
Bertotti, Marcello, and Oiatillo Temirov. 2020. ‘Outcome and Economic Evaluation of City and Hackney Social Prescribing Scheme’. https://repository.uel.ac.uk/download/7302de286252b90e6dbf1cd99583036c3519b33a21f72732b488c26bde483d12/759615/Evaluation%20of%20social%20prescribing%20in%20Hackney%20and%20City%20-%20UEL%20final%20sub%20.pdf
Bishop, Richard C., and Kevin J. Boyle. 2019. ‘Reliability and Validity in Nonmarket Valuation’. Environmental and Resource Economics 72 (2): 559–82. https://doi.org/10.1007/s10640-017-0215-7
Bretherton, Joanne, and Nicholas Pleace. 2015. ‘Housing First in England: An Evaluation of Nine Services’. https://eprints.whiterose.ac.uk/83966/1/Housing_First_England_Report_February_2015.pdf
Byrne, Anne T., and David R. Just. 2023. ‘What Is Free Food Worth? A Nonmarket Valuation Approach to Estimating the Welfare Effects of Food Pantry Services’. American Journal of Agricultural Economics 105 (4): 1063–87. https://doi.org/10.1111/ajae.12355
Cabinet Office. 2012. ‘A Guide to Social Return on Investment’. https://socialvalueuk.org/wp-content/uploads/2023/01/The-Guide-to-Social-Return-on-Investment-2015-2.pdf
Cardoso, Frederico, and Zoë McHayle. 2022. ‘The Economic and Social Costs of Mental Ill Health’. https://www.centreformentalhealth.org.uk/wp-content/uploads/2024/03/CentreforMH_TheEconomicSocialCostsofMentalIllHealth.pdf.
Centre for Economics and Business Research (CEBR). 2014. ‘The Inactivity Time Bomb: The Economic Cost of Physical Inactivity in Young People’. https://www.sportsthinktank.com/uploads/the-inactivity-timebomb---streetgames---cebr-report---april-2014---28032014.pdf.
Champ, Patricia A., Kevin J. Boyle, and Thomas C. Brown, eds. 2017. A Primer on Nonmarket Valuation. Vol. 13. The Economics of Non-Market Goods and Resources. Dordrecht: Springer Netherlands. https://doi.org/10.1007/978-94-007-7104-8
Clifford, K. Ward, R. Coram, and C. Ross. 2014. ‘Transforming Lives: A Review of the Social Impact of Debt Advice for UK Individuals and Families, Evaluated Using SROI’. https://www.stepchange.org/Portals/0/documents/media/reports/Transforming_lives.pdf
Cnaan, Ram A. 2009. ‘Valuing the Contribution of Urban Religious Congregations’. Public Management Review 11 (5): 641–62. https://doi.org/10.1080/14719030902798305
Corry, Dan. 2018. ‘The Problem with Wellbeing Valuation’. What Works Wellbeing (blog). 2018. https://whatworkswellbeing.org/blog/wellbeing-should-we-really-be-using-it-to-monetise-non-market-activities/
Courtney, Paul. 2014. ‘The Local Food Programme: A Social Return on Investment Approach’. https://eprints.glos.ac.uk/2637/1/The%20Local%20Food%20programme%20-%20A%20Social%20Return%20on%20Investment%20Approach.pdf
Courtney, Paul. 2018. ‘Conceptualising Social Value for the Third Sector and Developing Methods for Its Assessment’. VOLUNTAS: International Journal of Voluntary and Nonprofit Organizations 29 (3): 541–57. https://doi.org/10.1007/s11266-017-9908-3
Crisp, Richard, and Chris Dayson. 2011. ‘Delivering Outcomes for the Local Voluntary and Community Sector: An Evaluation of the Value for Money of MVDA’s Work’. https://shura.shu.ac.uk/26967/1/delivering-outcomes-local-vcs.pdf
Davies, Larissa, Elizabeth Christy, Girish Ramchandani, and Peter Taylor. 2018. ‘Social Return on Investment of Sport and Physical Activity in England’. https://sportengland-production-files.s3.eu-west-2.amazonaws.com/s3fs-public/2020-09/Social%20return%20on%20investment.pdf?5BgvLn09jwpTesBJ4BXhVfRhV4TYgm9E
Davies, Larissa E., Peter Taylor, Girish Ramchandani, and Elizabeth Christy. 2021. ‘Measuring the Social Return on Investment of Community Sport and Leisure Facilities’. Managing Sport and Leisure 26 (1–2): 93–115. https://doi.org/10.1080/23750472.2020.1794938
Day, Brett, and Greg Smith. 2018. ‘Outdoor Recreation Valuation (ORVal) User Guide’. Land, Environment, Economics and Policy (LEEP) Institute, Business School, University of Exeter. https://www.leep.exeter.ac.uk/orval/pdf-reports/ORVal2_User_Guide.pdf
Department for Education. 2021. ‘Measuring the Net Present Value of Further Education in England 2018 to 2019’. GOV.UK. 2021. https://www.gov.uk/government/publications/measuring-the-net-present-value-of-further-education-in-england-2018-to-2019
Department for Environment, Food & Rural Affairs. 2024. ‘United Kingdom Food Security Report 2024: Theme 4: Food Security at Household Level’. GOV.UK. 2024. https://www.gov.uk/government/statistics/united-kingdom-food-security-report-2024/united-kingdom-food-security-report-2024-theme-4-food-security-at-household-level
Department for Work & Pensions. 2022. ‘Cost Benefit Analysis Technical Report’. GOV.UK. 2022. https://www.gov.uk/government/publications/an-evaluation-of-the-group-work-jobs-ii-trial/cost-benefit-analysis-technical-report
Dolan, Paul, and Daniel Fujiwara. 2012. ‘Valuing Adult Learning: Comparing Wellbeing Valuation to Contingent Valuation’. https://assets.publishing.service.gov.uk/media/5a78b76540f0b632476999bc/12-1127-valuing-adult-learning-comparing-wellbeing-to-contingent.pdf
Dolan, Paul, Richard Layard, and Robert Metcalfe. 2011. ‘Measuring Subjective Well-Being for Public Policy’. https://eprints.lse.ac.uk/35420/1/measuring-subjective-wellbeing-for-public-policy.pdf
Dolan, Paul, and Robert Metcalfe. 2008. Comparing Willingness-to-Pay and Subjective Well-Being in the Context of Non-Market Goods. London: Centre for Economic Performance, London School of Economics and Political Science. https://cep.lse.ac.uk/pubs/download/dp0890.pdf
Dolan, Paul, Tessa Peasgood, and Mathew White. 2008. ‘Do We Really Know What Makes Us Happy? A Review of the Economic Literature on the Factors Associated with Subjective Well-Being’. Journal of Economic Psychology 29 (1): 94–122. https://doi.org/10.1016/j.joep.2007.09.001
Dowrick, Anna, Meredith Hawking, and Estela Capelas Barbosa. 2022. ‘The Social Value of Improving the Primary Care Response to Domestic Violence and Abuse: A Mixed Methods Social Return on Investment Analysis of the IRIS Programme’. https://irisi.org/wp-content/uploads/2022/11/The-social-value-of-improving-the-primary-care-response-to-domestic-violence-and-abuse.pdf
Drummond, M.F., M.J. Sculpher, K. Claxton, G.L. Stoddart, and G.W. Torrance. 2015. Methods for the Economic Evaluation of Health Care Programmes. Oxford: Oxford University Press.
Ecorys. 2017. ‘Evaluation of GoodGym’. https://media.nesta.org.uk/documents/good_gym_evaluation.pdf
Eisenberg, Daniel, and David Hutton. 2016. ‘Estimating the Return on Investment for Boys & Girls Clubs’. https://www.flabgc.org/wp-content/uploads/2023/02/2017-University-of-Michigan.pdf
Envoy Partnership. 2014. ‘A Social Return On Investment (SROI) Analysis of Community Champions Tri-Borough Public Health’. https://www.pdt.org.uk/wp-content/uploads/2019/10/FullSROIreportCommunityChampions_No_Appendices_FINAL-1.pdf
ERS Research & Consultancy. 2018. ‘Evaluation of MyBnk Money Works’. https://mybnk.org/wp-content/uploads/2018/09/Money-Works-Evaluation-Full-Report-June-2018.pdf
Europe Economics. 2018. ‘The Economic Impact of Debt Advice: A Report for the Money Advice Service’. https://masassets.blob.core.windows.net/cms/files/000/000/898/original/Economic_Impact_of_Debt_Advice_-_main_report.pdf
European Training Foundation. 2024. ‘CSOs in Human Capital Development and Lifelong Learning – Thematic Paper 2024’. 2024. https://www.etf.europa.eu/en/publications-and-resources/publications/csos-human-capital-development-and-lifelong-learning
Frey, Ulrich J., and Frauke Pirscher. 2019. ‘Distinguishing Protest Responses in Contingent Valuation: A Conceptualization of Motivations and Attitudes behind Them’. PloS One 14 (1): e0209872. https://doi.org/10.1371/journal.pone.0209872
Frew, Emma J, Mobeen Bhatti, Khine Win, Alice Sitch, Anna Lyon, Miranda Pallan, and Peymane Adab. 2014. ‘Cost-Effectiveness of a Community-Based Physical Activity Programme for Adults (Be Active) in the UK: An Economic Analysis within a Natural Experiment’. British Journal of Sports Medicine 48 (3): 207–12. https://doi.org/10.1136/bjsports-2012-091202
Frijters, Paul, and Christian Krekel. 2021. A Handbook for Wellbeing Policy-Making: History, Theory, Measurement, Implementation, and Examples. 1st ed. Oxford: Oxford University Press. https://doi.org/10.1093/oso/9780192896803.001.0001
Frontier Economics and UK Youth. 2022. ‘The Economic Value of Youth Work’. https://www.ukyouth.org/wp-content/uploads/2022/09/Economic-Value-of-Youth-Work-Final-260822-STC-clean75-1.pdf
Fujiwara, Daniel. 2013. ‘A General Method for Valuing Non-Market Goods Using Wellbeing Data: Three-Stage Wellbeing Valuation’. https://eprints.lse.ac.uk/51577/1/dp1233.pdf
Fujiwara, Daniel, Kieran Keohane, Vicky Clayton, and Ulrike Hotopp. 2021. ‘Mental Health and Life Satisfaction: The Relationship between the Warwick Edinburgh Mental Wellbeing Scale and Life Satisfaction - A Pilot Study’. https://hact.org.uk/wp-content/uploads/2021/11/MentalHealth_and_LifeSatisfaction_web.pdf
Fujiwara, Daniel, Kieran Keohane, Vicky Clayton, and Cem Maxwell. 2017. ‘Measuring Social Impact - The Technical Reference Study’. https://asvb-media.s3.amazonaws.com/uploads/2017/08/20170801-ASVB-Technical-Reference-Paper.pdf
Fujiwara, Daniel, and Jim Vine. 2015. ‘The Wellbeing Value of Tackling Homelessness’. https://cdn.clarionhg.com/-/jssmedia/clarion-housing-group/documents/reports/research-news/the-wellbeing-value-of-tackling-homelessness.pdf?rev=6c91d23b879b477f989dc6e313e05132
Gasper, Joseph, Ben Muz, and Dawn Boyer. 2020. ‘Return on Investment Analysis of Industry-Focused Job Training Programs’. https://www.nyc.gov/assets/opportunity/pdf/evidence/training_roi_report_final.pdf
Goodspeed, Tim. 2009. ‘Forecast of Social Return on Investment of Workwise Activities (April 2009 to March 2010)’. https://socialvalueuk.org/wp-content/uploads/2023/05/SROI-Report-Workwise-Oct-09.pdf
Goodspeed, Tim. 2014. ‘Value of Substance: A Social Return on Investment Evaluation of Turning Point’s Substance Misuse Services in Wakefield’. https://socialvalueuk.org/wp-content/uploads/2016/03/Value%20of%20Substance%20FINAL.pdf
Granger, Rachel, Ned Hartfiel, Victory Ezeofor, Katherine Abba, Rhiannon Corcoran, Rachel Anderson de Cuevas, Ben Barr, et al. 2024. ‘Social Return on Investment (Sroi) Evaluation of Citizen’s Advice on Prescription: A Whole Systems Approach to Mitigate Poverty and Improve Wellbeing’. https://www.preprints.org/manuscript/202410.2454/v1
Green, Donald, Karen E. Jacowitz, Daniel Kahneman, and Daniel McFadden. 1998. ‘Referendum Contingent Valuation, Anchoring, and Willingness to Pay for Public Goods’. Resource and Energy Economics 20 (2): 85–116. https://doi.org/10.1016/S0928-7655(97)00031-6
Hartfiel, Ned, Heli Gittins, Val Morrison, Sophie Wynne-Jones, Norman Dandy, and Rhiannon Tudor Edwards. 2023. ‘Social Return on Investment of Nature-Based Activities for Adults with Mental Wellbeing Challenges’. International Journal of Environmental Research and Public Health 20. https://doi.org/10.3390/ijerph20156500
Hatch Regeneris. 2020. ‘We Mind The Gap: Social Return on Investment (SROI) Assessment’. https://www.tnlcommunityfund.org.uk/media/insights/documents/Social-Return-on-Inventment-WMTG3-WeMindTheGap1.pdf?mtime=20220707135222&focal=none
HM Treasury. 2021. ‘Wellbeing Guidance for Appraisal – Supplementary Green Book Guidance’. https://assets.publishing.service.gov.uk/media/60fa9169d3bf7f0448719daf/Wellbeing_guidance_for_appraisal_-_supplementary_Green_Book_guidance.pdf
HM Treasury. 2022. The Green Book: Central Government Guidance on Appraisal and Evaluation. https://www.gov.uk/government/publications/the-green-book-appraisal-and-evaluation-in-central-government/the-green-book-2020
Indecon. 2012. ‘Assessment of the Economic Value of Youth Work’. https://www.youth.ie/wp-content/uploads/2019/01/Economic-Beneifit-Youthwork-2012.pdf
Johnsen, Sara, Janice Blenkinsopp, and Matt Rayment. 2023. ‘Housing First Pathfinder Scotland: Final Evaluation Report’. https://researchportal.hw.ac.uk/files/65371618/PathfinderEvaluation_FinalReport_Full.pdf
Knapp, Martin, David McDaid, and Michael Parsonage. 2011. ‘Mental Health Promotion and Mental Illness Prevention: The Economic Case’. https://assets.publishing.service.gov.uk/media/5a7bad75ed915d1311060cb2/dh_126386.pdf
Koning, Radboud, Enno Gerdes, Peter van Eldert, and Paul Hover. 2022. ‘SROI Sport and Physical Activity 2022’. https://www.kennisbanksportenbewegen.nl/?file=10836&m=1655207694&action=file.download
Lawlor, Eilis, Niamh Bowen, and James Richardson. 2019. ‘A Social Return on Investment Analysis for Tinder Foundation’. https://www.justeconomics.co.uk/uploads/reports/Just-Economics-Good-Things-Foundation-SROI.pdf
Lawlor, Eilis, and Eva Neitzert. 2021. ‘Healthy Clubs, Healthy Bodies, Healthy Minds. Measuring the Impact of the Irish Life GAA Healthy Clubs Programme.’ https://www.gaa.ie/api/pdfs/image/upload/jw0wrlcisvlt9kft3kcc.pdf
Lawton, Ricky, Daniel Fujiwara, Susana Mourato, Hasan Bakhshi, Augustin Lagarde, and John Davies. 2018. ‘The Economic Value of Heritage: A Benefit Transfer Study’. Nesta and Simetrica. https://pec.ac.uk/wp-content/uploads/2024/07/Cathedrals_and_Historic_Cities_report_Nesta_and_Simetrica_021018.pdf
Local Government Association. 2023. ‘Employment and Skills Provision Survey 2022/23’. 1 March 2023. https://www.local.gov.uk/publications/employment-and-skills-provision-survey-202223
Loubière, Sandrine, Owen Taylor, Aurelie Tinland, Maria Vargas-Moniz, Branagh O’Shaughnessy, Anna Bokszczanin, Hakan Kallmen, et al. 2020. ‘Europeans’ Willingness to Pay for Ending Homelessness: A Contingent Valuation Study’. Social Science & Medicine 247 (February):112802. https://doi.org/10.1016/j.socscimed.2020.112802.
McDaid, David, A-La Park, and Martin Knapp. 2017. ‘Commissioning Cost-Effective Services for Promotion of Mental Health and Wellbeing and Prevention of Mental Ill-Health’. https://www.lse.ac.uk/business/consulting/assets/documents/commissioning-cost-effective-services-for-promotion-of-mental-health-and-wellbeing-and-prevention-of-mental-ill-health.pdf
McManus, Sally, Paul Bebbington, Rachel Jenkins, and Traolach Brugha, eds. 2016. ‘Mental Health and Wellbeing in England: Adult Psychiatric Morbidity Survey 2014’. Leeds: NHS Digital. https://assets.publishing.service.gov.uk/media/5a802e2fe5274a2e8ab4ea71/apms-2014-full-rpt.pdf
Ministry of Housing, Communities and Local Government. 2024. ‘Housing First Pilots: Cost Benefit Analysis’. https://assets.publishing.service.gov.uk/media/671a6fea603993b7a8f75db5/Housing_First_Cost_Benefit_Analysis_Report.pdf
New Economics Foundation. 2013. ‘Economics in Policy-Making: Social CBA and SROI’. https://www.nefconsulting.com/wp-content/uploads/2014/10/Briefing-on-SROI-and-CBA.pdf
New Economics Foundation. 2017. ‘The Cost of Loneliness to UK Employers’. https://neweconomics.org/uploads/files/NEF_COST-OF-LONELINESS_DIGITAL-Final.pdf
Nguyen, Tan Minh, Robert Witton, Lyndsey Withers, and Martha Paisi. 2024. ‘Economic Evaluation of a Community Dental Care Model for People Experiencing Homelessness’. British Dental Journal, December. https://doi.org/10.1038/s41415-024-8166-1
NHS Scotland. 1998. ‘A Guide to Quality Adjusted Life Years (QALYs)’. https://scottishmedicines.org.uk/media/2839/guide-to-qalys.pdf
Nolden, Colin, Daniela Rossade, and Peter Thomas. 2021. ‘Capturing the Value of Community Fuel Poverty Alleviation’. https://www.bristol.ac.uk/media-library/sites/law/research/Nolden%20et%20al.%20BLRP%20No.%202%202021.pdf
Oliver, Rhys, Barnaby Alexander, Stephen Roe, and Miriam Wlasny. 2019. ‘The Economic and Social Costs of Domestic Abuse’. https://assets.publishing.service.gov.uk/media/5f637b8f8fa8f5106d15642a/horr107.pdf
Orlowski, Johannes, and Pamela Wicker. 2018. ‘Putting a Price Tag on Healthy Behavior: The Monetary Value of Sports Participation to Individuals’. Applied Research in Quality of Life 13 (2): 479–99. https://doi.org/10.1007/s11482-017-9536-5
Oxera. 2013. ‘Impact of Centrepoint’s Intervention for Homeless Young People: A Cost–Benefit Analysis’. https://www.oxera.com/wp-content/uploads/2018/07/Impact-of-Centrepoint-intervention.pdf.pdf
Oxford Economics. 2024. ‘The British Gas Energy Trust: Alleviating the Impact of Fuel Poverty’. https://britishgasenergytrust.org.uk/wp-content/uploads/2024/05/British-Gas-Energy-Trust-Alleviating-the-Impact-of-Fuel-Poverty.pdf
Pearce, David, and Ece Özdemiroglu. 2002. ‘Economic Valuation with Stated Preference Techniques’, March. https://assets.publishing.service.gov.uk/media/5a750ff740f0b6397f35d5f5/Economic_valuation_with_stated_preference_techniques.pdf
Peytrignet, Sebastien, Simon Garforth-Bles, Kieran Keohane, and Simetrica Jacobs. 2020. ‘Loneliness Monetisation Report’. https://assets.publishing.service.gov.uk/media/602fcb91d3bf7f72154fabc3/Loneliness_monetisation_report_V2.pdf
Pleace, Nicholas, and Dennis P. Culhane. 2016. ‘Better than Cure? Testing the Case for Enhancing Prevention of Single Homelessness in England’. https://www.crisis.org.uk/media/20680/crisis_better_than_cure_2016.pdf
PwC. 2018. ‘Assessing the Costs and Benefits of Crisis’ Plan to End Homelessness’. https://www.crisis.org.uk/media/238957/assessing_the_costs_and_benefits_of_crisis-_plan_to_end_homelessness_2018.pdf
Prieto, Luis, and José A Sacristán. 2003. ‘Problems and Solutions in Calculating Quality-Adjusted Life Years (QALYs)’. Health and Quality of Life Outcomes 1 (December):80. https://doi.org/10.1186/1477-7525-1-80
Pritchard, David, Peter O’Flynn, James Noble, Meera Craston, and Stella Capuano. 2021. ‘Evaluation of Coronavirus Community Support Fund: Value for Money Report’. https://www.tnlcommunityfund.org.uk/media/insights/documents/CCSF-Impact-Eval_Final_Report.pdf
Rolfe, John, Robert J. Johnston, Randall S. Rosenberger, and Roy Brouwer. 2015. ‘Introduction: Benefit Transfer of Environmental and Resource Values’. In Benefit Transfer of Environmental and Resource Values: A Guide for Researchers and Practitioners, edited by Robert J. Johnston, John Rolfe, Randall S. Rosenberger, and Roy Brouwer, 3–17. Dordrecht: Springer Netherlands. https://doi.org/10.1007/978-94-017-9930-0_1
Rogers, Michaela, Mark Wilding, and Annie Wood. 2018. ‘An Evaluation of the Change Up Programme’. https://salford-repository.worktribe.com/OutputFile/1490415
Rosenberger, Randall S., and Tom D. Stanley. 2006. ‘Measurement, Generalization, and Publication: Sources of Error in Benefit Transfers and Their Management’. Ecological Economics, Environmental Benefits Transfer: Methods, Applications and New Directions, 60 (2): 372–78. https://doi.org/10.1016/j.ecolecon.2006.03.018
Roussos, Konstantinos, Julius Schneider, Rebecca Warren, and Jayne Jennett. 2024. ‘The Bringing Community Supermarkets to Essex Programme: A “Social Return on Investment” Approach on Sustainable Community Assets for Social Support and Care’. https://repository.essex.ac.uk/39363/1/CSM%20Final%20Report.pdf
Royal Institution of Chartered Surveyors. 2020. ‘Measuring Social Value in Infrastructure Projects: Insights from the Public Sector’. https://www.rics.org/content/dam/ricsglobal/documents/to-be-sorted/measuring-social-value_1st-edition.pdf
Saraev, Vadim, Liz O’Brien, Gregory Valatin, and Matthew Bursnell. 2021. ‘Valuing the Mental Health Benefits of Woodlands’. https://www.forestresearch.gov.uk/publications/valuing-the-mental-health-benefits-of-woodlands/
Sayman, Serdar, and Ayşe Öncüler. 2005. ‘Effects of Study Design Characteristics on the WTA–WTP Disparity: A Meta Analytical Framework’. Journal of Economic Psychology 26 (2): 289–312. https://doi.org/10.1016/j.joep.2004.07.002
Social Value Lab. 2014. ‘Social Return on Investment of Citizens Advice Direct’. https://socialvalueuk.org/wp-content/uploads/2023/05/citizens-advice-direct-sroi-report.pdf
Standing Together and Solace Women’s Aid. 2022. ‘Westminster VAWG Housing First Service Second Year Evaluation’. https://housingfirsteurope.eu/wp-content/uploads/2023/05/Year2Evaluation_Westminster_VAWG_HousingFirst.pdf
The Money Charity. 2024. ‘The Money Charity’s Money Statistics Report October 2024’. The Money Charity. 2024. https://themoneycharity.org.uk/money-statistics/october-2024/
The Sainsbury Centre for Mental Health. 2007. ‘Mental Health at Work: Developing the Business Case’. http://www.mentalhealthpromotion.net/resources/mental_health_at_work_developing_the_business_case.pdf
Turner, Hugo C., Rachel A. Archer, Laura E. Downey, Wanrudee Isaranuwatchai, Kalipso Chalkidou, Mark Jit, and Yot Teerawattananon. 2021. ‘An Introduction to the Main Types of Economic Evaluations Used for Informing Priority Setting and Resource Allocation in Healthcare: Key Features, Uses, and Limitations’. Frontiers in Public Health 9 (August):722927. https://doi.org/10.3389/fpubh.2021.722927
Walk, Marlene, Itay Greenspan, Honey Crossley, and Femida Handy. 2015. ‘Social Return on Investment Analysis: A Case Study of a Job and Skills Training Program Offered by a Social Enterprise’. Nonprofit Management and Leadership 26 (2): 129–44. https://doi.org/10.1002/nml.21190
YouthLink Scotland. 2016. ‘Social and Economic Value of Youth Work in Scotland: Initial Assessment Report’. https://socialvalueuk.org/wp-content/uploads/2023/05/YouthLink-Scotland-final-report.pdf
Appendix 1: REA Protocol
Search strategy
For the REA, the research team conducted two separate searches for each database, presented in Tables A1 and A2. This was due to the very large number of academic studies discussing QALYs or WELLBYs (particularly in the field of “Medicine and public health”).
Table A1: List of keywords for all methodologies except for those related to adjusted life years
| Keyword group | Search terms |
|---|---|
| Keyword 1 – Civil society organisations | Social work, social service*, civil society, charity, social enterprise, VCS*, VCFS, third sector, non-profit (nonprofit, non profit) |
| Keyword 2 – Valuation methods | contingent valuation, wellbeing valuation, hedonic pricing, travel cost method, choice modelling |
Table A2: List of keywords for methodologies related to adjusted life years
| Keyword group | Search terms |
|---|---|
| Keyword 1 – Civil society organisations | Social work, social service*, civil society, charity, social enterprise, VCS*, VCFS, third sector, non-profit (nonprofit, non profit) |
| Keyword 2 – Valuation methods | QALY*, WELLBY* |
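To illustrate how the keyword groups above might be combined into a database query, the sketch below joins terms with OR within a group and AND between groups. The exact operators and syntax used in the original searches are an assumption, reflecting standard systematic-search practice rather than a documented detail of this REA.

```python
# Sketch of a boolean search string built from the keyword tables above.
# OR within each keyword group, AND between groups (assumed convention).

civil_society_terms = [
    "social work", "social service*", "civil society", "charity",
    "social enterprise", "VCS*", "VCFS", "third sector", "non-profit",
]
life_year_terms = ["QALY*", "WELLBY*"]  # Table A2 methodology terms

def boolean_query(*groups):
    """Join each group's terms with OR, then combine the groups with AND."""
    return " AND ".join(
        "(" + " OR ".join(f'"{term}"' for term in group) + ")"
        for group in groups
    )

print(boolean_query(civil_society_terms, life_year_terms))
```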
Inclusion and exclusion criteria
The research team also applied the inclusion and exclusion criteria set out in Table A3 to decide whether the studies identified were suitable for answering the core research questions of the project.
Table A3: Inclusion and exclusion criteria

| Theme | Inclusion Criteria | Exclusion Criteria |
|---|---|---|
| Country of study | UK, US, Canada, New Zealand, Australia, comparable EU countries (if English-language research is available) | Non-comparable jurisdictions, e.g. in Africa, Asia and Latin America |
| Focus of study/subjects covered | Economics, Econometrics and Finance, Business, Social Sciences and Humanities, Public Health | N/A |
| Date of research | 2000 to 2025 (Table A1); 2019 to 2025 (Table A2) | Studies outside of these date ranges (except for key studies identified through snowballing) |
| Language | English | Any other language |
| Type of studies | Peer-reviewed journal articles, non-peer-reviewed academic outputs (e.g. working studies), government-commissioned research, publications by research organisations (including consultancies), evidence by service providers, and government publications | News articles and editorials/opinion pieces, magazine articles, theses and dissertations, book chapters, and books or other works of equivalent length |
Appendix 2: Quality Assessment
Overview
The research team adopted four different established frameworks to assess the quality of each study in the REA: Krlev et al. (2013) for SROI analyses, HMT Green Book for cost-benefit analyses, Carson (2000) for contingent valuation and the Nesta standards of evidence for all other studies (including wellbeing valuation and travel cost). Each study was assigned a score of 0 or 1 for each question in the relevant quality framework, with the total score used to determine the overall quality of the study.
- Studies scoring above 70% were classified as high-quality.
- Studies scoring between 40% and 70% were classified as medium-quality.
- Studies scoring below 40% were classified as low-quality.
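The banding rule above can be expressed compactly. The sketch below assumes that scores of exactly 40% or 70% fall in the medium band, which the text does not state explicitly.

```python
# Sketch of the quality-scoring rule: each framework question scores 0 or 1,
# and the share of the maximum determines the quality band. Boundary handling
# (exactly 40% or 70%) is an assumption; the report does not specify it.

def quality_band(scores):
    """Classify a study from its per-question 0/1 scores."""
    pct = 100 * sum(scores) / len(scores)
    if pct > 70:
        return "high"
    if pct >= 40:
        return "medium"
    return "low"

# Example: an SROI study meeting 9 of the 12 framework questions (75%).
print(quality_band([1] * 9 + [0] * 3))  # high
```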
The specific questions included in each framework are as follows:
Cost-benefit analysis
- Does the study justify why the particular methodology was chosen?
- Does the study describe the methods for estimating costs/benefits and justify these methods?
- Does the study include a theory of change or logic model to structure the CBA framework?
- Does the cost-benefit analysis include a discussion of opportunity costs?
- Does the cost-benefit analysis factor in participant drop-off when measuring benefits over time?
- Does the analysis consider the counterfactual, i.e. the Business as Usual scenario?
- Does the study reference the use of discounting if benefits are measured over time?
- Does the study include a discussion of key sources of uncertainty, and how these impact the cost-benefit analysis?
- Does the study include any sensitivity testing for the cost-benefit analysis?
- Does the study try to explain the impact on the cost-benefit analysis estimates if key assumptions change?
SROI
- Does the study explain why SROI was chosen as the methodology?
- Does the study describe the methods for measuring specific outcomes and justify these methods?
- Does the study include an impact map diagram?
- Does the study include a control group for baseline comparison?
- Does the study include any comparison of outcomes before/after the intervention or programme?
- Does the study include a discussion explaining why specific benefits were chosen for the SROI exercise?
- If specific benefits cannot be directly measured, does the study include a discussion explaining the proxy variables chosen to quantify instead?
- Does the study report qualitative estimates of the benefits of the intervention or programme?
- Does the study report quantitative estimates of the benefits of the intervention or programme?
- Does the study explain the limitations of the SROI methodology in capturing the social impact of the intervention/programme?
- Does the study include a discussion about how to interpret the SROI ratio?
- Does the study include any sensitivity analysis for the SROI calculation (for example, by varying assumptions or using a different set of benefit values)?
Contingent valuation
- Does the study justify why the particular methodology was chosen?
- Does the study clearly set out how the methodology was applied?
- Does the study include good documentation around survey design and deployment?
- Does the study include a careful user testing procedure?
- Does the study deploy a well-specified and explained regression to estimate the willingness to pay for the sector/service?
- If any outliers are found, does the study address their treatment?
- Does the study perform sensitivity analysis to test the robustness of the findings to variations in assumptions?
Other methodologies
- Does the study justify why the particular methodology was chosen?
- Does the study consider variables that are valid, accepted and standard in the literature, and comprehensive?
- Does the study discuss the limitations of the method chosen in the context of the particular problem addressed?
- Does the study discuss the significance of the findings given the research context?
- Does the study perform sensitivity analysis to different sets of assumptions?
Appendix 3: PRISMA Diagram
- Several other studies incorporated QALY as part of a broader methodology. However, only one study in the REA, Nguyen et al. (2024), solely used QALY to develop an estimated value. ↩
- While not specifically stated in these three studies, the cost savings estimated all represent savings on government-provided services and can therefore be interpreted as Exchequer savings. ↩
- In Pleace and Culhane (2016), the study is specifically caveated by the authors as being an “exploratory study, based partially on estimation” (pg 1). ↩
- Benefit transfer refers to the use of estimates of nonmarket value from prior research “when new, original research is not feasible given time and benefit constraints” (Champ et al. 2017). ↩
- Outcomes related to economic output included earnings premium (from avoided unemployment, a higher-level GCSE or obtaining a degree), staff turnover, absenteeism and productivity. ↩
- Construct validity refers to the effectiveness of contingent valuation in measuring the economic value of a specific CSO-provided service. Content validity refers to whether contingent valuation captures all dimensions of value that users of CSO-provided services might place on the service (Pearce and Özdemiroglu 2002). ↩
- To a certain extent, the goals of wellbeing valuation align with the goals of cost-benefit analysis and social return on investment in that all three are “welfarist” approaches (Turner et al. 2021). ↩
- One potential approach using revealed preference methodology is using the market prices of services provided by private sector businesses to estimate the value of similar services provided by CSOs (for example, both for-profit and not-for-profit organisations provide financial advice services and job training). However, this approach could not be used to estimate the additional value (if any) derived from the service being provided by civil society. ↩
- These criteria are set out in Bergland et al. (2002) and Boyle et al. (2010). ↩
- More specifically, asking survey respondents their WTP for free food would likely be confusing. In addition, respondents may answer WTP questions based on the liquidity constraints they currently face, underestimating the true value they would place on access to food (Byrne and Just 2023). ↩