Research and analysis

Tampon Tax Fund evaluation design report

Published 10 February 2023

Introduction

This section sets out the background and context to the Tampon Tax Fund evaluation. This includes identifying the key research questions and evaluation objectives, and describing the activities delivered as part of the scoping phase methodology.

About this document

This document presents the findings from the Tampon Tax Fund (TTF) evaluation scoping phase and sets out the proposed approach to delivering an impact and process evaluation for the Fund.

Tampon Tax Fund background and context

The purpose of the Tampon Tax Fund was to allocate the funds generated from the VAT on sanitary products to projects that improve the lives of disadvantaged women and girls across the UK. The Fund was announced by the then Chancellor during the 2015 Autumn Statement. EU law had prohibited Member States from applying a zero rate of VAT to women’s sanitary products. The UK therefore applied the lowest rate it could to these products (5%), and in 2016 the government introduced legislation to enable a zero rate to take effect as soon as legally possible. From 2015/16, the tax revenue from sanitary products has been awarded through fair and open competition to not-for-profit organisations supporting women and girls across the UK.

Since its inception and initial roll out in 2015/2016, the TTF has provided six rounds of funding for not-for-profit organisations in the UK to deliver projects focusing on the challenges faced by women and girls, particularly those who are vulnerable and/or from minority groups. Each annual funding round targeted projects focused on violence against women and girls (VAWG) and other specific annual themes, such as music, mental health and wellbeing, young women’s mental health, rough sleeping and homelessness. The Fund also included a ‘general category’ open to projects that improve the lives of vulnerable, disadvantaged or underrepresented women and girls.

The 2021 to 2022 funding year is the sixth and final round of the TTF. Between 2015 and 2023, the TTF will have provided up to £86.25 million through a total of 137 grants. [footnote 1]

TTF evaluation

In February 2022, the Department for Culture, Media & Sport (DCMS) commissioned Kantar Public to conduct an evaluation of the TTF. The evaluation began with an initial scoping phase to assess the feasibility of conducting a robust impact evaluation of the current TTF round and, if viable, to scope out a potential evaluation design. Should such an evaluation be deemed feasible, a full evaluation would then be undertaken in line with this design.

This report is the product of the initial scoping phase, the composition of which is described below.

Evaluation scoping phase

The scoping phase of this project was designed to explore if and how an impact and process evaluation could be conducted for the TTF, in line with the following government priorities and evaluation questions.

  • Experiences of delivery: How was TTF delivered and experienced (by stakeholders, grant-holders and end-users)?

  • Which organisations received TTF money; how was it used; and whom did it reach/whom did it not reach and why?

  • How far was it delivered as intended; what delivery changes, including improvements, were made, and why? Possible changes could include the development of further criteria, a dropped application threshold, and distribution to devolved administrations.

  • What effect did the external contextual factors over time have on delivery?

  • Overall, what worked well and less well?

  • Unintended consequences: What, if any, were the unintended consequences of the programme?

  • Impact: What impact (actual/perceived depending on type of evaluation) did TTF funding have on grantees and end users/beneficiaries?

The objectives of the evaluation scoping phase were therefore:

1. To understand the context, design and delivery mechanisms of the TTF.

2. To develop a Logic Model for the TTF.

3. To assess the feasibility and explore options for an impact evaluation of the current TTF programme.

4. To understand key contexts and considerations for the evaluation of the TTF.

5. To design a process and impact evaluation for the TTF.

Scoping methodology

The evaluation scoping phase comprised four distinct stages, each with multiple components. These stages and activities are summarised below. The impact evaluation feasibility assessment is a key element of the first three stages and happens alongside the other activities outlined.

1. Information gathering

  • bid assessment & evidence audit

  • stakeholder interviews (20 to 25) with Central Government stakeholders, external stakeholders and grantees

  • information analysis

2. Logic model

  • logic model design

  • logic model design workshop

  • final logic model amends

3. Research design

  • analysis & mapping of design inputs

  • feasibility assessment & final decision

  • impact evaluation design

  • process evaluation design

4. Design reporting

  • design workshop

  • analysis and design update

  • evaluation design report

The first stage of scoping activity was the information gathering stage, used to develop a deeper understanding of the TTF’s origins, design and project portfolio. This stage included the following key activities designed to answer each of the stated objectives.

A review of successful TTF bid application forms

This activity comprised the review of 33 successful TTF bids, including all 14 funded projects from the 2021 to 2022 cohort and a sample of 19 successful bids from between 2015 and 2020. The review focused on the current 14 projects (to inform the impact evaluation feasibility assessment) with the sample of past projects included to sense-check consistency and develop a more complete picture of overall Fund activity.

The aims and objectives of the bid review were:

  • to understand the scope of the TTF and its funded projects; and

  • to inform the development of the Logic Model and to identify potential methods for impact evaluation.

An initial rapid review of the current 14 projects was used to develop a bid analysis framework: a bespoke tool designed to capture administrative information about the following (an illustrative sketch of such a record is shown after this list):

  • the bids (location, funded amount, key contacts, details of additional funding, TTF funding stream, delivery timeline, and a brief description of activities)

  • the proposed interventions (including budget, intended audience/beneficiaries, mode of delivery, whether the intervention is new or existing, and expected numbers)

  • intended outcomes

  • plans for monitoring and evaluation
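To illustrate the kind of structured record such a framework might capture, a minimal sketch in Python is shown below. Every field name here is hypothetical and for illustration only; the actual framework is a bespoke Kantar Public tool whose exact fields are not published.

```python
# Illustrative sketch of one record in a bid analysis framework.
# All field names are hypothetical assumptions, not the real tool.
from dataclasses import dataclass, field


@dataclass
class BidRecord:
    # Bid administrative details
    organisation: str
    location: str
    funded_amount_gbp: int
    funding_round: str             # e.g. "2021-2022"
    ttf_funding_stream: str        # e.g. "VAWG", "general"
    additional_funding: str        # details of any other/match funding
    delivery_timeline: str
    activity_summary: str
    # Proposed intervention details
    budget_gbp: int
    intended_beneficiaries: str
    mode_of_delivery: str
    is_new_intervention: bool      # new vs existing intervention
    expected_reach: int            # expected participant numbers
    # Outcomes and evaluation plans
    intended_outcomes: list[str] = field(default_factory=list)
    monitoring_and_evaluation_plan: str = ""
```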

Once the framework was created and tested, a full review of the 33 bids took place, with the bids being distributed as follows: 14 bids from 2021-2022; 5 bids from 2020-2021; 4 bids from 2019-2020; 5 bids from 2018-2019; and 5 bids from 2017-2018.

The bid review and analysis provided a clear overview of the broad composition of TTF-funded projects to support subsequent design activities. Kantar Public recommend that the remaining projects be incorporated into this review during the mainstage evaluation, to provide a complete picture of the TTF portfolio.

In-depth interviews with key stakeholders

As part of the TTF scoping phase, Kantar Public conducted a total of 24 interviews with key stakeholders from three core groups: Central Government stakeholders, external stakeholders, and grantees. Details of the stakeholder groups, interview objectives and participant numbers are outlined in Table 1. The number of interviews delivered is in line with the original plan, with the exception of the external stakeholder group. Here the initial intention was to deliver 3 to 5 interviews; however, this was reviewed because of the significant overlap with grantee stakeholders, which left few qualifying organisations. Interviews with past Fund recipients were therefore prioritised instead.

Table 1. Overview of key stakeholder groups

Central Government stakeholders

  • Group description: internal government department team members who have been involved in the design, set-up and/or delivery of the Fund.
  • Interview objective: to provide insight into the Fund’s background, aims, types of projects, operations (such as the bidding process and award criteria), and evaluation needs/expectations.
  • Number of interviews: 4

External stakeholders

  • Group description: staff members from organisations external to government, working in sectors relevant to the Fund.
  • Interview objective: to develop a deeper understanding of local project nuance and operations, and to discuss the practicalities of evaluation (especially impact evaluation feasibility and access to data for this purpose).
  • Number of interviews: 2

Grantees

  • Group description: the full cohort of organisations accessing the TTF in the current 2021/2022 funding round, as well as a sample of organisations that accessed the Fund in previous funding rounds.
  • Interview objective: to provide wider context on the Fund and projects, stakeholders’ understanding of the TTF (aims, activities, outcomes) and their evaluation needs.
  • Number of interviews: 14 current and 4 previous Fund recipients

Total: 24 interviews

The overarching aim of the interviews was to explore the contextual factors underpinning the Fund and the bid application processes, and to develop a greater understanding of the interventions being delivered via the Fund.

Findings from the information gathering stage were subsequently used to develop the TTF Logic Model and design an appropriate evaluation plan.

Logic Model development, including a development workshop (Objective 2)

Kantar Public used the insights gathered on the TTF’s intended logic and outcomes, alongside the actual activities and intended outcomes of funded projects, to develop a Logic Model (LM). Once the model was created, a workshop was held with stakeholders to work through each element of the LM and agree the different design options and the preferred design approach.

An impact feasibility assessment

The feasibility assessment was conducted, in parallel with other elements of the scoping phase, by Kantar Public’s methods specialists. This included reviewing the bid analysis and stakeholder interview findings to understand the Fund and determine potential approaches to impact evaluation. This was supplemented by a review of publicly available data about the charity sector and further exploratory discussions with key stakeholders to understand the data available, as well as more detailed conversations with current projects to understand their evaluation plans and the potential for robust impact assessment.

Design options were discussed iteratively with DCMS to understand which of the many potential approaches to impact evaluation would be preferable. This enabled the Kantar Public team to refine a discrete set of options as the scoping phase progressed. The findings from the feasibility assessment directly informed the design of the evaluation.

Evaluation design and design workshop

Based on the activities above, Kantar Public has developed a recommended approach to the process and impact evaluation of the TTF. This was followed by a design workshop, where final adjustments were made to the proposed design based on feedback. This design report sets out in full the findings from the scoping phase and the recommended approach to the TTF evaluation.

Stakeholder engagement

This section describes the findings from the interviews conducted with key stakeholders. Key stakeholders include individuals involved in TTF design and roll out (Central Government stakeholders), others working in sectors relevant to the Fund (external stakeholders) and those who accessed the Fund (grantees). The aim of the interviews was to get a better understanding of the TTF aims and structure, as well as evaluation priorities, considerations and challenges. Only the findings relating to evaluation priorities are presented in this report; the remaining feedback about the Fund design will be included in the final report only.

Evaluation priorities

Key stakeholder interviews also sought to understand what stakeholders and grantees wanted from a TTF evaluation. Stakeholders agreed that the evaluation should focus on 3 key topics:

1. Understanding what was funded.

2. Capturing the outcomes and impact of the Fund.

3. Identifying key learnings and best practice.

Understanding who and what was funded

Building on this stated objective of the evaluation, stakeholders (including those within government) wanted a better understanding of the types of organisations funded (for example, their size, or whether they are generalist or specialist organisations) and the projects funded (for example, their issues of focus). Some grantee stakeholders specifically wanted to know how much funding was given to specialist women’s and girls’ organisations compared to non-specialist organisations.

Capturing the outcomes and impact of the Fund

Capturing the outcomes and impact will be important, both to celebrate what has been achieved and to show the value of, and need for, continued investment in the women’s and girls’ sector. Stakeholders emphasised that this needs to go beyond individual and organisational outcomes to also capture sector- and policy-level outcomes. Some grantees also emphasised that women’s and girls’ voices should be reflected in the evaluation.

Identifying key learnings and best practice

Stakeholders wanted the evaluation to highlight the key learnings and best practices for project delivery and the grant making process. The evaluation was described as having the potential to be a learning tool for the sector, as well as government departments involved in grant giving. In terms of project delivery, this should include insights into how different projects were delivered, what worked well and what could be improved, as well as how outcomes were measured.

Furthermore, stakeholders wanted to understand what worked well and what could be improved in terms of the Fund structure and management approach. These learnings would provide insight into how to deliver ‘good’ grant giving in the women’s sector and what a ‘good’ fund looks like. Specific topics of interest were:

Resourcing: Capturing how much internal resource (for example, time spent reviewing bids) went into managing the process, to inform future fund resourcing decisions.

Matched funding: Understanding whether a matched funding model is effective, as well as the barriers and facilitators to grantees’ ability to secure matched funding.

Project sustainability: Understanding whether the Fund structure contributes to project sustainability after funding is finished.

Stakeholders also emphasised that knowledge dissemination activities would be important to ensure that the evaluation findings are shared and applied in the sector and government.

Evaluation considerations and challenges

While understanding the outcomes and impact is a key priority, stakeholders acknowledged that capturing this will be challenging. The key reasons for this are:

  • Types of projects funded: The funded projects have different themes, delivery models and beneficiaries.

  • Types of topics and participants involved: Some projects provide support to vulnerable groups, such as survivors of domestic abuse. The sensitivity of the topic, including the risk of re-traumatisation, needs to be considered alongside challenges with participant recruitment.

  • Data availability and quality: Data collection processes, procedures and capacity vary across organisations. Specifically, smaller organisations are more likely to have limited capacity for data collection. [footnote 3] This inconsistency and variation in data collection practices would affect whether outcomes can be captured accurately and reliably.

  • Timescales: The longer-term outcomes and impact will not be captured, both because of the timescales of the TTF and project delivery and because of the challenges of retrofitting an evaluation onto a fund that has been running for 7 years. This particularly affects organisations implementing new projects, which require more time to become fully operational (as opposed to building on and expanding existing project capacity and quality) and therefore have longer set-up timescales.

Logic Model and narrative

This section presents the Logic Model and accompanying narrative for the Tampon Tax Fund (TTF). This includes an introduction to Logic Model development in the context of evaluation, the specific considerations framing the development of the TTF Logic Model, and a detailed outline of the model developed for the Fund.

Introduction to Logic Models

A Logic Model (LM) illustrates and describes how and why a programme is expected to achieve its specified outcomes and impacts. Key components are mapped out into a chain of results, and the interactions or logic between components are explored.

In the context of evaluation practice, a LM can guide and inform the scope and remit of an evaluation. For example, a LM’s identification of programme activities, deliverables, outcomes, and impacts can provide the foundations of an evaluation framework and plan. For this reason, developing a LM for the TTF was a key deliverable for this scoping phase. The LM has directly informed the design and development of the proposed TTF evaluation.

Key considerations

As noted in previous sections, many aspects of the TTF are unique and add a degree of complexity to the development of a LM and subsequent evaluation. The following key considerations were taken into account when developing this LM:

  • Key to the design and delivery of the TTF has been its broad remit. Interventions that have been funded vary greatly, as do the outcomes and impacts they are seeking to achieve. Fund recipients were encouraged to identify intended outcomes and impacts of their funded projects as part of the bidding process.

As a result, the LM summarises the types of interventions, activities and outcomes being delivered, and the overall Fund-level logic, but it does not reflect the specific logic behind each project.

  • As the TTF was set up in 2015, many of the colleagues involved in the design of the Fund are no longer in post and have not been available to participate in stakeholder interviews or LM development workshops. The time that has passed also means that recollection of Fund set-up decisions and their rationale may be limited.

  • The scoping phase bid review included a sample of past bids from across funding rounds. At this stage, a full review of all Fund recipients’ applications has not been conducted. While this review and discussions with DCMS suggest the model is fairly complete, it will likely need to be updated during the mainstage evaluation to reflect additional elements that emerge from the full bid review.

TTF model overview

The LM has been developed to understand the approach taken by the TTF to meet its core aim(s) and intended impacts. The key components of the LM are as follows:

Introduction: the overarching purpose, remit and underpinning contextual factors that have informed the overall design of the Fund.

Fund delivery models: the models by which the funding has been rolled out and delivered to organisations to reach and benefit end beneficiaries (women and girls).

Interventions: the categories of activities and interventions that Fund recipients delivered once they received the funding, in order to support women and girls.

Project-defined outcomes and impacts: the specific outcomes and impacts that individual projects are seeking to achieve, as defined by Fund recipients themselves. At a Fund level, no specific outcomes were mandated to Fund recipients.

Fund impact: the overarching impacts that the Fund is aiming to achieve.

The LM focuses on Fund-level decisions, activities and approaches rather than on individual funded projects. That said, key to the Fund’s design was the decision not to mandate specific outcomes to bidders, instead letting Fund recipients define outcomes for their respective projects. As a result, the LM presents project-level outcomes identified through the review of bids.

Problem statement

The first section of the LM presents the key contexts framing and underpinning the TTF. These factors are presented as the first element of the LM because they inform the shape and design of the TTF and are integral context for the elements that follow.

Key to the Fund’s origin and purpose was the intention to keep the TTF’s focus areas and remit broad, allowing organisations in the not-for-profit and community sectors to identify the key issues and challenges to be addressed.

Core themes were identified for each year of Fund delivery, with the intention of ensuring a diversity of organisations and issues being funded, as well as addressing current ministerial priorities. The themes addressed by the TTF over the course of its delivery were:

  • Violence against women and girls (VAWG)

  • Mental health and wellbeing

  • Young women’s mental health

  • Rough sleeping and homelessness

  • Music

  • General theme

Government also identified a priority of funding initiatives and interventions that supported disadvantaged women and girls.

The eligibility criteria specified that bid applications must be from registered charities or not-for-profit organisations, with the proposed activity focused on women and girls (notably, organisations did not have to be specialists in the women’s and girls’ sector, but were eligible for funding as long as their proposed activity benefitted women and girls). Organisations were required to apply for funding of at least £350,000, with the bid not exceeding 50% of the organisation’s income. This minimum varied between some years of delivery: initially there was no minimum, then a £1 million minimum, before it was changed to £350,000 in 2021.
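As a quick illustration of the 2021 financial arithmetic described above, a minimal sketch follows; the function and parameter names are hypothetical and not part of the Fund’s published guidance.

```python
# Sketch of the 2021 financial eligibility arithmetic: bids had to
# request at least £350,000 and could not exceed 50% of the
# organisation's income. Names are illustrative assumptions.
MIN_BID_GBP = 350_000


def meets_2021_financial_criteria(bid_amount_gbp: float,
                                  organisation_income_gbp: float) -> bool:
    """Return True if a bid satisfies both 2021 financial thresholds."""
    return (bid_amount_gbp >= MIN_BID_GBP
            and bid_amount_gbp <= 0.5 * organisation_income_gbp)


# Example: a £400,000 bid passes with £1m income (bid is 40% of income)
# but fails with £700,000 income (bid would be 57% of income).
assert meets_2021_financial_criteria(400_000, 1_000_000)
assert not meets_2021_financial_criteria(400_000, 700_000)
```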

Delivery models

The delivery model section of the LM presents the models by which the funding has been rolled out and delivered to organisations in order to reach and benefit end beneficiaries (women and girls). This is the logical next step in the chain, demonstrating the mechanisms by which the Fund reaches Fund recipients and end beneficiaries.

Projects followed one of four delivery models, which were defined during the LM development process based on who used the funding and who received the intervention.

Four models were identified as follows:

Model 1: an intervention that is accessed directly by the end beneficiary. Here one organisation, or a partnership of organisations, puts the funding towards providing frontline and direct support to women and girls.

Model 2: the delivery of intermediary capacity-building activities that subsequently support end beneficiaries. Here one organisation, or a partnership of organisations, puts the funding towards capacity-building activities such as training for professionals, or wider capacity-building activities such as shared learning spaces.

Model 3: funding is given to an intermediary grant maker which delivers an intervention that is accessed directly by the end beneficiary. Here an intermediary organisation receives the TTF funding and then runs its own tender process to provide funding to a wider set of organisations. These onward grant recipients then put the funding towards providing frontline support to women and girls.

Model 4: funding is given to an intermediary grant maker which delivers intermediary capacity-building activities that subsequently support end beneficiaries. Here an intermediary organisation receives the TTF funding and then runs its own tender process to provide funding to a wider set of organisations. These onward grant recipients then put the funding towards capacity-building activities.
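Read together, the four models reduce to two defining questions: is the funding passed through an intermediary grant maker, and is the funded activity frontline support or capacity building? A minimal sketch of this mapping is shown below; it is an illustrative reading of the models, not part of the LM itself.

```python
# Illustrative sketch: the four delivery models as a 2x2 grid of
# funding route (direct vs via an intermediary grant maker) and
# activity type (frontline support vs capacity building).
from enum import Enum


class DeliveryModel(Enum):
    MODEL_1 = 1  # direct recipient, frontline support
    MODEL_2 = 2  # direct recipient, capacity building
    MODEL_3 = 3  # intermediary grant maker, frontline support
    MODEL_4 = 4  # intermediary grant maker, capacity building


def classify(via_intermediary_grant_maker: bool,
             capacity_building: bool) -> DeliveryModel:
    """Map the two defining characteristics to a delivery model."""
    if via_intermediary_grant_maker:
        return DeliveryModel.MODEL_4 if capacity_building else DeliveryModel.MODEL_3
    return DeliveryModel.MODEL_2 if capacity_building else DeliveryModel.MODEL_1


# Example: an onward grant maker funding frontline services is Model 3.
assert classify(True, False) is DeliveryModel.MODEL_3
```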

Interventions

The third section presents the categorised types of interventions that Fund recipients deliver. This forms the next step in the logic; once the funding has been distributed via the above models, Fund recipients have delivered interventions that fall within and across the following categories.

7 main categories describe the ‘type’ of intervention funded by the TTF, as outlined below.

The first 3 of the 7 intervention types are identified as activities that can form the ‘intermediary’ stages of the delivery models outlined above; that is, activities not delivered directly to women and girls but which ultimately provide them with benefits. These are:

  • Onward grant giving: funding received is redistributed as smaller grants to wider organisations, often with the same or more narrowly focused aims than the TTF, and with a focus on reaching small and medium-sized community organisations without direct TTF access. Onward grant giving will always lead on to a wider set of interventions falling into one of the six other categories outlined below.
  • Training: training is provided to professionals, organisations and businesses, or people with lived experience, to better equip them to support women and girls. Training provided to women and girls with lived experience is multi-purpose and may also be seen as an intervention providing frontline support and education opportunities to end beneficiaries.
  • Capacity building: consultancy and support are provided to an organisation to improve practices and so ensure better support for women and girls. These interventions may include activities such as shared learning spaces and networks, support for implementing best practice, reviewing or rolling out organisational policy, and provision of digital skills.

The remaining 4 intervention types outlined below focus on the delivery of frontline support to end beneficiaries (women and girls). These are:

  • Signposting and awareness-raising: signposting is provided to women and girls to raise awareness of, and improve access to, support services. This intervention seeks to strengthen support pathways and increase access to support, as well as to enable joined-up working.
  • Frontline services: support and services are provided directly to women and girls. Examples of frontline services include the provision of 1:1 or group support, peer/community support, coaching/mentoring, and housing and accommodation.
  • Education: provision of education and/or training delivered to support women and girls’ knowledge and skills development, for example language skills.
  • Resources, equipment and tech: the provision of physical or digital equipment, tools or resources to support women and girls. Examples could include providing bikes, sanitary products, online services, educational resources, and digital content (including apps).

These first 3 intervention types may support the delivery of the remaining 4 intervention types, although the latter can also be delivered independently.

Fund recipients may deliver just one or multiple intervention types within their funded project.

Project-level outcomes and impacts

The fourth section of the LM presents the project-level outcomes and impacts.

A key part of the TTF design is its broad remit and the fact that no outcomes and impacts were prescribed at fund-level. Instead of fund-defined impacts and outcomes, bid applicants were asked to identify and define relevant project-level outcomes and impacts. As a result of this, and the varied nature of the types of interventions being delivered, the project-level outcomes and impacts are wide ranging. However, a review of project-level outcomes and impacts highlighted the most common types of outcomes across the projects; though not exhaustive, these showcase the main intentions of most projects.

Outcomes and impacts have been identified across 3 levels: sector, organisational and individual. They also cut across three time horizons: short-term outcomes, long-term outcomes and project impacts.

Short-term outcomes are defined here as measurable outcomes directly attributed to intervention delivery that are felt or experienced immediately after the intervention is accessed or delivered.

Long-term outcomes are measurable outcomes directly attributed to intervention delivery that are felt or experienced in the longer term after the intervention is accessed or delivered (for example, outcomes you would expect to see in the months after the intervention has ended). Project impacts, in this context, are higher-level goals that the project is expected to contribute towards, but that cannot be directly attributed to intervention delivery.

Sector-level outcomes and impacts

The first component is the sector level, whereby outcomes and impacts are expected in the women’s sector and the wider not-for-profit sector overall. The main short-term outcome at the sector level is improved sector partnership and organisational linking. In the longer term, this leads to an outcome of improved sector knowledge and increased assets, which projects expect to have an impact on the sector overall by supporting a shift in sector responses towards preventative work.

Organisational-level outcomes and impacts

The next component is the organisational level, whereby TTF projects create outcomes and impacts for the funded organisations and their delivery partners. Projects identified two common short-term outcomes at the organisational level: improved ability/skill to support women in need, and improved access to resources, guidance and shared learning (also an individual-level outcome, see below). These short-term outcomes do not lead to any organisational-level outcomes or impacts, but support longer-term sector-level outcomes and impact by improving sector knowledge and assets. Note that some outcomes and impacts sit across multiple levels, represented in the diagram with a dual-colour box.

Individual-level outcomes and impacts

The final component is the individual level, covering outcomes and impacts directly on individuals and their lives, with a specific focus on the end beneficiaries: women and girls. A common short-term outcome for individuals was improved access to resources, guidance and shared learning (also an organisational-level outcome, as above), which leads to a longer-term outcome of improved physical and mental health management for women and girls.

A second common short-term outcome identified was improved access to quality support for women and girls in need, which projects expect to lead to increased help-seeking and improved confidence in the longer-term. These outcomes are expected, in some cases, to be a result of organisational and/or sector-level outcomes, for example, improved skill in organisations would support improved access to quality support for women and girls.

Ultimately, projects focused on the individual-level expect these outcomes to create 2 potential individual-level impacts: a reduced risk or experience of violence and harm, and/or women and girls being empowered to act as their own agents.

Fund impact

The final section of the LM presents the Fund-level impacts. These are the higher-level impacts that stakeholders involved in designing and rolling out the TTF expected to see achieved through the Fund.

Ultimately, these project-level outcomes and impacts are expected to help the TTF achieve broad, overarching impact. 3 core impacts were identified by stakeholders and projects, focusing on impact for end beneficiaries (women and girls) and at the sector level. These are:

  • Beneficiary impacts: the TTF intends to have a positive impact on the lives of women and girls, and especially on those facing disadvantage, vulnerability and harm.

  • Sector impacts: the TTF intends to have a positive impact on the types of initiatives available to women and girls, through a strengthened offer of locally relevant initiatives provided by specialist and community organisations.

    Furthermore, it is intended that the TTF creates a lasting impact on the women’s and not-for-profit sectors through the provision of sustainable and/or ‘snowball’ interventions (new interventions which emerge from or are inspired by the original funded activities), increased sector resources and knowledge, and improved sector networks.

Evaluation overview

This section presents the overarching evaluation requirement and recommendation. It presents the key research questions and outcomes the evaluation is expected to deliver, highlights the key design considerations (based on the analysis in the previous sections), then presents the recommended evaluation approach. The evaluation plan is articulated in more detail in the subsequent chapters.

Key implications for design

There are 5 key learnings from the scoping phase that have significant implications for the TTF evaluation design.

1. There is no single way to assess the impact of the TTF, because it has no common outcome measures across projects. The variety of interventions and delivery models means it would be impossible to identify overarching outcome measures that would be relevant to all projects. Instead, a design is needed that can measure the impact of different projects and their intended outcomes.

2. Due to this diversity, it will be important to look across all funded projects to assess impact.  A sampling approach is not desirable because the range of interventions means that a sample would not capture the full breadth of projects, especially those that are niche. A sampling approach would therefore not accurately reflect the true impact of the TTF.

3. All projects have some form of intended monitoring and/or evaluation built into their project plans, each of which is specifically designed to fit the project’s needs and to assess its chosen outcomes. Conducting fund-level evaluation activities on top of this would create additional burden for projects and potentially duplicate planned work.

Where possible, it would be beneficial to rely on local monitoring data and project evaluations. As well as minimising burden, this would maximise the evaluation budget to enhance the reach and depth of the evaluation. Key to this approach will be maximising rigour and robustness in local monitoring and evaluation work, to ensure it is fit for purpose and can be used for the Fund evaluation.

4. The original scoping phase focused on assessing impact evaluation feasibility for the final year of the TTF (alongside the design of a process evaluation covering all Fund years). However, the diversity of projects across all years, and the stated intention of using local evaluation data, means there would be value in including projects from past years in the evaluation where robust evaluation data exists.

5. Though not originally included in the evaluation scope, Kantar Public suggest that sector-level impact should be in scope. Whilst project-level impact is a key focus and impact on women and girls is the ultimate aim of the TTF, evidence from the scoping phase highlights the importance of identifying the impact of the TTF on the sector itself.

Research questions and outcomes

The evaluation should respond to the following questions, as initially proposed by DCMS:

1. Which organisations received TTF funding; how was it used; and who did it reach? Who did it not reach, and why?

2. How was TTF delivered and experienced (by stakeholders, grant holders and end-users)?

3. What effect did changing local and national government contexts have on delivery?

4. Overall, what worked well and less well?

5. What impact (actual/perceived depending on type of evaluation) did TTF funding have on grantees and end users/beneficiaries?

6. What, if any, were the unintended consequences of the programme?

Alongside the above, Kantar Public recommend adding the following as a key evaluation question: What, if any, impact has TTF funding had on the women’s and girls’ sector?

The evaluation approach below is designed to achieve these outcomes and answer the key questions.

Kantar Public recommends a multi-strand evaluation approach to cover project and fund-level impact, sector-level impact and process evaluation. Across these 3 strands, a range of qualitative, quantitative and secondary data analysis techniques would be required.

Our recommended evaluation plan is outlined below.

Impact evaluation

  • Project evaluation support including consulting for local evaluation and M&E guidance notes

  • Project evidence synthesis including synthesis of monitoring data and local evaluation reports

  • Analysis of publicly available data at a sector level

  • A Qualitative Impact Protocol (QuIP), used to identify sources of impact in complex environments, which could be applied at fund and/or sector level for different purposes

  • Project and sector survey including both successful and unsuccessful bidders and wider sector organisations

Process evaluation

  • Remaining bid review

  • Project case studies, to include 10 to 15 case studies covering both current and recent past projects

  • Depth interviews (15 to 25) with unsuccessful bidders and internal stakeholders

The recommended evaluation will offer the following:

Impact evaluation

At the project/fund level, Kantar Public recommend direct consulting with and providing guidance for the 14 final projects to ensure robust monitoring and evaluation data is available and can be utilised in a project evidence synthesis to demonstrate fund-level impact. This may be supplemented by a Qualitative Impact Protocol (QuIP) to further explore the impact of TTF funding on its beneficiaries (for example, an organisation and/or women and girls).

At the sector level, Kantar Public recommend that secondary analysis of publicly available data (for example, on sector finance and funding) be supplemented by survey data collected directly from sector organisations, and a QuIP to further explore the impact of the TTF on the sector and organisations.

Process evaluation

For the process evaluation, a survey of sector organisations (with priority for successful and unsuccessful bidders) exploring TTF awareness, application and delivery experience will be supplemented by the remaining bid review. Kantar Public also recommend 10 to 15 project case studies to showcase individual project experiences and outcomes, and 15 to 25 in-depth interviews with unsuccessful bidders and wider stakeholders (the scale of these elements will be budget-dependent).

Our recommended design aims to maximise budget and reduce burden by using data from some methods for multiple purposes (for example, survey data supporting both the process and impact evaluations). Further, our recommendation is to collect data for each evaluation strand from multiple perspectives and in different ways. This will allow us to deliver a more robust final evaluation assessment. It also mitigates the risk of relying on one set of data or one perspective in isolation, improving the overall robustness of the results.

Full detail and rationale for our recommendations are provided in the subsequent sections.

Process evaluation

This section presents the recommended approach for the process evaluation strand of the Tampon Tax Fund (TTF) evaluation. This includes an outline of the proposed methodology and the key considerations shaping the research activities and indicated approach.

Bid review

As a key first step in the evaluation, Kantar Public recommend that a review of the bids not already reviewed in this scoping phase be completed, ensuring that all funded projects are reviewed and recorded for analysis. The remaining bids should be analysed in the same manner as in the scoping phase, using the analysis framework already created.

Researchers should be mindful of where new categories need to be created to capture any differences in the remaining bids, updating the framework (and previous records) accordingly throughout the process. By the end of this process, it would be possible to summarise all TTF projects in a consistent way, enabling analysis and summative reporting. This information will also be used to select case studies and may also be used in impact analysis.

Project case studies

Case studies of funded projects will be a central feature of the process evaluation, as a means to explore in greater detail projects’ rationale for their bid and bid design, experience of delivery and lessons learnt through this process.

Kantar Public recommend 10 to 15 case studies for the evaluation, dependent on DCMS preferences. This number will allow for good coverage across a range of potential selection criteria (noted below) within the available budget. It will allow for in-case analysis to provide robust examples of the Fund’s interventions and outcomes, as well as cross-case analysis to identify overall insights and learnings to create the summative findings for the process evaluation. While more case studies could be included, they would be unlikely to provide much additional value in terms of lessons and findings.

Case study selection

The following presents the proposed approach to case study selection. The process will begin by agreeing the selection criteria. This will be a point for discussion with DCMS, but Kantar Public suggest the following as a starting point:

  • Year: aiming for a mix of current and past projects, recognising that past projects will likely need to be from the recent past, as the relevant people will still need to be in post to take part. Kantar Public suggest agreeing a cut-off year, with no cases included from before that time. Current projects would focus more on the bid and on in-progress or recent delivery experience. More established projects would focus more on the delivery experience and how lessons have supported subsequent work.
  • Funded once or multiple times: most organisations were only funded once, but Kantar Public suggest including one case study with an organisation funded multiple times, to explore the evolution of their approach and how learning has been applied.
  • Delivery model: Kantar Public suggest ensuring a mix of cases by delivery model, as this is crucial to the delivery experience.
  • Immediate beneficiaries: linked to the delivery model, cases should also reflect a mix of immediate beneficiaries – women and girls in need, those supporting women and girls in need, or organisations that support women and girls in need.
  • Intervention type(s): cases should represent a mix of intervention types, ensuring broad coverage across all 7 categories identified. Alongside this, the selection should include a mix of projects delivering one type of intervention and those undertaking multiple types, as these will have vastly different delivery experiences.
  • Theme: Kantar Public suggest ensuring a mix of projects across different themes and, within this, different project topics. This should include examples of common topics (such as VAWG) as well as some cases with specialist focus areas, such as period poverty, pregnancy and others.

Once these criteria are agreed, Kantar Public recommend identifying a short-list of case study projects and highlighting those which are particularly good options for answering the research questions, as well as offering relevant learning for DCMS (for example, where projects are doing something particularly innovative or different, or working with a unique audience, etcetera).
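To illustrate how a short-list could be screened for coverage across the agreed criteria, a minimal sketch follows. The criteria keys, candidate data and greedy heuristic are all hypothetical; in practice selection would be a judgement-based discussion with DCMS, not an automated process.

```python
# Illustrative sketch: greedily pick case study candidates that add
# the most not-yet-covered (criterion, value) pairs to the mix.
CRITERIA = ("year", "delivery_model", "intervention_type", "theme")


def shortlist(candidates: list[dict], n: int = 15) -> list[dict]:
    """Pick up to n projects that together maximise criteria coverage."""
    covered: set[tuple[str, str]] = set()
    chosen: list[dict] = []
    pool = list(candidates)
    while pool and len(chosen) < n:
        # Score each remaining candidate by how many new
        # (criterion, value) pairs it would add.
        best = max(pool,
                   key=lambda p: len({(c, p[c]) for c in CRITERIA} - covered))
        covered |= {(c, best[c]) for c in CRITERIA}
        chosen.append(best)
        pool.remove(best)
    return chosen
```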

We will then agree on a preliminary list of case studies to engage, with a small number of ‘reserve’ cases that will be contacted should any of the preliminary options not be viable (for example, people not in post, or not willing to take part).

Case study composition

The approach to case studies will be very flexible. The exact composition of each case study will depend on the delivery model and interventions being undertaken. Kantar Public expect each case study to need around 5 interviews or mini focus groups in total, one of which may be a small focus group with beneficiaries where applicable.

The recommended approach is for all case studies to begin with an interview with the project lead, namely the person with overall responsibility for the funding at the awarded organisation. This interview will be focused on the rationale for the bid and interventions, an overview of the activity and progress to date, views on the delivery process and any lessons learnt, and any perceived outcomes or impacts. Discussions with leads will also be used to map out relevant stakeholders for subsequent interviews.

Subsequent interviews will depend entirely on the interventions taking place and who the most appropriate people are for the process evaluation research. For example:

In Scenario A, funding goes directly to women and girls in need, for example, by providing equipment, training or support (Delivery Model 1). In this case, a project lead interview would likely be followed by interviews with those managing the delivery of interventions to women and girls, followed by interviews or mini-groups (as appropriate) with the women and girls receiving the equipment, training or other support.

Scenario B covers projects where funding is used for capacity building within the organisations receiving funding (Delivery Model 2), such as training those who will provide support to women and girls in need. In this case, the lead interview will be followed by interviews with those who managed and/or delivered capacity building (including any training partners), and interviews or mini-groups with those who received this capacity-building support. (Note that those in Scenario B may also be, for example, women and girls training to provide peer support to others in need. This means they could be grouped as either direct or end beneficiaries. For the purpose of the evaluation, they would be categorised as beneficiaries of training, not as end beneficiaries.)

In Scenario C, funding is being given to other organisations (onward grant giving projects) to deliver interventions to women and girls in need or those who support them (Delivery Models 3 and 4). In this case, lead interviews will be followed by interviews with those involved in or responsible for the onward grant process – for example bid design, evaluation, selection and management. These will be followed by interviews or mini-groups with organisations which received these onward grants.

It is expected that women and girls, as the ultimate (or end) beneficiaries, would only be included in case studies where they were involved in the delivery process and can therefore comment on the delivery experience as recipients. Where they were further removed – as in Scenarios B and C – they are unlikely to be included, as they were secondary beneficiaries of the funded interventions and not directly involved in delivery.

The discussion points for interviews would vary depending upon the participant type in question. As mentioned above, the lead interview would establish a high-level understanding of the project and who is involved, providing an overview of bidding and delivery experiences, outcomes and lessons. Interviews with those leading delivery of specific interventions (or their delivery partners) would focus on the delivery experience, what has worked well/less well, and lessons learnt, including any activity delivered to support organisational capacity and sustainability.

Interviews with those in receipt of funded interventions would focus on their experience of capacity building, training, equipment, etcetera (as relevant) and their views on what went well/less well, with a light-touch focus on outcomes and impact.

For those in receipt of onward grants, discussions would include questions on their experience of delivering their funded interventions. In some cases, individuals may have multiple roles (for example, bid and delivery lead), in which case they may take part in multiple interviews (though this will be rare).

Case study outcomes

Kantar Public recommend that case studies be reported in two ways. First, each case study should have a standalone write-up summarising process learning, which will serve as an example in the report of different experiences and their outcomes. Second, case study insights should be analysed together to create summative evidence for reporting purposes; for example, the combined experience of bidding and delivery, what worked well/less well in delivery, and the lessons from this, bringing all case studies and learning together. We suggest that case study reporting could also include the project-level monitoring and evaluation data gathered as part of the impact evaluation.

Unsuccessful bidder and stakeholder interviews

Alongside project case studies, it is vital to gather data from unsuccessful bidders and from the internal government stakeholders involved in the process. These alternative perspectives will help explain how the process could be improved in future.

Depending on budget preference, Kantar Public suggest 15 to 25 interviews with these groups. This number was determined based on how many interviews are needed to robustly answer the evaluation questions, what is possible within budget (accounting for other elements), and what is likely to be viable across all Fund years (taking into account people no longer being in post). [footnote 2]

Unsuccessful bidders

Kantar Public suggest including 10 to 15 interviews with unsuccessful bidders from the current and previous TTF years. We will agree with DCMS the organisations to approach, looking – as with case studies – for a mix of intended delivery models and interventions, those that applied multiple times or only once, and organisational characteristics. Kantar Public recommend that DCMS facilitate this contact, sharing the details of those who consent to take part.

Interviews will mainly focus on bidders’ experience of the bidding process – namely why they applied, their intended interventions and rationale, their experience of bidding, what worked well/less well, and their thoughts on specific aspects of the process (for example, the application, resource requirements, and communication and support from DCMS, etcetera). Interviews will also touch on the consequences of not receiving funding, which is useful for the process evaluation but will also feed into the impact evaluation findings on how the TTF has affected sector funding (additionality).

The interviews will offer an alternative view on the bidding process to that of successful bidders, helping to build a single, more comprehensive view of the process and the lessons to be learnt.

Stakeholders

The internal government stakeholders involved in the TTF bidding and delivery process offer valuable insight. These could include those who developed or managed the bidding process, those involved in assessing bids and making recommendations, and those involved in the delivery of the TTF (for example, dispersal of funds, monitoring progress, or engaging with projects). Kantar Public suggest 5 to 10 interviews (or mini-groups of 2 to 3 relevant people) with stakeholders, aiming for a mix across the types of stakeholders. We would agree this with DCMS and ask DCMS to make initial introductions to support recruitment.

Interview content and length will depend on the role each person played, but interviews will focus on their experience of their involvement, the process itself, and lessons on what worked well/less well, including – where relevant – any activity delivered to support organisational capacity and sustainability. Interviews with those only involved in bid assessment, for example, could take 30 minutes, whilst those involved in overseeing bidding, selection and delivery will likely need 60 minutes.

The outcome of these interviews will be a clearer picture of the internal process of running TTF and any lessons for the future to support funding design, bidding and delivery.

Qualitative research safeguarding and ethics protocols

It is important to recognise that many of those working for the organisations funded by the TTF have similar lived experience to the women and girls they support, such as domestic abuse, sexual harassment or sexual assault. While these interviews will not ask directly about these experiences, personal experience and/or stories of those supported may emerge or be recalled during discussions. It is therefore vital that both interviewers and participants are properly supported to ensure there is no mental or emotional harm as a result of the research.

Recommendations for managing this are outlined below.

Informed consent

First and foremost is ensuring informed consent amongst all who take part. Those invited to take part in the research will have the research, its objectives and the focus of the interviews clearly explained before they agree to take part. This will include clarity on what will be covered in interviews, and that neither their own personal experiences nor those of the people they support will be directly asked about. The recruitment invitation will explain this clearly and include a list of frequently asked questions with more detailed information and resources.

Voluntary participation

It will be made clear that all aspects of the research are voluntary and within participants’ control. This means they are not obligated to take part, can withdraw at any time, and are free to decline to answer any question if they are not able or willing to do so, no questions asked. It is important to ensure respondents do not feel coerced and are in control of their involvement at all times.

Trauma-informed interviewing

All researchers conducting interviews should be experienced interviewers and should have received appropriate briefings, training and guidance for conducting these interviews. Those who have completed formal trauma-informed training should help train and monitor other team members to ensure a consistent approach. A crucial element of case study interviews will be interviewer continuity (for example, one person conducting all interviews for a given case study), which minimises the need to ask contextual questions multiple times.

Participant resources

All those taking part in interviews should receive the list of frequently asked questions before the interview, which will include a set of resources for support. At the end of the interview, participants should be referred back to this document should they need it and be asked if they want it to be sent to them again.

Interviewer support

Interviewers should also receive post-interview support if they are affected by interview discussions. This support should include:

  • all interviewers having an assigned ‘buddy’ (another interviewer in the team) to provide guidance, support and a listening ear after interviews if needed

  • the team meeting regularly to discuss interview progress and any difficulties arising

  • team members having a regular check-in with the project director throughout fieldwork for support as needed, with the director also available at any point after particularly difficult interviews

A set of protocols and guidance specific to the TTF evaluation will be put in place to ensure this level of safeguarding and support is built into the process, to minimise the risk of harm for everyone involved.

Process and impact: online surveys

To supplement the qualitative data (gathered through interviews and case studies), Kantar Public suggest also including an online survey to capture a wider range of views on the TTF. This evaluation element will be central to the process evaluation but will also contain questions to support the impact evaluation.

Kantar Public propose to conduct an online survey with 3 groups to provide evidence for both the process and impact evaluations: successful bidders; unsuccessful bidders; and other organisations in the sector that did not bid for TTF funding.

We will have separate survey links for successful, unsuccessful and other organisations, so we show respondents the content that is relevant to them. We can then also use this to test our assumptions and understanding later in the analysis.
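To illustrate the routing idea, a minimal sketch is shown below; the module names are hypothetical and do not reflect the final questionnaire design.

```python
# Illustrative sketch of group-based survey routing: each respondent
# group has its own link and only sees the modules relevant to it.
SURVEY_MODULES = {
    "successful_bidder": ["awareness", "bidding", "delivery", "perceived_impact"],
    "unsuccessful_bidder": ["awareness", "bidding", "non_funding_consequences"],
    "non_bidder": ["awareness", "reasons_not_bidding", "sector_perceptions"],
}


def modules_for(group: str) -> list[str]:
    """Return the questionnaire modules shown to a respondent group."""
    return SURVEY_MODULES[group]
```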

Kantar Public will agree how best to send the survey to respondents once the mainstage evaluation is commissioned, based on further discussion and the outcome of the initial exercise to agree which organisations are in/out of scope. If needed, we can divert some budget to complete telephone interviews for those who do not complete a survey but where we feel their perspective is vital to the evaluation.

Survey content

The survey will be designed to provide breadth of coverage and data on the TTF process, which will complement the depth of the qualitative elements described above, to build a more comprehensive picture of the Fund. For the intended groups, Kantar Public recommend a survey of around 10 minutes to keep the burden to a minimum and maximise response.

For the process evaluation elements of the survey, a key focus will be to understand awareness of the TTF, decisions to apply, experiences of the bidding process, impact of unsuccessful applications, experience of delivery if successful and lessons learnt.

The survey will be used to understand all aspects of the TTF experience across all audiences, with routing based on responses so that each organisation only answers content relevant to them.

Surveying TTF bidders

For successful bidders Kantar Public can use the survey to collect information about organisations’ experiences of bidding for funding, the administration of the Fund, setting up their project, and delivering their interventions. We can also collect information about their perceptions of the impact their projects had on women and girls, and the impact of the Fund on their organisation and the sector as a whole. Conversely, unsuccessful bidders can provide valuable evidence about the administration of the bidding process, as well as their perceptions of the effects of the Fund on the sector.

Surveying other organisations

Alongside this, other organisations in the women and girls’ sector will be surveyed to understand wider awareness and perceptions of the Fund across the sector. Kantar Public can also gather information about the reasons why organisations did not bid for funding, and whether they would have bid under different circumstances. This evidence could be useful for informing the design of any future funding mechanisms.

Survey logistics

The questionnaire will be developed by Kantar Public in partnership with DCMS. Once agreed, it will be scripted and undergo internal testing to check its functionality before launching to the sector. We have not included any cognitive testing of the survey in our approach, as it was not felt to be necessary for these groups or content. However, the online survey should be optimised for all electronic devices and browsers to maximise response options.

Kantar Public suggest that the survey is run predominantly or exclusively online, for the convenience of those being surveyed. Depending on the mode used to invite people to the survey (which will influence engagement and response rates, as above), different approaches can be used to boost response rates.

At this stage, Kantar Public do not recommend using incentives. Those completing the survey will be doing so on behalf of an organisation and in a professional capacity. In cases like this, using incentives for surveys is less common and, in our experience, many people are not able to accept incentives under these circumstances.

Analysis and reporting

Once the survey is completed, the data will be cleaned and tabulated for analysis. If we achieve a minimum number of responses, Kantar Public recommend weighting the data to account for non-response bias (the fact that some organisations are more likely to take part in the survey than others, which could affect the overall result).

The form this weighting will take will depend on the final design of the survey sample which, in turn, depends on how the wider sector is defined.

Provisionally, we suggest incorporating the following characteristics in the weighting: experience with the Fund (whether the organisation is a successful bidder, an unsuccessful bidder, or had not bid), year the organisation was first recorded by the Charity Commission, income, and number of employees. This will ensure that the sample is broadly representative of the population of interest, according to these characteristics.
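To make the weighting step concrete, the sketch below implements simple raking (iterative proportional fitting) over characteristics of the kind listed above. It is a minimal illustration under stated assumptions, not the proposed implementation: the column names, categories and population margins are all hypothetical, and real margins would come from the exercise to define the sector.

```python
import pandas as pd

# Minimal raking (iterative proportional fitting) sketch: adjust weights so
# the weighted sample matches known population margins on key characteristics.
# All column names, categories and margin totals below are hypothetical.
def rake(df, margins, weight_col="weight", max_iter=50, tol=1e-6):
    df = df.copy()
    df[weight_col] = 1.0
    for _ in range(max_iter):
        max_shift = 0.0
        for var, targets in margins.items():
            weighted = df.groupby(var)[weight_col].sum()
            for category, target in targets.items():
                current = weighted.get(category, 0.0)
                if current > 0:
                    factor = target / current
                    df.loc[df[var] == category, weight_col] *= factor
                    max_shift = max(max_shift, abs(factor - 1.0))
        if max_shift < tol:  # stop once weights have converged
            break
    return df

# Hypothetical population margins for two weighting characteristics
margins = {
    "bidder_status": {"successful": 137, "unsuccessful": 400, "non_bidder": 2000},
    "income_band": {"small": 1500, "medium": 800, "large": 237},
}

# Tiny illustrative respondent file
respondents = pd.DataFrame({
    "bidder_status": ["successful", "unsuccessful", "non_bidder", "non_bidder"],
    "income_band": ["small", "large", "medium", "small"],
})
weighted = rake(respondents, margins)
```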

Survey analysis options will depend on the response rate. Larger sample sizes will enable more robust quantitative analysis: this is likely to be basic descriptive statistics from the data (broken down by subgroups as relevant). In the case of fewer responses, some of the analysis is likely to be essentially qualitative in nature. Therefore, it will not be numerically representative, nor will it necessarily be comprehensive qualitative research, as it may not represent the full range and diversity of experiences.

However, process evaluation analysis will be a triangulation exercise; the survey data will be combined with insights gathered via other elements to create a single assessment of the TTF process.

Project-level impact evaluation

This section presents our proposed approach to evaluate the impact of the Tampon Tax Fund (TTF) on grantees and beneficiaries, and explore the unintended consequences of the programme. It is a proposal for project-level evaluation that focuses on:

(i) supporting individual projects’ evaluations, and

(ii) synthesising the evidence generated by individual projects.

Overview

The project-level impact evaluation will assess the impacts of individual projects on the lives of the women and girls supported. As discussed, the diversity of the funded projects means it is not feasible to combine them within a single impact evaluation framework. Moreover, the time and budget available do not allow individually tailored impact evaluations to be conducted for each project. 14 of the 33 projects reviewed have engaged an independent evaluator for their own project (8 of the 14 projects in the current cohort); the overall TTF evaluation should therefore coordinate with, and support, these independent evaluations, and avoid duplicating work or placing additional demands on organisations or individuals.

One option would be to conduct impact evaluations only on 2 or 3 selected projects. However, this would not account for the range and diversity of the Fund. Instead, Kantar Public propose that each project in the current cohort should be responsible for conducting its own evaluation of its activities as per plans already agreed locally.

Our role will be (i) to support projects so that the evidence generated is as insightful as possible, and (ii) to synthesise the evidence generated across these projects to assess the overall impact of the Fund. The final TTF evaluation report can then include summaries of key findings from each project, as well as a narrative synthesis, drawing out common findings or experiences across groups of projects.

Viability of evidence from projects

Part of our role will be to enable grant recipients to gather the most robust evidence about the impacts of their project as is feasible within the limitations of the evaluation. As well as the time and budget available for the evaluation, another limitation will be the capacity and experience of the grant recipients themselves to do this kind of research. It is likely that the quality of evidence will vary substantially between projects.

Based on our initial assessment, Kantar Public have grouped the 14 projects in the current programme as follows:

  • 2 projects where experimental or quasi-experimental impact evaluation methods may be viable (levels 4 and 5 of the Maryland Scientific Methods Scale)[footnote 2]. These approaches are generally thought of as providing the strongest evidence about the impacts of interventions.

  • 6 projects where it may be viable to collect some form of pre-intervention and/or comparison group data for key outcomes (levels 1 to 3 of the Maryland Scientific Methods Scale, depending on the details). At the start of the evaluation, our priority for these projects will be to try to ensure there is a suitable comparison group in their evaluation plans wherever possible, and to ensure data collection plans are in place.

  • 2 projects which are offering more intensive support to a relatively small number of women and girls. Given the scale of these projects, Kantar Public do not think quantitative impact evaluation will be feasible or appropriate. Instead, our support for these projects is likely to focus on qualitative methods and the quality of any project monitoring information.

  • 4 projects which are primarily providing onward grants to other organisations. Quantitative impact evaluation is not viable as the impacts are going to be spread out across the activities of a larger number of smaller organisations which will not be identifiable at the start of the evaluation. Kantar Public recommend using the Qualitative Impact Protocol (QuIP, see below) to investigate the impacts of these projects.

Given this range, Kantar Public propose to support all 14 projects with their monitoring and evaluation efforts, to maximise the data that is collected and that can then be used reliably and robustly in our evaluation.

Project-level consulting and evaluation support

Our aim will be to provide flexible, tailored support for each project to help maximise the value of the evidence they are able to generate. Kantar Public recommend allowing 2 to 5 days for this, depending on budget flexibility. This support will be a combination of general advice and guidance relevant across projects, and specific assistance tailored to each individual project.

Our support will include:

  • providing a general guidance document to all projects with information on good practice in developing evaluation plans, collecting data, analysing and reporting findings
  • templates to support projects in developing their own logic model
  • recommended survey questions or other metrics which may be relevant across different projects
  • open ‘surgery sessions’ at regular intervals (for example, monthly) where any project can join to ask questions, raise issues or discuss challenges
  • additional meetings and email contact as needed to support projects at specific points in their monitoring and evaluation process
  • scheduled check-in points with each project across delivery and post-delivery periods, to assess progress and feed back any interim learnings to support remaining work

For each project, Kantar Public will start by allocating a dedicated team member as the project’s primary point of contact. This person will review the monitoring and evaluation plans, and work with the project to refine the plan, add greater detail, and provisionally agree on the key points where our input will be most useful.

Examples of project-specific support could include:

  • coordinating a workshop to help the project develop or refine its own logic model
  • advice regarding possible comparison groups and types of analysis
  • helping develop a structure for questionnaires, topic guides or other research materials, offering some examples of ‘good practice’, reviewing and offering advice on research materials produced by projects
  • providing advice on data collection methods
  • attending analysis meetings
  • support with troubleshooting issues as they occur

Kantar Public think it is critical that projects are clear that they are responsible for their evaluation activities: we can offer support and guidance, but the responsibility for conducting the evaluation – including decisions about the design of the evaluation – sits with each project. There is not enough time or budget for the central evaluation team to take control of the project-level evaluations, and so the best results will be where projects take ownership of the evidence being generated.

Reviewing impact evidence from earlier rounds

As part of the process of bringing together the findings from the project-level evaluations of the current programme, Kantar Public will also review the evidence regarding impacts in earlier rounds. Alongside the end of grant reports, which provide excellent summative evidence of project achievements, many previous projects will have produced monitoring and evaluation reports or other evidence about the impacts of their interventions.

Kantar Public will review such reports so that any relevant findings can be incorporated into the synthesised analysis of project-level findings. We will assess the quality of this evidence to ensure that it is fairly represented in the synthesised analysis.

How this evidence is used in the final report will depend on the overarching findings and what emerges from the analysis of these reports. We expect this could either take the form of an evidence review, showing examples of impact (based on, for example, delivery models, intervention types, audiences or themes), or use these project-level examples to reinforce the overarching findings.

Sector-level impact evaluation

This section presents our proposed approach to evaluating the impact of the Tampon Tax Fund (TTF) on grantees, beneficiaries and the women and girls’ sector as a whole, and to exploring the unintended consequences of the programme. For this purpose, Kantar Public propose a sector-level evaluation involving financial and network analysis, to assess the role of the TTF in the wider funding context of the women’s and girls’ sector and its contribution to building connections between organisations.

Overview

As expressed in the TTF Logic Model, some of the core objectives of the Fund relate to impacts on the women’s and girls’ sector as a whole: improved partnerships and collaborations, improved knowledge sharing, building capacity within the sector, and ultimately supporting a greater focus on prevention rather than response. The impact evaluation should therefore address the evidence regarding these sector-level impacts.

The women’s and girls’ sector is not clearly defined and consists of a wide range of interconnected organisations and types of intervention. It is therefore not feasible to find a genuinely comparable group (sector) against which to use experimental or quasi-experimental analysis methods.

Instead, Kantar Public propose a theory-based approach, allowing us to bring together evidence from a range of different sources, including:

  • project-level monitoring and evaluations
  • all elements of the process evaluation
  • documentation about bids and the delivery of the Fund
  • public data about the sector

Here we outline 2 additional forms of analysis that we plan to include: analysis of the role of the TTF in the wider funding context of the women’s and girls’ sector, and analysis of the collaborative networks of organisations within the sector.

Financial analysis of the women and girls’ sector

The broad research question is how the funding context of the sector changed over the lifetime of the Fund. Kantar Public will look at the financial contribution of the Fund to organisations, both in total and as a proportion of all the funding they received.

By looking at their sources of income and how these have changed over time we can analyse the financial role of the Fund. The key data sources for this analysis will be the information recorded about TTF bids and public information about funding published by the Charity Commission.

Kantar Public will first look at the changes in income for organisations receiving TTF funding directly. This analysis will look at total income, as well as income disaggregated by source (with a particular focus on income from government grants and government contracts), and income from the TTF specifically.

We will then extend this analysis to the recipients of onward grants through the Fund. These will typically be smaller organisations, and one question to explore will be the extent to which these organisations are more financially dependent on support from sources such as the TTF. As part of this analysis, we can also compare funded organisations and recipients of onward grants against other organisations in the sector (which did not receive TTF funding) and against organisations in other sectors.
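As an illustration of the kind of calculation involved, the sketch below computes each organisation’s total income by year and the share contributed by the TTF. It assumes flat extracts of Charity Commission annual returns and TTF grant records with hypothetical column names and invented figures; the real data structures would be agreed during the evaluation.

```python
import pandas as pd

# Hypothetical extracts: Charity Commission income lines and TTF grant records
income = pd.DataFrame({
    "org_id": [1, 1, 1, 2, 2],
    "fin_year": [2021, 2021, 2022, 2021, 2022],
    "source": ["donations", "government grants", "donations", "contracts", "donations"],
    "amount": [50_000, 80_000, 60_000, 120_000, 90_000],
})
ttf = pd.DataFrame({
    "org_id": [1, 2],
    "fin_year": [2021, 2022],
    "grant_value": [80_000, 30_000],
})

# Total income per organisation per financial year
totals = income.groupby(["org_id", "fin_year"])["amount"].sum().rename("total_income")

# TTF grant value per organisation per year (zero where no grant received)
ttf_income = ttf.groupby(["org_id", "fin_year"])["grant_value"].sum().rename("ttf_income")

df = pd.concat([totals, ttf_income], axis=1).fillna({"ttf_income": 0.0})
df["ttf_share"] = df["ttf_income"] / df["total_income"]
print(df)
```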

One limitation of this analysis is that the financial data published by the Charity Commission depends on the dates organisations publish their accounts. Charities can submit their financial data up to 10 months after the end of their financial year, and financial year ends vary across organisations. Kantar Public expect almost full coverage of the 2021 to 2022 financial year to be available in January 2023.

While this data may still have some gaps, we expect it to be sufficient to provide valuable insights about the impacts of the Fund on the sector. However, including the 2022 to 2023 financial year would likely require a significant extension to the evaluation’s reporting timetable, into 2024. In terms of previous years covered, although the published data covers only the last five years reported by each charity, Kantar Public can request historic data from the Charity Commission.

A second limitation is that there is currently no granular classification that allows us to identify charities in the women and girls’ sector. As with the process of identifying organisations for a sector-wide survey, Kantar Public will rely on key terms recorded in organisations’ charitable objects to define which organisations are in scope.

A further limitation is that data published by the Charity Commission covers charities in England and Wales, but not in Scotland or Northern Ireland. Equivalent data is published by the Charity Commission for Northern Ireland and the Scottish Charity Regulator (OSCR), although the format and structure of the published data is not the same. Kantar Public expect to be able to include data for Scottish and Northern Irish charities in some of the analysis, but there may be parts which are limited to England and Wales.

Network analysis of collaborations and financial connections

The broad research question concerns the connections within the women and girls’ sector: the extent to which the TTF has been able to reach different parts of the sector, and the extent to which the sector has become more connected over the lifetime of the Fund.

The key data sources will be the documentation about funded projects and the information about collaborations collected through the surveys.

Analysis of financial flows

The analysis of financial flows will investigate the extent to which the Fund reached different parts of the sector. The analysis will account for both direct funding and onward grants to show how funding flowed through the sector. Kantar Public will explore the extent to which the funding flows provide support to organisations which may not normally have access to this kind of funding. This analysis will also help us to see if there are key points in the network which the Fund did not reach.

Additionally, we can explore how much funding stayed within the women and girls’ sector and how much reached organisations which are not explicitly focused on supporting women and girls (whether or not this was intended by the Fund at the time).
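A sketch of how these funding flows could be modelled is shown below, using a directed graph in which edges carry grant values. The organisations and amounts are invented for illustration; in practice the nodes and edges would come from the bid documentation and onward-grant records.

```python
import networkx as nx

# Hypothetical funding-flow graph: Fund -> grantees -> onward-grant recipients
G = nx.DiGraph()
G.add_edge("TTF", "Grantee A", value=250_000)
G.add_edge("TTF", "Grantee B", value=400_000)
G.add_edge("Grantee A", "Small org 1", value=40_000)  # onward grant
G.add_edge("Grantee A", "Small org 2", value=25_000)  # onward grant

# All organisations reached, directly or via onward grants
reached = nx.descendants(G, "TTF")

# Total funding flowing into each organisation
inflow = {
    node: sum(data["value"] for _, _, data in G.in_edges(node, data=True))
    for node in G.nodes
}
print(sorted(reached), inflow)
```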

Analysis of collaborations and partnerships

Kantar Public will also analyse networks of collaborations and other partnerships. We will collect information about collaborations through the online surveys; this will supplement the information from the TTF bid documentation. The aim will be to see the extent to which the TTF led to more support between organisations in the sector, which is one of the key objectives of the Fund as set out in the Logic Model. We can also look at the extent to which collaborations initiated through the Fund continued after the initial period of funding, which will help us to understand the wider benefits of the Fund beyond the directly funded projects.
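The sketch below illustrates one way connectivity could be compared over time, using simple whole-network measures on collaboration graphs built from survey and bid data. The organisations and edges are invented; the choice of measures would be refined during analysis.

```python
import networkx as nx

# Hypothetical collaboration networks before and during the Fund
before = nx.Graph([("Org A", "Org B")])
during = nx.Graph([("Org A", "Org B"), ("Org B", "Org C"), ("Org A", "Org D")])

for label, g in [("before", before), ("during", during)]:
    print(
        label,
        "density:", round(nx.density(g), 3),
        "components:", nx.number_connected_components(g),
    )
```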

It is important to recognise that an increase in the connectivity of the network cannot necessarily be attributed to the Fund. For example, the National Lottery Community Fund’s Women and Girls’ Initiative has had some similar aims over a similar timeframe. However, this analysis can still provide an additional source of evidence to feed into a theory-based evaluation of sector-level impacts. It will also provide an important perspective on how the Fund operated in practice and how it interacted with the sector as a whole.

Online survey data

Kantar Public may also include questions in the online survey to supplement the data used for financial and network analyses. The exact content of this will be decided based on further discussion of public data sources, their uses, and the gaps we may be able to fill through the survey. Any data collected through the survey would provide additional data points to confirm or complement the main analysis. Topics that may be useful for the survey to explore include:

  • the funding sources that organisations have accessed over time, to understand the trends of TTF and other ‘competing’ or complementary funding sources in the sector
  • information on sector partnerships and networking, how this has changed in recent years, and why

One key part of understanding the impacts of the TTF is the extent to which the Fund affected organisations’ capacity to deliver. Kantar Public can partially address this through the surveys by asking successful bidders to reflect on the extent to which they think they would have been able to deliver their planned activities. Similarly, Kantar Public can collect information from unsuccessful bidders about the extent to which they were able to conduct similar activities without the TTF. Broadly, we can then categorise projects as follows (see the sketch after this list):

  • those where organisations could deliver similar activities at a similar scale, even without the funding (for example, through alternative sources of funding)
  • those where organisations could deliver similar activities at a reduced scale without the funding
  • those where organisations could not deliver these activities at all without the funding
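A minimal sketch of how survey answers could be mapped onto these three categories, and summarised as shares, is shown below; the answer codes, labels and data are hypothetical.

```python
import pandas as pd

# Hypothetical survey answer codes for the counterfactual question
answers = pd.Series(
    ["similar_scale", "reduced_scale", "not_at_all", "reduced_scale", "not_at_all"]
)
labels = {
    "similar_scale": "similar activities at similar scale without TTF",
    "reduced_scale": "similar activities at reduced scale without TTF",
    "not_at_all": "could not deliver without TTF",
}

# Share of responding projects in each category
summary = answers.map(labels).value_counts(normalize=True)
print(summary)
```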

Kantar Public will also use the survey to enable recruitment for the Qualitative Impact Protocol (QuIP), described below.

Project and/or sector-level impact: QuIP

This section outlines the proposed use of the Qualitative Impact Protocol (QuIP) for impact evaluation. This includes an explanation of the QuIP methodology and approach, before exploring opportunities for application in the context of the Tampon Tax Fund impact evaluation.

What it is and how it works

The Qualitative Impact Protocol (QuIP) is a qualitative impact approach developed by academics at the University of Bath specifically to assess impact in complex environments (run through Bath Social and Development Research, BSDR). It was originally developed in the context of international development evaluation where it can be difficult to unpick impact or attribution in a context of many and varied funding sources, interventions and local socio-economic factors. The QuIP has been used successfully in more than 70 evaluations around the world for organisations like Tearfund, Habitat for Humanity and the UN World Food Programme. Published reports from these projects can be found on the Bath SDR website.

QuIP was designed to address the ‘attribution problem’ in this environment. It aims to collect credible information directly from intended beneficiaries about the significant drivers of change in selected domains of their life over a predefined period. This is particularly useful in complex contexts where a variety of factors that are hard to disentangle can influence the outcomes of an intervention. In such situations, quantitative impact measurement, though necessary and useful, needs an additional evidence base to assess impact within the wider context: exploring to what degree the project being evaluated was achieving impact given competing interventions (for example, from other organisations), funding sources and confounding factors (such as the socioeconomic or local context that would also influence impact). The approach was designed for, and works best when, paired with quantitative data (for example from surveys, public data or project MI data), to enable comparison and triangulation of results.

QuIP has a number of key features that make it a unique form of qualitative research specifically designed to address the attribution problem. These key features, outlined below, are also described on the Bath SDR website.

Purposive sampling:

QuIP uses a purposive case selection approach to identify ‘positive deviants’ and ‘negative deviants’ for interviews, namely those people who experience the strongest degree of positive or negative change, whilst limiting or excluding those with no change. This ensures QuIP interviews are focused on stories of change where they exist, covering all impact scenarios, and avoids spending valuable resource interviewing people who experienced no change.
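As an illustration of this case selection logic, the sketch below picks the strongest positive and negative changers from a set of survey responses; the change score and sample sizes are hypothetical stand-ins for whatever selection criteria are agreed.

```python
import pandas as pd

# Hypothetical respondents with a self-reported change score
responses = pd.DataFrame({
    "org_id": range(1, 11),
    "change_score": [3, -2, 0, 5, -4, 1, 0, 4, -3, 2],
})

n = 3  # interviews per tail
positive_deviants = responses.nlargest(n, "change_score")
negative_deviants = responses.nsmallest(n, "change_score")

# Near-zero changers are deliberately excluded from the interview shortlist
shortlist = pd.concat([positive_deviants, negative_deviants])
print(shortlist)
```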

Exploratory, subject-withheld interviewing:

Qualitative data collection takes place with little reference to the specific activity being evaluated, giving equal weight to all possible drivers of change (the intervention in question or others). Key to this is never telling respondents that the research is about a specific intervention (for example, the TTF in this case), but allowing them to list and explain all possible influencing factors of change, which may or may not include the intervention in question. In some cases, the interviewers themselves are unaware of the client and project in focus, to eliminate the risk of biased moderation.

In the UK, we are often limited in the degree to which we can withhold information about the client or subject because of requirements for transparency. This is adjusted on a case-by-case basis depending on what is allowable, often stating the client funding the research and the broad topic, but without specific reference to the fund in question.

Causal map:

QuIP then uses a form of qualitative thematic analysis to code and present data, producing causal maps that bring to life the causal pathways behind perceived impact. This is done in qualitative analysis software called Causal Map, specifically designed by BSDR for this purpose.
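The analysis itself is done in BSDR’s Causal Map software, but the underlying idea can be sketched as aggregating coded cause-and-effect statements from interviews into a directed graph weighted by how often each link is cited. The coded statements below are invented for illustration.

```python
import networkx as nx
from collections import Counter

# Invented examples of coded (cause, effect) statements from interviews
coded_statements = [
    ("new funding", "more staff"),
    ("more staff", "more women supported"),
    ("new funding", "more women supported"),
    ("cost of living pressures", "reduced donations"),
    ("new funding", "more staff"),
]

# Build a causal map: edge weight = how many respondents cited the link
edge_counts = Counter(coded_statements)
G = nx.DiGraph()
for (cause, effect), count in edge_counts.items():
    G.add_edge(cause, effect, weight=count)

print(G.edges(data=True))
```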

Together, these features make QuIP a distinctive addition to impact analysis toolkits, offering an alternative view of sources of impact via an open, respondent-led approach.

A single QuIP typically comprises 24 in-depth interviews. This can be doubled where the remit is very broad or the potential impacts very diverse, such that a single QuIP would not have the necessary reach to be robust. Multiple single QuIPs can also be used within one evaluation to explore different groups, for example one QuIP per intervention type or per location, as relevant.

Options for using QuIP

Given the diverse nature of TTF-funded projects and the women’s sector overall, Kantar Public believe the QuIP will provide a useful additional source of data with which to assess Fund and sector impacts. Discussion will be needed to make a final decision about where and how to use the QuIP, but we suggest the following two options as most useful to our needs:

Sector-level: given the complex funding environment in the sector and the interest in exploring sector-level impact, a sector-level QuIP would be a useful addition to the evaluation, providing a different set of data and a different perspective on sector impact to supplement other data sources. In this case, Kantar Public would look to interview people in organisations across the sector to explore their views on sector funding, what has been affecting it (positively and negatively) over time, and how this may change in the future. This would help to put the TTF’s sector impact into a wider context: how it has influenced the sector to date and if/how its absence is likely to affect the sector in future. This would complement the financial analysis and survey data.

We suggest a single or even double QuIP for this purpose, given the size and shape of the sector. Recruitment for a sector-level QuIP would be through the sector survey: survey responses would be used to determine the best criteria for identifying positive and negative cases from across the sector (not just those which received TTF funding). The survey would include questions asking for permission to recontact for this purpose only, which would then be used to recruit relevant cases.

Fund-level: the diversity of the TTF’s funded projects and the absence of common outcomes make demonstrating the impact of the Fund and its projects challenging. While Kantar Public have recommended the approach best able to enable this, there is the option to add the QuIP at the fund level to explore the impacts of different interventions and/or delivery models. For example, it would be possible to run a QuIP looking at the impact of the TTF through the model of onward funding; in this case, we would speak to organisations which received TTF funding to understand the organisation, what it delivers, and what is affecting their success, assessing the degree to which the TTF influences this.

This could be duplicated for other interventions (such as capacity building or training/equipment direct to women and girls), if desired. In this case, we recommend a single QuIP for each focus intervention type.

Kantar Public firmly believe using the QuIP methodology strategically to fill gaps and add alternative perspectives will strengthen the final evaluation outcome considerably.

Final analysis and outputs

This section outlines the final analysis and reporting plan for the evaluation. This includes a summary of the analysis framework, reporting mechanisms and deliverables proposed.

Analysis and evaluation framework

The recommended evaluation plan outlined in the previous section will produce a vast quantity of evidence for both the process and impact evaluations, from multiple sources and perspectives. The different sources of evidence, and the evaluations they are expected to support, are summarised in Table 2 below (with X indicating a primary contribution and Y a secondary contribution).

Table 2. Sources of evidence for evaluations

| Evidence source | Process | Project-level impact | Sector-level impact |
| --- | --- | --- | --- |
| Bid review | X |  | X |
| Project case studies | X | Y |  |
| Unsuccessful bidder interviews | X | Y |  |
| Stakeholder interviews | X |  |  |
| Online survey of bidding and other sector organisations | X |  | Y |
| Project-level evidence review | Y | X |  |
| Financial analysis |  |  | X |
| Network analysis |  |  | X |
| QuIP |  | X | X |

Analysis framework

It will be important to analyse each of these in isolation to assess what each tells us for process and impact evaluation, then bring together all data points to understand the entire picture and make an overall assessment.

To this end, Kantar Public recommend using an analysis framework to support and structure this process. This should be developed specifically for the TTF evaluation, with one framework created for each of the evaluation strands (process, project and sector). The research team should analyse each source in isolation and summarise this in the framework (alongside a complete and more detailed written summary for use in reporting).

Once this is done, analysis sessions for each of the process, project-level and sector-level impact strands should be used to triangulate the data and discuss the findings. Key to these sessions will be understanding points of agreement and points of tension across the data, then determining the implications of this for answering each research question. By the end of these sessions, the intended narrative and findings for the evaluation should be clear.

Final reporting

Kantar Public expect the evaluation to result in a series of deliverables which bring together the findings from all strands of the evaluation (process, project-level impact and sector-level impact) into a single report. This will have a common narrative and integration across elements where appropriate (for example, referencing findings in one section which support another).

The exact approach to this will depend heavily on the final analysis and how we choose to use each data source in the final reporting. In all cases, Kantar Public expect some data to take a more central role in the analysis and findings, where it provides the most evidence, or the most robust evidence, for a specific question or strand, whilst other data is likely to play a supporting role. For example, sector-level analysis will rely heavily on the financial and network analyses, and the results of the QuIP. However, it is likely that data from the survey and findings from the case studies or evidence audit could be used to support these findings where relevant (for example, through examples of impact).

Kantar Public recommend confirming the plans for each deliverable once the analysis is in progress and we have a clearer picture of the findings and how the data will need to be handled as a result. Kantar Public will also agree the structure, contents, format and length of each deliverable with DCMS in advance of drafting.

Final deliverables

We have assumed the following standard deliverables for the evaluation; however, the final specification will be determined by DCMS.

Interim findings report: Kantar Public suggest a report be delivered at an interim point in the evaluation to share any emerging or interim findings. The exact timing would depend on the agreed timeline and be set at a point that suits DCMS’ requirements. The interim report could be a Microsoft Word report or a presentation.

Final report and presentation: Kantar Public would then deliver a final written report and presentation at the end of the evaluation, bringing together findings from all evaluation strands into a single, comprehensive assessment of TTF process and impact.

This is typically up to 100 pages and a 2-hour presentation, with the exact structure agreed in advance. This would include an executive summary that could function as a standalone report.

Additional deliverables or dissemination activities

Kantar Public recognise there is an appetite to ensure the evaluation learnings are widely disseminated and accessible to different audiences. Without knowing the exact results and what would be most suitable, we suggest setting aside a budget that can be used flexibly for this purpose. Examples of what we have done in the past include:

  • sessions run with projects throughout the evaluation lifecycle, allowing them to talk to each other about challenges and learnings, and allowing us to share insights from our work as it progresses rather than waiting for the final reports
  • learnings workshops at the end of the evaluation with the sector or with bidders from all years, to present findings and enable discussion that refines the final report and/or informs recommendations and learnings for the future
  • additional presentations to wider audiences, for example the wider DCMS or other government departments
  • modified reports suitable to different audiences
  • additional materials to synthesise findings for dissemination, for example a ‘best practice in fund design’ or ‘learnings for fund delivery’ summary, or short, visually engaging reports for easy engagement
  1. This value does not include projects funded via onward grant giving activities. 

  2. The Maryland Scientific Methods Scale is used to measure the robustness of impact evaluation methods. The five-point scale ranges from 1, for evaluations based on cross-sectional correlations (treated vs untreated, or before-and-after comparisons for the treatment group), to 5, for randomised controlled trials (randomisation into treatment, which is generally thought to provide the highest internal validity).

  3. The number of interviews delivered is in line with the original plan for delivery, with the exception of the external stakeholder group. Here the initial intention was to deliver 3 to 5 interviews; however, this was reviewed because of the significant overlap with grantee stakeholders, leaving few qualifying organisations. Interviews with past Fund recipients were therefore prioritised instead.