Guidance

Planning an evaluation: evaluation in health and wellbeing

Guidance to help public health practitioners plan evaluations.

Introduction to planning an evaluation

Successful evaluation can only be achieved with careful planning. You should think through each stage of evaluation and consider:

  • what is the aim of the evaluation
  • who needs to be involved (this will include individuals who deliver or benefit from the service or intervention being evaluated, as well as the evaluation team and potential users of the evaluation findings)
  • what resources and skills are needed
  • what are the likely timescales
  • how any findings will be used, and how any changes to the service, intervention or policy suggested by the evaluation will be communicated and made

Planning can identify problems that might arise, so that these can be addressed before the evaluation begins. In some cases, a planning assessment may conclude that evaluability is poor if, for example, there are not enough resources or there are difficulties with obtaining data.

Planning does not stop when the evaluation begins but is an ongoing process. Regular meetings are helpful to review how the evaluation is progressing in relation to the evaluation plan. At these meetings you can consider whether the plan needs to change and whether there are barriers that need to be addressed.

Planning an evaluation should take account of the aims and priorities of the organisations involved in delivering and funding the intervention (such as an NHS service, local authority or voluntary sector organisation). Evaluation also needs to be timely. For example, if the board of an organisation makes decisions or allocates resources at specific time points, then it is crucial to deliver evaluation findings before these decisions.

The following steps are recommended when planning an evaluation. Evaluations differ so some steps might be more or less relevant and may be needed in a different order.

  1. Determine who will be involved.
  2. Describe the intervention or programme.
  3. Define the evaluation questions and objectives.
  4. Agree the evaluation design and methods.
  5. Consider the context of the intervention.
  6. Consider how the evaluation findings will be disseminated and used.
  7. Prepare an evaluation plan or protocol.
  8. Set milestones and manage time.
  9. Allocate resources.
  10. Establish management strategies.

Determine who will be involved

You will need to decide who will be involved in and undertake the evaluation. It can be an individual or group internal to the organisation who are implementing the intervention, an external evaluator, or some combination of the 2 working together.

Involve stakeholders

Stakeholders are those affected by the results of an evaluation. They may be involved at any stage and could include those providing funding, developing or implementing the intervention, supporting the evaluation, or using the evaluation findings, as well as those who represent service users, relevant community organisations and other agencies working in related areas.

Stakeholders can clarify what will work in practice and where barriers may lie. If stakeholders are not involved in the evaluation, they may be resistant to recommendations for change that follow from it.

The following questions may help identify relevant stakeholders:

  1. Who is involved in the intervention, including staff, service users and funders?
  2. Who needs to be involved to carry out the evaluation?
  3. Who needs to be involved for any change to take place as a result of the evaluation?
  4. Who will be affected by any change stemming from an evaluation?

A range of stakeholders should be involved in initial evaluation discussions. Otherwise, the evaluation may address the needs of only a few, usually those with the most power, and may miss important questions and issues. In some cases you might involve organisations that represent a stakeholder group.

For example, if you were evaluating services for older people, you could involve charities or lobby groups working on behalf of older adults who are familiar with their particular concerns or needs. Such organisations can consult with their members when appropriate or help you to engage suitable individuals to provide input at different stages.

Once you have identified stakeholders, consider the stage and level of their involvement. You may need to think about whether stakeholders will be represented on a steering group or the evaluation team, whether they will attend stakeholder consultations, or whether they will take part in interviews or focus groups as part of the evaluation.

Involving every stakeholder may not be realistic, and in some cases they may not want to be involved. Evaluators may need to spend some time establishing trust and building on existing connections to engage stakeholders. It is also important to involve stakeholders in the dissemination stages of the evaluation.

Build an evaluation team

Whether you commission an external evaluation or carry it out in-house, you will need an evaluation team. The size, membership and level of involvement of this team will depend on:

  • the type of evaluation that is being carried out
  • the timescale of the evaluation
  • the size, complexity and cost of the intervention to be evaluated
  • how important the objectivity of the evaluation team is
  • who will use the findings of the evaluation and for what purpose

Some examples of people you might want to involve are:

  • members of the planning or management team from the organisation implementing the intervention
  • a practitioner or specialist with expertise in the area of public health that the intervention focuses on
  • representatives from other organisations or groups who may be directly affected by the intervention or findings from the evaluation (for example, a representative from the population targeted by the intervention, or someone from local government)
  • one or more people with the skills you will need to carry out the evaluation (perhaps a researcher, economist, academic with subject expertise, or evaluation expert)

Describe the intervention or programme

If one does not already exist, you should develop a clear and detailed description of the service, intervention or programme that you will evaluate. The description should clarify the overall purpose of the intervention (its aim) as well as its short, medium and long-term goals. The intervention or programme description should include:

  • what problem or need the intervention addresses
  • the nature and size of the problem or need
  • how, in principle, the intervention addresses the problem or meets the identified needs
  • which populations are affected by the problem and will be targeted by the intervention (you may want to highlight whether these are a priority group, for example, people living in deprived areas, older people or those with complex needs)
  • the aims and objectives of the intervention
  • the discrete activities, services or components that make up the intervention
  • the intervention’s potential capacity to bring about change, including factors that might influence its success or failure
  • the stage of development of the intervention (for example, if it is new, the purpose of the evaluation might be to develop or refine it)
  • how the intervention fits into any wider organisation or context
  • what the intervention should accomplish to be considered successful

Stakeholders should be involved in developing the description and should agree with it. This may require some discussion, as different stakeholders might have different views on the purpose of the intervention, particularly if there have been changes in the intervention or its context since it was established. It is also sensible to consider and discuss the potential for unintended consequences of the intervention and plan how the evaluation might assess these.

A ‘programme theory’ may be developed to explain how an intervention works, specifying how its activities or components contribute to a chain of effects that bring about its intended or actual impacts and outcomes. This can include both beneficial and detrimental effects. A programme theory can also highlight other factors that influence the impacts an intervention has, such as context and other initiatives being implemented concurrently (for example, policy changes). A diagram, often referred to as a logic model, can be used to represent a programme theory.

Define the evaluation questions and objectives

An evaluation usually addresses questions about whether and how the aims and objectives of an intervention were achieved. The evaluation questions are critical because they shape what data is needed and how they will be analysed.

Once the evaluation questions are defined, it is useful to formulate specific objectives for the evaluation to help structure the work programme and outputs. This involves specifying the activities needed to answer the evaluation questions (for example, recruiting participants or collecting particular data). Evaluation objectives should be SMART (specific, measurable, achievable, realistic and time-bound) and monitored as the evaluation proceeds.

Agree the evaluation design and methods

The design of an evaluation depends on the questions asked, which in turn determine the methods needed: different methods are needed to address different questions. We discuss evaluation methods further in the methods section. We have discussed 3 types of evaluation:

  1. Outcome evaluation.
  2. Process evaluation.
  3. Economic evaluation.

Of course, an evaluation could include 2 or all 3 of these types.

Consider the context of the intervention

The wider context for the intervention and evaluation includes local environmental, social and cultural factors. Consideration of these factors, including relevant policies and targets at local and national level, can help identify factors that may affect intervention implementation and effectiveness. The following questions may help identify relevant contextual factors.

  1. What are the policy drivers for the intervention?
  2. Are there related government or local targets?
  3. What other programmes or projects are operating in this area?
  4. Who are the target group for the intervention (for example, young people, expectant parents, older people) and what other initiatives are aimed at them?
  5. Are there social or cultural practices, or factors relating to age or gender, that are important to consider for your target group?
  6. Are there features of the local environment (such as recreational or shopping facilities, broadband speed, rural isolation, urban congestion, local history) that might affect your intervention?

Consider how the evaluation findings will be disseminated and used

Dissemination may be more than a single report or a presentation: a dissemination strategy can be designed to promote use of the findings, for example, to improve an intervention or change a service, through different channels. The nature of the dissemination strategy will depend on the field and what changes you want to make.

It could comprise an event or series of events at which presentations are made to representatives from various stakeholder groups (for example, funding bodies, managers from the organisation implementing the intervention, service users and the wider community), and the implications of the evaluation findings are discussed.

The main aspects of the dissemination strategy should be agreed at the beginning of an evaluation, and stakeholders can advise on dissemination approaches that are likely to be feasible and effective.

If people in your organisation have helped with the evaluation, it is also good to let them know what happened as a result to maintain good relationships and facilitate change. It is also good practice to communicate evaluation findings to participants or beneficiaries of an intervention, some of whom are likely to have given their time to supporting the evaluation, for example, by providing data.

Prepare an evaluation plan or protocol

An evaluation plan or protocol is a written document that describes how you will manage the evaluation. It clarifies the steps needed to assess the outcomes and processes of an intervention. The evaluation team and the stakeholders should agree on the contents of the evaluation plan. These usually include:

  • an overview of the intervention being evaluated
  • the purpose and scope of the evaluation
  • main evaluation questions
  • the type of evaluation needed
  • resources and expertise available or required to support the evaluation

An effective evaluation plan is a dynamic tool, or a ‘living document’, that should be updated on an ongoing basis to reflect changes and priorities over time.

Set milestones and manage time

It is important to outline a timetable for the evaluation which includes major milestones, such as:

  • obtaining any necessary ethical or other approvals
  • submitting an application for funding if necessary
  • completing recruitment of participants
  • start and end date of the intervention
  • time periods for data collection
  • data analysis
  • writing progress and final reports
  • disseminating findings at the end of the evaluation

You will need to consider how the timetable aligns with those of other relevant organisations, such as funders or monitoring organisations. It may also be important to think about any timetables affecting the implementation of an intervention (for example, school holidays or public holidays). The timetable should also allow contingency time for optimism bias, staff absence and other problems.

Allocate resources

When planning an evaluation, you will need to consider the resources including funds, time, staff capacity, skills, training, and opportunity costs (that is, what might be gained from alternative uses of the resources).

An assessment also needs to be made of the resource implications (whether potential savings or additional expenditure) of acting on the findings of the evaluation. Evaluability involves assessing whether the usefulness of the evaluation justifies the allocated resources.

If you commission an external evaluator, you may be able to use your organisation’s procurement process. You need to be aware of the policies, procedures and timelines related to this and plan ahead to ensure that you have access to all possible resources. For example, you may need to prepare an evaluation brief: a useful, short (for example, 2 to 4 pages) document that summarises the evaluation.

Internal costs may include office space and staff time. Evaluations involve spending time in planning and monitoring meetings, arranging and participating in interviews or focus groups, collecting and analysing data, and so on. Therefore, it is important to identify who will undertake each task, whether they are available at the right time and how much of their time is needed. You should negotiate the division of labour or responsibilities as early as possible, so that sufficient time can be allocated to your evaluation.

Establish management strategies

Like any project, it is important to keep the evaluation on track and to ensure that emerging issues are dealt with in a timely manner. Communicating well and early, within your own team and with any external parties such as stakeholders, a steering group or external evaluators, is crucial.

Not all challenges can be anticipated in the initial evaluation plan, so it may need to be revisited and revised. Priorities can also change once the evaluation is in progress, especially if it is conducted over a long period. In these cases, it is important to document what was changed and why, and to note any implications of these changes for the evaluation’s objectives and usefulness.

References

The following resources, included in the guide, are helpful for finding out more about planning an evaluation. Several provide step-by-step guidance.

BetterEvaluation website

Centers for Disease Control and Prevention (2011): Developing an effective evaluation plan.

Centers for Disease Control and Prevention (1999): Framework for program evaluation in public health.

Centers for Disease Control and Prevention (2008): Introduction to process evaluation in tobacco use prevention and control.

Centers for Disease Control and Prevention (2010): Learning and growing through evaluation: state asthma program evaluation guide.

NCVO Charities Evaluation Services

Department for International Development (2013): DFID Evaluation Policy 2013.

European Monitoring Centre for Drugs and Drug Addiction (2010): Prevention and evaluation resources kit (PERK).

Health Canada: A guide for first nations on evaluating health programs.

HM Treasury (2011): The Magenta Book: guidance for evaluation.

Joseph Rowntree Foundation (2005): Evaluating community projects: a practical guide.

Medical Research Council (2021): Developing and evaluating complex interventions: new guidance.

Medical Research Council (2015): Process evaluation of complex interventions.

NHS Health Scotland (2003): LEAP for health: learning, evaluation and planning.

National Science Foundation, Directorate for Education and Human Resources, Division of Research, Evaluation and Communication.

National Science Foundation (2002): The 2002 user-friendly handbook for project evaluation.

Cavill, N, Roberts, K, Ells, L (2015): Evaluation of weight management, physical activity, and dietary interventions: an introductory guide.

Public Health England (2012): Standard evaluation framework for dietary interventions.

Public Health England (2012): Standard evaluation framework for physical activity interventions.

Public Health England (2012): Standard evaluation framework for weight management interventions.

United Nations Development Programme (2009): Handbook on planning, monitoring and evaluating for development results.

US Department of Health and Human Services (2006): Guide to analyzing the cost-effectiveness of community public health prevention approaches.

US Department of Health and Human Services (2010): The program manager’s guide to evaluation (second edition).

WK Kellogg Foundation (2017): Step-by-step guide to evaluation.

World Health Organization (2000): Workbook 1: planning evaluations.

Acknowledgements

This work was partially funded by the UK National Institute for Health Research (NIHR) School for Public Health Research, the NIHR Collaboration for Leadership in Applied Health Research and Care of the South West Peninsula (PenCLAHRC), and by Public Health England. However, the views expressed are those of the authors.

Written by Sarah Denford, Jane Smith, Charles Abraham, Krystal Warmoth, Sarah Morgan Trimmer and Margaret Callaghan.

Psychology Applied to Health, University of Exeter Medical School.

Published 7 August 2018