Guidance

Evaluation in health and wellbeing: guidance summaries

Published 7 August 2018

This guidance has been withdrawn.

For up-to-date guidance see Evaluation in health and wellbeing.

How this guide was produced

This guide was funded by Public Health England and the National Institute for Health Research School for Public Health Research.

The guidance and frameworks identified and the summaries for each were developed by Prof Charles Abraham, Dr Sarah Denford and Dr Margaret Callaghan; Psychology Applied to Health, University of Exeter.

How to use this guide

Each individual guide or framework has a summary covering:

  • background information, including lead author or organisation, date, target audience
  • themes
  • summary information - an overview of the guidance being reviewed
  • strengths
  • limitations
  • additional comments

Themes

Background or overview of evaluation:

  • assessing the evidence
  • evaluability
  • common challenges
  • policy and using theory

Pre-evaluation preparation:

  • developing a protocol
  • budgeting
  • contracting and communications
  • pilot testing
  • ethics
  • needs assessment
  • evaluation planning
  • logic modelling
  • stakeholder involvement

Evaluation process:

  • overview of evaluation process
  • defining questions
  • choosing outcomes
  • describing the intervention
  • design and methodology
  • data collection
  • data analysis and interpretation

Types of evaluation:

  • overview of types of evaluation
  • process evaluation
  • outcome evaluation
  • economic evaluation
  • community projects and fidelity

Additional support:

  • tools and toolkits
  • quality assurance
  • hiring an evaluator
  • training

It is possible to do a keyword search on the page using your browser's find function (Ctrl+F).

BetterEvaluation

Background information

BetterEvaluation is a tool aimed at those new to evaluation. It provides a step-by-step guide to evaluation as well as links to training and events, case studies and evaluation materials.

Lead organisation: BetterEvaluation

Date: current, updated on an ongoing basis

Themes

Evaluability, Developing a protocol, Contracting and communications, Ethics, Needs assessment, Evaluation planning, Logic modelling, Stakeholder involvement, Defining questions, Choosing outcomes, Describing the intervention, Research design and methods, Data collection, Data analysis and interpretation, Overview of types of evaluation, Process evaluation, Tools and toolkits, Training.

Purpose and utility of guidance

Support people throughout an evaluation process.

Target audience

Those who are new to evaluation, although it could also be used by anyone who wants to plan an evaluation, and by teachers.

BetterEvaluation is an international collaboration to improve evaluation practice and theory by sharing and generating information about options and approaches.

Summary information

This is a comprehensive tool and includes:

  • evaluation options: engaging stakeholders, recruiting staff, getting funding and so on
  • discussion of different evaluation approaches: appreciative enquiry, case studies, democratic evaluation, horizontal evaluation
  • links to webinars, workshops, toolkits and events with further support
  • case studies from different parts of the world
  • links to new evaluation material

It provides detailed support on everything needed for an evaluation and assumes no previous research or evaluation knowledge, covering topics such as determining causal relationships and combining qualitative and quantitative data.

Strengths

Strengths include:

  • a clear and comprehensive guide which takes the reader step by step through the areas needed to carry out an evaluation, from engaging stakeholders and considering ethics to developing capacity and meta-evaluations
  • provides a tool which allows people to carry out an evaluation by making them consider the different stages and requirements
  • can be used by someone without evaluation experience
  • available in many languages
  • international collaboration
  • allows experts to add new content and develop aspects of the site

Limitations

Does not cover economic evaluation, possibly because it is aimed at those who are new to evaluation and this might be difficult to include. However, a reference to economic evaluation would have been useful.

Additional comments

Allows people to join and become part of an evaluation community. The site is updated regularly by members and new material is added, so it is worth monitoring.

Well-being evaluation tools: a research and development project for the Big Lottery Fund handbook

Review details

The Big Lottery Fund designed Well-being evaluation tools: a research and development project for the Big Lottery Fund handbook to support recipients of its English well-being fund.

Lead author/organisation: Big Lottery Fund (England)

Date: January 2009. Length: 18 pages

Purpose and utility of guidance

The handbook is intended for use as part of the Big Lottery well-being programme and by the two partners involved in its Changing Spaces programme. It is a practical guide to measuring the outcomes of these well-being projects and provides guidance on the evaluation tools which were developed for recipients of the lottery grants.

Themes

Evaluation processes: Choosing outcomes, Research design and methods, Tools and toolkits.

Primary target audience

Portfolio leads, award partners, project managers and project workers who are funded by the Big Lottery.

Contextual information

The Wellbeing Programme is a £160 million Big Lottery Fund programme supporting projects across the country working on these themes:

  • healthy eating
  • physical activity
  • mental well-being

The programme is structured into 17 portfolios, each holding a selection of projects addressing at least 1 of the above themes. Portfolios have received funding for 3 to 5 years with most commencing activities in early 2008.

Summary/overview

This is a handbook for recipients of the Big Lottery Fund's well-being fund, designed to support them to evaluate the impact of their funded work. As there is a range of different types of organisations and projects, it provides guidance on when a consistent approach is required across all projects and portfolios involved in the national evaluation, and prompts users to think about how they might want to use the tools for their own specific needs.

It describes a range of evaluation tools, all questionnaires (including a ‘core’ questionnaire which all projects should use), designed to measure distance travelled (a sketch of this measure follows the list below). It also covers:

  • who should use the evaluation tools
  • sample sizes required
  • how to select the right tools for your project (using a clear flow chart)
  • how to access these tools and fit them with your evaluation plan
  • how and when to use the evaluation tools (and support for this)
  • how to access the data
  • ethical approval, consent and confidentiality
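As a rough illustration of what a ‘distance travelled’ measure involves, the sketch below computes each participant’s change in questionnaire score between project entry and follow-up. The participant IDs, scores and scoring rules are hypothetical, not taken from the handbook.

    # Hypothetical 'distance travelled' calculation: the change in each
    # participant's core questionnaire score between entry and follow-up.
    entry_scores = {"P01": 12, "P02": 18, "P03": 9}      # score at project entry
    followup_scores = {"P01": 17, "P02": 19, "P03": 15}  # score at follow-up

    changes = {pid: followup_scores[pid] - entry_scores[pid] for pid in entry_scores}
    mean_change = sum(changes.values()) / len(changes)

    print(changes)                             # per-participant distance travelled
    print(f"mean change: {mean_change:+.1f}")  # average change across the project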

It includes:

  • introduction
  • the evaluation tools: specific to evaluation of Big Lottery projects
  • selecting the right tools for your project including a useful flow chart for choosing the right tools
  • using the evaluation tools
  • accessing and making use of the data
  • ethical approval, consent and confidentiality
  • further information and support including online tools and contacts

Strengths

Strengths include:

  • clear and easy to use
  • written specifically for the needs of its audience
  • useful flowchart for choosing the right tools
  • emphasis on effective use of resources and on the recipients' needs
  • sources of further support offered by the Big Lottery national evaluation team including a web page with tools and support from the rapid response team and specific contact details
  • gives contact details for economic evaluation support
  • alongside local evaluations there is a national evaluation focusing on the economics of these interventions
  • can be converted to other file formats
  • provides links to related resource for registered users
  • outcomes focused

Limitations

Very specific to recipients of Big Lottery funding and of limited use for other programmes due to:

  • quite rigid methods
  • no mention of stakeholder engagement
  • does not discuss whether outcomes are cost-effective or could be better achieved by another measure
  • the tools it includes for evaluation are all questionnaires containing closed questions (designed for consistency but may not reflect the real work of the projects)
  • does not explore what to do about people who drop out of the project

Additional comments

There is also a linked national evaluation which will use different research methods.

Capacity for Health (C4H) monitoring and evaluation resource

The Monitoring and evaluation (ME) resource is a library funded by the Centers for Disease Control and Prevention to support capacity building in HIV projects. It includes literature, reports and webinars, divided into 3 categories: infrastructure and sustainability; evidence-based information and strategies; and monitoring and evaluation. Each section has associated resources.

Review details

Lead author/organisation: Capacity for Health (C4H)

Date: current (2017)

Purpose and utility of guidance

A resource library funded by the CDC.

Themes

Background to evaluation: Overview of evaluation. Pre-evaluation preparatory work: Needs assessment, Evaluation planning, Logic modelling. Evaluation processes: Describing the intervention, Data collection, Data analysis and interpretation. Additional support: Quality assurance.

Primary target audience

Community-based organisations working in HIV prevention (unstated but focused on USA).

Contextual information

C4H is an organisation which provides free support to Health Departments across the US and affiliated territories. Their focus is on AIDS and HIV prevention and they tailor the support that they offer to specific circumstances.

Summary/overview

The library contains a range of resources to support capacity building including literature, reports and webinars. It divides these resources into 3 categories:

  • organizational infrastructure and program sustainability (running an organisation)
  • evidence-based intervention and public health strategies (strengthening an organisation's ability to implement effective interventions)
  • monitoring and evaluation (measuring process and outcome and understanding strengths and areas for improvement)

All of these areas have links to resources. It also provides links to support services, the C4H programme website and an index of further tools.

Evaluation specific topics include:

  • evaluation overview
  • types of evaluation
  • evaluation planning
  • needs assessment
  • logic models
  • data collection methods
  • data management and analysis
  • sharing findings
  • evaluation capacity
  • participatory evaluation
  • quality assurance (QA)

Strengths

Strengths include:

  • clear and easy to access, search and read
  • resources that it links to are free
  • specifies a clear purpose
  • provides a link to a wide range of resources

Limitations

Limitations include:

  • the title does not make clear the purpose of the webpage, or that the resources focus on HIV work in the US
  • it is uncertain how these resources were chosen, whether selection was systematic and whether there was quality control
  • not specifically evaluation focused, although evaluation is included
  • best used by someone with evaluation experience and some idea of what they are looking for, as someone else could get lost amongst the different resources
  • not clear how this relates to other HIV and evaluation work in the US; C4H seems to be a programme specifically for those working within the Asian and Pacific Islander community in America (this is not evident on the main web page)

Centers for Disease Control and Prevention (CDC): a framework for program evaluation in public health

A framework for program evaluation in public health is for those with some evaluation knowledge to evaluate the effects of public health actions and to link evaluation with program management.

Review details

Lead author/organisation: Centers for Disease Control and Prevention (CDC)

Date: 1999

Purpose and utility of guidance

Use this if you want to evaluate the effects of public health actions.

Themes

Pre-evaluation preparatory work: Stakeholder involvement. Evaluation processes: Overview of evaluation, Describing the intervention, Research design and methods, Data collection, Data analysis and interpretation.

Additional Support: Quality Assurance.

Primary target audience

Public health professionals.

Contextual information

In May 1997, the CDC director and executive staff decided that there was a need to produce a basic, organisational framework for evaluation in public health, a need to combine evaluation with program management, and a need for evaluation studies that demonstrate the relationship between program activities and prevention effectiveness. CDC convened an evaluation working group, charged with developing a framework that summarises and organises the basic elements of program evaluation.

Summary/overview

The framework is described as a practical, non-prescriptive tool, designed to summarise and organise essential elements of program evaluation.

The framework comprises 6 steps in program evaluation practice and 4 standards for effective program evaluation.

Adhering to the steps and standards of this framework will allow an understanding of each program’s context and will improve how program evaluations are conceived and conducted. Furthermore, the framework encourages an approach to evaluation that is integrated with routine program operations. The emphasis is on practical, ongoing evaluation strategies that involve all program stakeholders, not just evaluation experts.

Specifically, the main purpose of the framework is to:

  • summarise and organise the essential elements of program evaluation
  • provide a common frame of reference for conducting effective program evaluations
  • clarify steps in program evaluation
  • review standards for effective program evaluation
  • address misconceptions about the purposes and methods of program evaluation

These 6 connected steps and 4 standards provide a systematic way to approach and answer the questions below:

  • what is the program and in what context does it exist?
  • what aspects of the program will be considered when judging program performance?
  • what standards (that is, type or level of performance) must be reached for the program to be considered successful?
  • what evidence will be used to indicate how the program has performed?
  • what conclusions regarding program performance are justified by comparing the available evidence to the selected standards?
  • how will the lessons learned from the inquiry be used to improve public health effectiveness?

The steps:

  • engage stakeholders
  • describe the programme
  • focus the evaluation design
  • gather credible evidence
  • justify conclusions
  • ensure use and share lessons learned

The standards:

  • utility
  • feasibility
  • propriety
  • accuracy

Strengths

Strengths include:

  • the framework summarises and organises essential elements of program evaluation
  • it guides users in selecting strategies that are useful, feasible, ethical and accurate
  • provides guidance on conducting practical evaluation, within the confines of resources, time, and political context
  • the website provides numerous links and information related to the framework
  • the framework emphasises considerable stakeholder involvement

Limitations

Limitations include:

  • the focus is on practical, rather than high-quality, evaluation
  • it assumes basic knowledge of evaluation
  • the framework itself may be of little use without all the supporting documents
  • the volume of information available may be overwhelming

CDC, Introduction to program evaluation for public health programs: a self-study guide

CDC, Learning and growing through evaluation: state asthma evaluation guide

TB program evaluation handbook: Introduction to program evaluation

Introduction to program evaluation

Adolescent and school health: Evaluations of innovative programs

Program performance and evaluation office (PPEO): a framework for program evaluation

Program performance and evaluation office (PPEO): CDC Evaluation Resources

Centers for Disease Control and Prevention (CDC): developing an effective evaluation plan

Part I of Developing an effective evaluation plan defines and describes how to write an effective evaluation plan, going through the 6 steps from engaging stakeholders to planning for conclusions.

Review details

Type of document/accessibility: online PDF

Date: 2011. Length: 115 pages

Lead author/organisation: National Center for Chronic Disease Prevention and Health Promotion. Centers for Disease Control and Prevention’s (CDC’s) Office on Smoking and Health (OSH) and Division of Nutrition, Physical Activity, and Obesity (DNPAO)

Purpose and utility of guidance

Part of a series of workbooks which are intended to offer guidance and facilitate capacity building on a wide range of evaluation topics. They can be adapted to meet a specific program's evaluation needs. It is not a complete ‘how to’ guide but should be used alongside other evaluation resources, some of which are listed.

Themes

Pre-evaluation preparatory work: Contracting and communications, Evaluation planning, Logic modelling, Stakeholder involvement. Evaluation Processes: Overview of evaluation process, Defining questions, Choosing outcomes, Describing the intervention, Research design and methods, Data collection. Types of evaluation: Economic evaluation. Additional support: Tools and toolkits, Quality assurance.

Primary target audience

Program managers and evaluators working in any public health program.

Contextual information

CDC is the US public health agency responsible for the prevention of chronic disease.

Summary/overview

This workbook applies the CDC Framework for Program Evaluation in Public Health which sets out a 6-step process for the decisions and activities involved in conducting an evaluation.

Part I of this workbook defines and describes how to write an effective evaluation plan, going through the 6 steps.

The process of participatory evaluation planning:

  1. Engage stakeholders.
  2. Define the purpose of the evaluation.
  3. Describe the program: shared understanding, narrative description, logic model, stages of development, focus the evaluation and develop the questions.
  4. Set the budget and resources.
  5. Plan the gathering of evidence: methods, credible evidence, measuring, data sources and methods, roles and responsibilities.
  6. Plan for conclusions: planning for dissemination and sharing of lessons learned, communication, dissemination plans and ensuring use.

Part II of this workbook includes exercises, worksheets, tools and a resource section to facilitate program staff and evaluation stakeholder workgroup (ESW) thinking through the concepts presented in Part I of this workbook.

There are exercises and tools for each of the steps which allow the user to put all of the steps into practice in their own project by choosing the worksheets and tools which are relevant to them.

Strengths

Strengths include:

  • clearly written and comprehensive
  • clear about audience and aim
  • takes readers step by step through the evaluation process and provides tools at each stage
  • includes budgeting
  • includes planning for change
  • refers to further resources

Limitations

Limitations include:

  • could usefully have a discussion of policy
  • does not mention economic effectiveness of programme
  • would be improved by more worked examples

The CDC framework for evaluation

Additional comments

It might be useful to consider together all the CDC workbooks that were accessed in this search.

Centers for Disease Control and Prevention (CDC): state asthma program evaluation (1)

The Learning and growing through evaluation: state asthma program evaluation guide module 1 workbook applies the CDC framework for program evaluation in public health, which sets out a 6-step process for the decisions and activities involved in conducting an evaluation.

Review details

Date: April 2010. Length: 149 pages

Lead author/organisation: CDC National Center for Chronic Disease Prevention and Health Promotion

Themes

Pre-evaluation preparatory work: Contracting and communications, Pilot testing, Evaluation planning, Stakeholder involvement. Evaluation processes: Overview of evaluation, Defining questions, Choosing outcomes, Research design and methods, Data collection. Additional support: Tools and toolkits, Training.

Purpose and utility of guidance

This is part of a series of evaluation documents. The first module in this series focuses on planning for evaluation. The second module builds on this to support implementing planning. Module 3 applies these tools to the evaluation of state asthma program partnerships.

Primary target audience

This guide is intended to be used by state and territorial public health departments (SHDs) that are receiving CDC funding for state asthma programs.

Contextual information

The CDC is the nation’s health protection agency. The goal of the CDC is to protect America from health, safety and security threats.

This series of guides was developed for the National Asthma Control Program, relates to the CDC framework and applies this framework to evaluation of asthma programs.

Summary/overview

The guide begins with an overview of the CDC framework for evaluation of public health programs.

Chapter 2 provides detail on how to develop a strategic evaluation plan (that is, the overview of multiple evaluations that will occur over a specific time period). This includes:

  • step A: establishing an evaluation planning team
  • step B: describing the program
  • step C: prioritising program activities for evaluation (for example, cost, sustainability)
  • step D: considering evaluation design elements
  • step E: developing a cross-evaluation strategy
  • step F: promoting use through communication (for example, communicating the findings to improve program development)
  • step G: writing and revising your strategic evaluation plan

Chapter 3 provides detail on how to develop an individual evaluation plan (as included in the larger, more comprehensive strategic evaluation plan). This includes:

  • step 1: engage stakeholders
  • step 2: describe what is being evaluated
  • step 3: focus the evaluation design
  • step 4: gather credible evidence
  • step 5: justify conclusions
  • step 6: ensure use of evaluation findings and share lessons learned

Strengths

Strengths include:

  • good clear overview of the use of the framework applied to an evaluation of an asthma program
  • informs readers how each section relates to the CDC framework
  • provides articles to aid understanding
  • the appendices contain a number of useful toolkits to support planning evaluations

Limitations

Limitations include:

  • does not discuss evaluation theory
  • examples largely relate to asthma
  • lack of information on economic evaluation

Implementing evaluations: learning and growing through evaluation module 2

Evaluating partnerships: learning and growing through evaluation module 3

Additional comments

Part of a series of modules and should be read in that context.

Centers for Disease Control and Prevention (CDC): state asthma program evaluation (2)

The first section Implementing evaluations: learning and growing through evaluation module 2 identifies methods for moving from planning to implementation by using information from evaluation planning to inform evaluation implementation. It also describes the difference between various planning and implementation teams as well as appropriate team members.

Review details

Type of document/accessibility: online PDF

Date: August 2011. Length: 135 pages

Lead author/organisation: National Center for Chronic Disease Prevention and Health Promotion

Themes

Pre-evaluation preparatory work: Budgeting, Contracting and communications, Evaluation planning, Stakeholder involvement. Evaluation processes: Overview of evaluation, Defining questions, Choosing outcomes, Describing the intervention, Research design and methods, Data collection, Data analysis and interpretation. Additional support: Tools and toolkits, Quality assurance, Hiring an evaluator, Training.

Purpose and utility of guidance

This is part of a series of evaluation documents. The first module in this series focuses on planning for evaluation. This module builds on that to support implementing planning. Module 3 applies these tools to the evaluation of state asthma program partnerships.

Primary target audience

Anyone involved in implementing evaluations

Contextual information

CDC is the US public health agency responsible for the prevention of chronic disease.

Summary/overview

The first section of this module (2) identifies methods for moving from planning to implementation by using information from evaluation planning to inform evaluation implementation. The second chapter takes trainees through:

  • different stages of an evaluation
  • stakeholders
  • developing an evaluation process
  • training in data collection
  • monitoring and ongoing communication
  • developing an action plan, linking back to the original plan
  • successfully implementing change

The document covers strategies for successful implementation:

  • working with stakeholders
  • developing a process for managing the evaluation
  • pilot testing
  • training for data collection
  • getting a team together
  • monitoring progress and promoting ongoing communication
  • interim report and dissemination
  • developing an action plan
  • lessons learned, linking back to the strategic evaluation

Appendices are included on:

  • meeting evaluation challenges
  • evaluation anxiety
  • common evaluation designs
  • budgeting for evaluation
  • evaluation management toolkit
  • gathering credible evidence
  • training and supervising data collectors
  • effective communication and reporting
  • developing an action plan

Strengths

Strengths include:

  • builds on the previous document, ‘planning for evaluation’, and takes readers step by step from planning to implementing evaluations
  • could be used on its own to implement evaluation
  • emphasises engaging the right people at the start
  • stresses that an evaluation plan is an implementation plan and focuses on action as a result of the evaluation
  • relates evaluation to general project management, drawing out similarities
  • the appendices contain a number of useful toolkits to support implementation

Limitations

Limitations include:

  • it would be useful if it stated where it sits in the series of modules, although it does seem designed to be accessed through the main page, which provides context
  • it is for evaluating asthma programmes and it would be helpful if this were made clear in the title; read within the context of the other modules it would be clearer
  • there are aspects of evaluation missing, but they may well be part of other training modules
  • there is no mention of cost-effectiveness or other economic considerations
  • although the plan does consider stakeholders, it focuses more on the mechanics of keeping them involved rather than the ethos and the need for them to help to define the goals

Learning and growing through evaluation: state asthma program evaluation guide

Evaluating partnerships: learning and growing through evaluation module 3

Additional comments

Part of a series of modules and should be read in that context.

Centers for Disease Control and Prevention (CDC): process evaluation in tobacco use prevention and control

Introduction to process evaluation in tobacco use prevention and control provides process evaluation technical assistance.

This handbook defines process evaluation and describes the:

  • rationale
  • benefits
  • main data collection components
  • program evaluation management procedures

It also discusses how process evaluation links with outcome evaluation and fits within an overall approach to evaluating comprehensive tobacco control programs. This manual complements other CDC initiatives which concentrate on outcome evaluation.

Review summary

Type of document/accessibility: online PDF

Date: February 2008

Length: 71 pages

Lead author/organisation: Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion, Office on Smoking and Health

Themes

Evaluation processes: Overview of evaluation process. Types of evaluation: Process evaluation.

Purpose and utility of guidance

Intended to provide process evaluation technical assistance.

Primary target audience

Office of Smoking and Health staff, grantees and partners.

Contextual information

CDC is the US public health agency responsible for the prevention of chronic diseases.

Summary/overview

This manual:

  • provides a framework for understanding the links between inputs, activities, and outputs and for assessing how these relate to outcomes
  • can assist state and federal program managers and evaluation staff with the design and implementation of process evaluations for evidence of progress in tobacco control

It gives general principles which can be adapted to be used in different circumstances.

Introduction to process evaluation in tobacco use prevention and control:

  • the planning/program evaluation/program improvement cycle
  • distinguishing process evaluation from outcome evaluation (outcome evaluation is about the results of an intervention; process evaluation is about what is done to achieve those results)

Purposes and benefits of process evaluation:

  • definition of process evaluation and the scope of tobacco control activities (what level of information needs to be collected to answer the question)
  • purposes of process evaluation and monitoring, improvement, effective program model and accountability
  • users (those affected by the programme, those who manage and staff it, and those who have power over it) and uses of process evaluation information (how it helps in understanding outcomes)

Information elements central to process evaluation:

  • indicators of inputs/activities/outputs
  • comparing process information to performance criteria

Managing process evaluation (10-step process divided into 4 stages):

  • groundwork
  • formalisation (such as developing the proposal and tools)
  • implementation
  • utilisation

Conclusion:

  • using process evaluation in conjunction with outcome evaluation gives a better way to focus time and resources
  • appendices which include framework and standards for evaluation

Strengths

Strengths include:

  • clear step by step approach supported by examples and appendices for further information
  • an overarching guide rather than a step by step approach
  • useful to focus on process evaluation
  • includes a range of case examples which show the use of process evaluation in practice
  • discusses the protection of participants

Limitations

Limitations include:

  • focus only on process evaluation
  • focus on tobacco, but could be used in other areas
  • no economic considerations

Additional comments

Read in conjunction with other CDC work on evaluation, in particular that on outcome evaluation, which this is designed to complement.

Those mentioned are Key outcome indicators for evaluating comprehensive tobacco control programs and Introduction to program evaluation for comprehensive tobacco control programs.

Centers for Disease Control and Prevention (CDC): Program Performance and Evaluation Office (PPEO)

The CDC’s Program Performance and Evaluation Office (PPEO) - program evaluation resources provides support for those conducting program evaluations of CDC priorities. The website sets out a clear framework for program evaluation which is in line with other such evaluation frameworks. It takes readers very clearly through each step of the framework, describing what needs to be done at that step and how to do it. It links to documents which provide more detail on these steps (such as engaging stakeholders, describing the programme, focusing the evaluation and gathering evidence, all the way through to writing reports and disseminating).

Review details

Type of document/accessibility: online webpage

Lead author/organisation: Centers for Disease Control and Prevention (CDC)

Date: regularly updated

Themes

Evaluation processes: Overview of evaluation process.

Purpose and utility of guidance

To support people to conduct program evaluation.

Primary target audience

Those working in the field of evaluating programs that aim to prevent diseases. This includes evaluators, researchers and policy makers whether they are new or experienced. It is aimed at a US audience but could be usefully used by others.

Contextual information

The role of the CDC is to protect American citizens from health, safety and security threats both within and outside the US. In order to do this, it conducts research and provides health information to protect people from health threats as well as responding to new threats.

Summary/overview

This part of CDC’s work focuses on program evaluation to improve public health by making evaluation a routine part of work to improve program effectiveness. The website:

  • sets out a clear framework for program evaluation which is in line with other such evaluation frameworks and takes readers very clearly through each step of the framework, describing what needs to be done at each step and how to do it
  • links to documents which provide more detail on these steps (such as engaging stakeholders, describing the program, focusing the evaluation and gathering evidence all the way through to writing reports and disseminating)
  • provides a good overview of standards and how they are used in evaluation
  • outlines learning from evaluation and using results in practice, and gives guidance on recruiting an evaluation team
  • provides a range of documents which presents the evaluation framework in different ways for different audiences
  • provides links to a range of self-study guides for program evaluation
  • provides links to a range of evaluation resources

The resources which are referred to are detailed and would allow a reader to get an excellent overview and understanding of evaluation.

Strengths

Strengths include:

  • resource that could be used by new and experienced evaluators
  • the information that it provides is useful and more comprehensive than similar guides
  • the layout of the information means that a reader can get an overview of evaluation and then focus on areas where they wish to know more or where there are gaps in their knowledge
  • even those with significant experience could still find it useful
  • clear and comprehensive
  • easy to navigate
  • provides links to evaluation guides which describe aspects of evaluation in depth
  • particularly useful information on who to include in an evaluation team
  • has clear contact details for further information
  • has a search feature

Limitations

Limitations include:

  • US focused
  • it would be useful if the target audience were set out more clearly
  • nothing on economic evidence or comparing programs

Additional comments

There are a number of papers and web pages about evaluation from CDC and it might be worth considering them together.

Provides links to a range of self-study guides.

NCVO Charities Evaluation Services

NCVO Charities Evaluation Services supports charities to evaluate programmes. It provides information on how the organisation can help, the training, events and consultancy services it provides, and the quality mark for charities. It also links to support on the different stages of evaluation.

Review details

Type of document/accessibility: online webpage

Lead author/organisation: NCVO Charities Evaluation Services

Date: current

Themes

Pre-evaluation preparatory work: Budgeting, Evaluation planning, Logic modelling. Evaluation processes: Choosing outcomes, Research design and methods, Data collection. Additional support: Training.

Purpose and utility of guidance

This webpage explains what CES is, what services the organisation offers and how to access their consultancy and training services. It is also a resource on self-evaluation for voluntary sector organisations and their funders.

Finally, it links to information on PQASSO, the quality assurance system that CES developed specifically for the voluntary sector. The PQASSO pages are now housed on the NCVO website (see below). Since this review, the NCVO and PQASSO team have been restructured.

Primary target audience

People working in the charitable and voluntary sector, and their funders, who want to improve their evaluation, with a focus on outcomes and impact.

Contextual information

CES was established in 1990. It works with voluntary organisations to support and strengthen their ability to carry out evaluation with the aim of making charities more effective. In November 2014 CES merged with NCVO.

Summary/overview

This webpage provides links to sources of support for evaluation.

It states:

  • how CES can help
  • what training and events they offer and the consultancy service they provide
  • how they work with funders and charities

The site provides links to useful tools which support different stages of evaluation including:

  • planning
  • assessing baseline information and progress towards target
  • developing outcomes and indicators
  • data collection tools and related publications and research
  • listing evaluation tools available within the sector

The site provides links to reports, research and examples of evaluations in practice.

Strengths

Strengths include:

  • clear and easy to use site
  • provides evaluation support for those working in the voluntary sector and their funders
  • has links to consultancy, tools and training
  • has a black and white accessible version (although, as it is for charities, other types of access would have been expected)

Limitations

Limitations include:

  • little mention of economic evaluation or showing cost-effectiveness
  • does not go into a great deal of detail on evaluation, though the links provide more depth and this is explored further in tools and resources

Additional comments

CES merged with NCVO in November 2014.

European Centre for Disease Prevention and Control: evidence-based methodologies for public health

Evidence-based methodologies for public health - how to assess the best available evidence when time is limited and there is a lack of sound evidence supports professionals working in public health or communicable disease control when it is difficult both to provide support and to study the disease, for example after a sudden outbreak. It focuses on using methods from evidence-based medicine.

Review details

Type of document/accessibility: online PDF

Date: September 2011. Length: 67 pages

Lead author/organisation: European Centre for Disease Prevention and Control

Themes

Background to evaluation: Assessing the evidence, Evidence-based medicine. Pre-evaluation preparatory work: Ethics, Stakeholder involvement. Additional support: Tools and toolkits.

Purpose and utility of guidance

To support those who give advice on evidence for public health in communicable diseases where:

  • evidence is uncertain
  • the situation is complex
  • there is short notice: the epidemiology of a communicable disease can often only be studied during an outbreak, so time is short
  • patient care is required at the same time
  • there is a need to prevent the disease being spread and the situation is changing on an ongoing basis

Primary target audience

Public health professionals and policymakers who evaluate evidence in order to give advice in public health for communicable diseases.

Contextual information

The European Centre for Disease Prevention and Control (ECDC) was established to enhance the capacity of the European Community to protect human health through the prevention and control of human diseases: to identify, assess and communicate current and emerging threats posed by communicable diseases.

Summary/overview

This report focuses on how to use methods from evidence-based medicine in general to support evidence-based public health medicine specifically in the field of communicable disease.

It describes in detail the principles of evidence-based medicine and the specific challenges associated with using this in the field of public health (where there is less evidence on the outcomes of interventions in general and where it is more difficult to use systematic reviews and randomised control trials). It discusses the usefulness of grading tools for evidence in the field of public health.

In accordance with the aims, the document is presented in 4 main sections:

  • how to give evidence-based guidance when evidence is scarce and time is limited
  • the usefulness of using evidence-based tools for grading evidence in the field of communicable diseases
  • assessing the quality of guidelines for preventing and controlling communicable diseases
  • using consensus methods for decision making

With regard to giving guidance when resources are scarce, the document presents a 5-stage framework for rapid risk assessments. This includes a preparatory phase and further stages of:

  • risk detection/verification
  • assessment of the risk
  • development of the advice
  • implementation and evaluation

Practical tools and templates for each stage are also presented, and the importance of being prepared and having tools at hand when an outbreak occurs is underlined.

The usefulness of evidence-based methods and grading tools is explored in the next section. A variety of methods for reporting, assessing and grading evidence are identified and the applicability of these tools in a public health setting is discussed. Many tools required to produce evidence-based advice already exist, but there is a need to further develop instruments and checklists for some of the study designs relevant to public health.

For assessing the quality of guidelines in relation to infectious diseases, an evaluation of the AGREE II tool is presented. The importance of the different domains and items is discussed and some additional criteria for communicable diseases guidelines are proposed.

In the final section, the document suggests that consensus methods can be used to evaluate the evidence, improve the balance of subjective interpretation of evidence from systematic reviews and develop best available expert judgements in settings lacking evidence. Consensus methods can be applied by members of a guidance development group and as a method to facilitate implementation in hearing processes among stakeholders. The importance of transparency in public health decision-making, the role of experts and how to apply different consensus methods under different timelines are discussed.

Strengths

Good guide to using evidence assessment methods for public health.

Limitations

Not about evaluation but about assessing evidence for public health.

European Evaluation Society

European Evaluation Society is for professionals working in evaluation in Europe, allowing them to share knowledge and get information about conferences, jobs and other opportunities.

Review details

Type of document/accessibility: webpage

Lead author/organisation: European Evaluation Society

Date: 2016

Themes

Additional support: Training.

Purpose and utility of guidance

This website aims to bring academics, policy makers and practitioners in different geographical and topic areas together to encourage knowledge exchange, good practice dissemination, professional co-operation and bridge building.

The society has 4 strategic aims:

  • working with and supporting the development of evaluation societies within and outside Europe
  • thematic working groups (TWGs)
  • capacity development and professionalisation
  • improving the European Evaluation Society (EES) financial situation

Primary target audience

Academics, policymakers and practitioners working in evaluation in Europe and beyond.

Contextual information

The European Evaluation Society promotes the theory, practice and use of high-quality evaluation in Europe and beyond.

Summary/overview

This web page:

  • explains what the European Evaluation Society is and what its aims are
  • describes its partners and their work
  • provides information on conference and training events and the material at these events
  • provides information on vacancies and tenders in evaluation
  • has an interactive blog and discussions
  • provides links to other resources including glossaries, standards, journals, evaluation tools and libraries on evaluation throughout the world

Strengths

Strengths include:

  • Europe-wide
  • single website for jobs, training, conferences, problem-solving etc.
  • a forum for users to discuss problems
  • website clear and easy to navigate

Limitations

Limitations include:

  • some web links are broken
  • aimed at experienced evaluators
  • does not explain what evaluation is (but aimed at a professional audience working in evaluation)

European Monitoring Centre for Drugs and Drug Addiction: PERK

Prevention and evaluation resources kit (PERK): a manual for prevention professionals is for evaluation in the area of substance misuse in the EU. It is a resource kit for professionals working in this area and provides support at all stages as well as tips and documents.

Review details

Type of document/accessibility: online PDF

Date: 2010. Length: 100 pages

Lead author/organisation: European Monitoring Centre for Drugs and Drug Addiction

Themes

Background to evaluation: Overview of evaluation, Assessing the evidence, Evidence-based medicine. Additional support: Tools and toolkits.

Purpose and utility of guidance

This resource kit compiles basic, evidence-based prevention principles, planning rules and evaluation tips, documents and references for further support.

Primary target audience

Professionals working to prevent substance misuse in the EU:

  • prevention policy planners, for example, by providing information on which strategies are effective or on how to determine whether a project (proposal) is sound and well designed
  • prevention professionals and project developers, through the provision of background literature, theories, references and evaluation tools

Contextual information

The European Monitoring Centre for Drugs and Drug Addiction (EMCDDA) is an EU decentralised agency based in Lisbon. It is the central source of comprehensive information on drugs and drug addiction in Europe. It collects, analyses and disseminates factual, objective, reliable and comparable information on drugs and drug addiction in order to provide an evidence-based picture of the drug phenomena at the European level.

Summary/overview

PERK's approach is that evaluation should be considered at the planning stage of a prevention intervention.

Planning: step by step through the development of an intervention and the knowledge base in prevention. Ideas can be added or revised as the project progresses depending on resources and setting.

A compilation of materials, sources and instruments is provided to support the setting up and evaluation of prevention interventions. These were based on training sessions throughout Europe to find out what prevention professionals want. This includes models, theories and evaluation principles: not only elements that work, but also what is popular but does not work.

Emphasis on evidence base and theoretical underpinning rather than opinion and perspective.

Project leaders are guided first to carry out a needs assessment, then to use a logic model to develop prevention interventions so that the objectives, hypotheses, content and indicators logically build on each other, are relevant and address the problem (a sketch of this chain follows).
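As an illustration of how these elements build on each other, here is a minimal, hypothetical logic model sketch (the structure, field names and example content are illustrative; they are not prescribed by PERK):

    # Hypothetical logic model sketch: each element builds on the one before,
    # so that indicators trace back to the objective and hypothesis.
    logic_model = {
        "problem": "rising alcohol use among 14- to 16-year-olds",  # from needs assessment
        "objective": "reduce monthly alcohol use in the target group",
        "hypothesis": "improving refusal skills reduces alcohol use",
        "content": ["classroom refusal-skills training", "parent information sessions"],
        "indicators": ["self-reported monthly alcohol use", "refusal-skills score"],
    }

    # Quick coherence check: every indicator should measure progress towards
    # the stated objective or test the stated hypothesis.
    for indicator in logic_model["indicators"]:
        print(f"{indicator} -> evidence for: {logic_model['objective']}")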

Each step in the evaluation begins with a description of the theory, then the process, then a real-life example to illustrate the theory.

Contains a resource kit with materials that have already been used successfully.

Strengths

Strengths include:

  • evidence-based source for EU
  • useful and clear step by step approach
  • makes use of existing material which has been used in practice in the EU and US
  • a related website which provides further resources and can be adapted as other material becomes available
  • illustrates theory with real examples
  • mentions cost effectiveness and efficiency

Limitations

Limitations include:

  • not enough discussion on cost-effectiveness
  • it is specific to substance misuse prevention, though it could be used with other projects

Evaluation Support Scotland (ESS)

Evaluation Support Scotland works with third sector organisations and funders to support and enable them to measure the impact of activities. The Evaluation Support Scotland webpage provides links to evaluation topics including how to budget. Practical tools are also provided. It provides guides for different types of charities and offers training.

Review details

Type of document/accessibility: website

Lead author/organisation (and source): Evaluation Support Scotland

Date: 2005 (ongoing)

Themes

Pre-evaluation preparatory work: Budgeting, Logic modelling. Evaluation processes: Defining questions, Choosing outcomes, Research design and methods, Data collection, Data analysis and interpretation. Additional support: Tools and toolkits, Hiring an evaluator, Training.

Purpose and utility of guidance

This resource is for third sector organisations and funders.

Primary target audience

Those funding or conducting self-evaluation within third sector organisations.

Contextual information

ESS supports third sector organisations and funders to be better at measuring their impact and reporting on the difference they make, in order to deliver better services.

Summary/overview

The ESS website has free resources, tools, thematic guides, case studies and reports to help third sector organisations and funders to evaluate and learn from the evidence.

The website includes a series of downloadable ESS support guides on a number of evaluation topics including:

  • clarifying aims, outcomes, and activities
  • developing a logic model
  • developing and using indicators
  • using interviews and questionnaires
  • visual approaches
  • using technology
  • storing information
  • analysing information
  • writing case studies
  • report writing
  • using qualitative data
  • using what you learn from evaluation
  • getting the best from an external evaluation
  • how to budget for self-evaluation

A number of practical tools are also provided, including templates, route maps, and workbooks to support different stages of evaluation.

Guides organised by theme provide guidance and tools for evaluating:

  • arts and sport
  • children and young people
  • community and learning development
  • environment
  • equalities
  • health and social care
  • homelessness
  • international development
  • partnership
  • quality
  • social enterprise
  • substance misuse
  • volunteering
  • criminal justice

Case studies of evaluations (and advantages and disadvantages) are presented. Reports of such evaluations are also available.

The final section on the website provides information on external consultants as well as a link to a database of external consultants.

An online training module ‘Getting the best from external evaluation’ is available.

Evaluation skills workshops and thematic learning programmes are delivered across Scotland. Resources produced from these programmes are available on the website.

Strengths

Strengths include:

  • provides good information and resources about how to self-evaluate for practitioners working in the third sector and funders
  • tools, such as flowcharts, templates, and route maps are included
  • comprehensive in scope of topics covered
  • guidance documents linked by theme are available
  • case studies of good practice are included to demonstrate evaluation in practice

Limitations

Limitations include:

  • very simple consideration of the basic processes involved in evaluation
  • no consideration of quality of evidence although that is considered in another document
  • some (but not all) resources are limited to evaluation of third sector projects

Additional comments

A Stitch in Time supports the third sector to collect and present evidence about its contribution to Reshaping Care for Older People (RCOP). This 3-year programme ran to March 2015 and focused on third sector organisations working with older people and carers in Lothian.

Food Standards Agency: introduction to evaluation for local authorities

The introduction to evaluation for local authorities: a brief guide to evaluating food enforcement projects and innovative approaches is written to support local authorities in carrying out evaluation in food standards and is for those with some evaluation skills.

Review details

Type of document/accessibility: online PDF

Date: March 2015. Length: 22 pages

Lead author/organisation: Food Standards Agency

Themes

Background to evaluation: Overview of evaluation. Pre-evaluation preparatory work: Ethics. Evaluation processes: Research design and methods, Data collection, Data analysis and interpretation. Types of evaluation: Process evaluation. Additional support: Tools and toolkits, Hiring an evaluator.

Purpose and utility of guidance

This guide introduces the principles of good evaluation, explains some concepts, and outlines things to bear in mind when planning to self-evaluate at local authority (LA) level. The user will also find pointers on where to look for more detailed information.

Primary target audience

Local authorities who are conducting evaluations related to food law.

Contextual information

The Food Standards Agency is a UK government agency responsible for monitoring food safety and hygiene across the UK.

Summary/overview

Includes:

  • why evaluate
  • evaluation step by step
  • what evaluation is (process, impact and the difference between evaluation and monitoring)
  • ethics
  • principles of evaluation (causality, control, intended and unintended consequences, evaluation design and generalisability, large and small sample approaches, and interpreting, presenting and communicating results)
  • further resources

Strengths

Strengths include:

  • clear and concise
  • good basic overview of evaluation with examples of specific projects in food safety
  • good emphasis on showing causality and intended and unintended consequences
  • useful references with links
  • discusses ethics
  • gives research contact at the FSA

Limitations

Limitations include:

  • title does not make clear the subject of the document
  • tone is quite formal
  • could be more specific about intended target audience and their expected skills
  • it is not detailed enough for beginners but too basic for those with experience (its stated readership is local authorities, but LAs will have evaluation experience and researchers and evaluators to offer support)
  • no mention of economic analyses though it does refer to the Magenta Book
  • might be useful to place into the context of other government evaluation work particularly of local authorities (refers to the Magenta Book and Green Book but does not provide enough detail of these and they are both challenging reads for people with little evaluation experience)
  • some of the approaches are dated and not necessarily accurate
  • should refer the user to the research and evaluation support which most likely exists within the LAs and the experience that government has of commissioning research

Additional comments

Would be useful to have outlined the Food Standards Agency’s responsibilities, though perhaps unnecessary for its intended audience.

Emphasis is on research rather than evaluation skills.

Planning evaluability assessments: Department for International Development (DFID)

Planning evaluability assessments is a synthesis of the literature in evaluability of projects in developing countries from 1979 to 2013. Evaluability is defined as ‘the extent to which an activity or project can be evaluated in a reliable and credible fashion’. Recommendations are given for global evaluation.

Review details

Type of document/accessibility: online PDF

Date: October 2013. Length: 58 pages

Lead author/organisation: Department For International Development

Themes

Background to evaluation: Evaluability.

Purpose and utility of guidance

The purpose of this synthesis paper is to produce a practical report that summarises the literature on evaluability assessments and gives recommendations based on this.

Primary target audience

The primary audiences for the report are global evaluation advisers and development practitioners involved in commissioning and carrying out evaluations and evaluability assessments.

Contextual information

DFID is the UK Government’s Department for International Development. It works in developing countries to alleviate world poverty. DFID produced this document to summarise the existing literature on evaluability assessments. It was intended to be a practical guide highlighting the main issues to be considered when conducting or commissioning an evaluability assessment. The document synthesises 133 documents and makes 18 recommendations about the use of evaluability assessments.

Summary/overview

This is a synthesis of literature on evaluability (defined generally as whether it is possible to evaluate, and whether the information is available to do so). In DFID these might be desk-based assessments taking up to 5 days, or country-based assessments taking up to 2 weeks, and they should improve any subsequent evaluation.

It sets out definitions of evaluability, which vary widely, and makes 18 recommendations about how evaluability assessments should be used, based on the synthesised literature.

It states that an evaluability assessment should examine evaluability in principle, given the nature of the project design, and in practice, given data availability to carry out an evaluation and the systems able to provide it. It should also examine the likely usefulness of an evaluation.

The assessment should then affect the design of the evaluation and/or the design of the project.

An evaluability assessment should not be confused with an evaluation.

Many problems are related to weak project design. This can be addressed by engaging stakeholders at the beginning, and evaluability assessments can support project design.

An assessment can take place before a project is approved, to design the monitoring and evaluation framework, to decide whether the evaluation should take place, or to inform the specific design of an evaluation that has now been planned for.

Evaluability assessments should be commissioned locally, as this generates the most local support, and ideally should be carried out by an independent third party.

Evaluability assessments can offer good value for money, if they are able to influence the timing and design of subsequent evaluations.

Outputs of an evaluability assessment should include both assessments and recommendations.

The relatively low costs of evaluability assessments mean that they only need to make modest improvements to an evaluation before their costs can be recovered.
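
To illustrate with invented figures: an assessment costing £5,000 that sharpens the design or timing of a £100,000 evaluation needs to improve that evaluation’s value by only 5% to recover its cost.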

The biggest risk of failure facing an evaluability assessment is likely to be excessive breadth of ambition. It should also be recognised that evaluability assessment may be seen as challenging, if there are already some doubts about a project design.

Strengths

Strengths include:

  • involves several international aid agencies
  • detailed literature review
  • sets out recommendations at each stage
  • considers cost
  • annexes which set out checklists and literature review methods
  • intended for a wider audience beyond DFID

Limitations

Limitations include:

  • focused on international development (although it could be usefully used by others)
  • includes papers from as far back as 1979, which suggests that many are likely to be out of step with current developments in evaluation, although the majority were published in later years
  • covers evaluability rather than evaluation

Additional comments

This covers evaluability, that is, whether it is possible to evaluate a programme feasibly and whether the information needed to evaluate is there, rather than evaluation itself (but it is still relevant to evaluation).

The Magenta Book: guidance for evaluation

The Magenta Book: guidance for evaluation is written for analysts and policymakers working in or with the UK government, in order to support evidence for policymaking. The first half is aimed at policymakers and the second half at analysts, and it is to be used in conjunction with the Green Book.

Review details

Type of document/accessibility: online PDF, also available in print format

Date: 2011. Length: 141 pages

Lead author/organisation: HM Treasury

Themes

Background to evaluation: Overview of evaluation, Assessing the evidence, Policy and evaluation. Pre-evaluation preparatory work: Budgeting, Contracting and communicating, Pilot testing, Ethics, Evaluation planning, Logic modelling, Stakeholder involvement. Evaluation processes: Overview of evaluation processes, Research design and methods, Data collection. Types of evaluation: Process evaluation, Outcome evaluation, Economic evaluation.

Purpose and utility of guidance

Use this document if you are reviewing or assessing a policy or project within government.

Primary target audience

The new guidance recognises evaluation’s place at the heart of policy development, and emphasises that the ability to obtain good evaluation evidence rests as much on the design and implementation of the policy as it does on the design of the evaluation. This gives policymakers much more of the responsibility for securing good evidence than was previously the case.

Contextual information

The Government is committed to improving central and local government efficiency and effectiveness, and in times of constrained public finances, it is even more important to ensure that public funds are spent on activities that provide the greatest possible economic and social return. This requires that policy is based on reliable and robust evidence. High-quality evaluation is vital to this.

HM Treasury’s Green and Magenta Books together provide detailed guidelines, for policymakers and analysts, on how policies and projects should be assessed and reviewed.

The 2 sets of guidance are complementary: the Green Book emphasising the economic principles that should be applied to both appraisal and evaluation, and the Magenta Book providing in-depth guidance on how evaluation should be designed and undertaken. The Magenta Book is the recommended central government guidance on evaluation that sets out best practice for departments to follow.

It is intended to explain:

  • the important issues and questions to consider in how evaluations should be designed and managed
  • the wide range of evaluation options available
  • why evaluation improves policy making
  • how evaluation results and evidence should be interpreted and presented
  • why thinking about evaluation before and during the policy design phase can help to improve the quality of evaluation results without needing to hinder the policy process

Summary/overview

Part A is designed for policymakers. It sets out what evaluation is, and what the benefits of good evaluation are. It explains in simple terms the requirements for good evaluation, and some simple steps that policymakers can take to make a good evaluation of their intervention more feasible. It also discusses some of the issues around the interpretation and presentation of evaluation results, particularly as they relate to the quality of the evaluation evidence.

Part B is aimed at analysts and interested policymakers and is therefore more technical. It discusses in greater detail the main steps to follow when planning and undertaking an evaluation and how to answer evaluation research questions using different evaluation research designs. It also discusses approaches to the interpretation and assimilation of evaluation evidence.

Includes:

  • policy evaluation: what it is, what benefits it can bring, what affects how a policy should be evaluated and where evaluation fits in the policy cycle
  • choosing the right kind of evaluation for a policy, measuring the process and the impact, assessing whether the benefits justified the costs and considering economic evaluation
  • building impact evaluation into policy design and the role of comparison groups in identifying the impact of a policy
  • designing an evaluation, the stages, governance, quality control, level of resources and timing
  • the evaluation framework
  • theory-based evaluation
  • reviewing the existing evidence
  • systematic review
  • rapid evidence assessment
  • meta-evaluation and meta-analysis
  • making sense of existing and new evidence
  • data collection: tools and ethical issues
  • process evaluation: action research and case studies
  • evaluating implementation and delivery
  • research methods
  • empirical impact evaluation
  • drawing together and reporting evaluation evidence
  • how evaluation evidence may be used (drawing together the evaluation evidence, setting the evaluation results in a broader context)
  • future decisions and roll-out
  • reporting and disseminating findings

Strengths

Strengths include:

  • provides a good generic overview of what evaluation is and how to do it in relation to policy development
  • regularly updated
  • relatively simple to understand and does not assume much knowledge of evaluation
  • discusses how things should be changed and developed as a result of an evaluation, rather than ending at the dissemination stage
  • brings the different aspects of evaluation needed for government projects together in one place
  • focuses largely on how to conduct an evaluation that demonstrates the monetary worth of a policy

Limitations

Limitations include:

  • is intended for use by policymakers so its relevance to other public health practitioners is limited
  • presents a simplistic overview of evaluations and omits many of the complexities associated with evaluating complex interventions
  • focuses largely on conducting evaluation to ensure monetary worth rather than on identifying processes
  • sees process evaluation (largely) as an assessment of fidelity and acceptability

Additional comments

Complements the Green Book, which focuses on economic appraisal.

The Green Book: appraisal and evaluation in central government

Guidance on evaluation and review for DFID staff

Guidance on evaluation and review for DFID staff is targeted at those working for the Department for International Development (DFID) who will be commissioning, managing, reporting or responding to evaluation in international development, rather than carrying it out. It gives an overview of evaluation and then gives guidance on how to commission and respond to it.

Review details

Type of document/accessibility: online PDF

Date: 2005. Length: 89 pages

Lead author/organisation and source: GOV.UK

Themes

Background to evaluation: Overview of evaluation. Pre-evaluation preparatory work: Contracting and communications, Evaluation planning, Stakeholder involvement. Evaluation processes: Overview of evaluation process, Research design and methods, Data collection, Data analysis and interpretation. Types of evaluation: Process evaluation, Outcome evaluation, Economic evaluation. Additional support: Tools and toolkits.

Purpose and utility of guidance

To support Department for International Development staff.

Primary target audience

Department for International Development staff

Contextual information

The Department for International Development leads the UK’s work to end extreme poverty and is responsible for:

  • honouring the UK’s international commitments and taking action to achieve the Millennium Development Goals
  • making British aid more effective by improving transparency, openness and value for money
  • targeting British international development policy on economic growth and wealth creation
  • improving the coherence and performance of British international development policy in fragile and conflict-affected countries
  • improving the lives of girls and women through better education and a greater choice on family planning
  • preventing violence against girls and women in the developing world
  • helping to prevent climate change and encouraging adaptation and low-carbon growth in developing countries

Agreement on a common agenda for development in the form of the UN Millennium Development Goals (MDGs) has encouraged the different stakeholders in the international development community to come together to support the main drivers of change in a more coherent way. These drivers of change are global and regional as much as national and involve work with many different stakeholders.

Evaluating this level of development activity requires increasingly complex, multi-stakeholder, thematic evaluations and covers areas such as:

  • conflict reduction and prevention
  • fair international trade
  • environment agreements
  • gender, human rights and democracy
  • a new international financial architecture
  • more effective working arrangements amongst the UN family of institutions and other global institutions
  • a stronger national and international civil society
  • country ownership and leadership of poverty reduction processes
  • attempts to harmonise activities amongst donors

Summary/overview

Chapters 1 to 3 provide detail about evaluation in an international development context. This includes:

  • the need for evaluation of DFID projects
  • what evaluation is
  • why evaluations are conducted
  • types of evaluation (such as formative, summative, self-evaluation, participatory, process, programme, sector and country evaluation)

Chapter 4 provides those about to plan or commission an evaluation with a step by step guide to the evaluation process including:

  • what kind of evaluation is needed
  • how the evaluation can improve development effectiveness
  • what do DFID’s corporate systems require
  • who needs to be involved
  • how should the evaluation be planned and evaluated
  • how the likelihood of risks can be reduced

Some of the trickier issues are discussed at the end of chapter 4.

Chapter 5 discusses terms of reference, and chapter 6 discusses evaluation teams. Chapter 7 is about reporting (who, when, why, and how). Chapter 8 discusses how to use evaluation and share lessons, ensuring that findings and lessons are targeted to potential users.

Chapter 9 highlights the need to ensure that all roles are clear from the onset and describes different possible stakeholders and their roles:

  • primary beneficiaries
  • evaluators
  • other users
  • implementers
  • commissioners

A list of definitions and standards is presented as an appendix. Standard international criteria on what should be covered in an evaluation include 5 measures to be applied to every evaluation:

  • relevance
  • effectiveness
  • efficiency
  • impact
  • sustainability

A glossary of standard evaluation terms is also provided.

Strengths

Strengths include:

  • helpful overview of evaluation, and how to plan evaluations within a very specific context
  • useful for those who want some background to evaluation, those who are new to evaluation but are going to be commissioning or doing one and for those who are more experienced in evaluation
  • set within the international development policy context
  • global perspective
  • specific to those working in evaluation in international development rather than another general evaluation guide
  • it also has a guide to standard evaluation definitions and terms
  • set within country programmes and evaluations (countries where DFID works)
  • some mention of economic effectiveness
  • emphasis on results approach
  • includes costing information and how this works within government procurement
  • discusses ‘soft’ skills in evaluation (for example the pressure of being evaluated, the difficulty of travelling for the evaluator and what to do if sensitive issues emerge)
  • sets out the skills needed by the evaluation team
  • includes checklists at the end of each chapter for the evaluator to check his or her progress

Limitations

Limitations include:

  • the document is very targeted to evaluation of DFID strategies, which may limit its use for workers outside this department
  • whilst the document provides good detail on planning an evaluation, it provides no information on how to complete the evaluation (and assumes an external evaluator will do this)

Additional comments

This guidance is probably too specific to DFID to be of use to anyone working outside this area.

It was withdrawn in June 2015. An updated version is due in the near future, although most of the content is still relevant.

The Green Book: GOV.UK

The Green Book: appraisal and evaluation in central government is a UK government book to support public funds being spent on activities that give the greatest benefits.

Review details

Type of document/ accessibility: online PDF

Date: updated July 2011 (original 2003). Length: 114 pages

Lead author/organisation: HM Treasury

Themes

Background to evaluation: Assessing the evidence. Pre-evaluation preparatory work: Budgeting, Contracting and communications, Pilot testing, Stakeholder involvement. Types of evaluation: Economic evaluation.

Purpose and utility of guidance

To support public funds being spent on activities that give the greatest benefits.

Primary target audience

All appraisers and evaluators: particularly useful for those working for or within government and focusing on an economic approach. Specifically, anyone required to conduct a basic appraisal or evaluation of a policy, project or programme and those seeking to expand their knowledge in this area.

Contextual information

HM Treasury is the financial and economic department of the UK government. It is responsible for allocating budgets, managing all government spending and planning future spend.

Summary/overview

The purpose of the Green Book is to ensure that before a policy, programme or project is adopted, 2 questions are answered:

  • are there better ways to achieve this objective?
  • are there better uses for these resources?

This guidance is designed to promote efficient policy development and resource allocation across government by informing decision-making and improving the alignment of department policies with government priorities.

It emphasises the need to take account of the wider social costs and benefits of proposals, and the need to ensure the proper use of public resources. It also suggests considering equalities by identifying other possible approaches which may achieve similar results. Where possible, monetary values should be attributed to all impacts of any proposed policy, programme or project, and an assessment of the costs and benefits performed for the relevant options.
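
As a purely illustrative sketch of what attributing monetary values and comparing costs and benefits can look like in practice, the example below discounts invented yearly net benefits for two hypothetical options to a net present value. The 3.5% rate is the Green Book’s standard social time preference rate; the options, figures and code are assumptions made for illustration, not part of the Green Book itself.

```python
# Illustrative sketch only: discounting yearly net benefits (benefits minus
# costs) to a net present value (NPV) so options can be compared.
# 3.5% is the Green Book's standard social time preference rate; the two
# options and their cash flows are invented for this example.
DISCOUNT_RATE = 0.035

def npv(net_benefits, rate=DISCOUNT_RATE):
    """Sum net benefits, one per year (year 0 first), discounted to today."""
    return sum(nb / (1 + rate) ** year for year, nb in enumerate(net_benefits))

option_a = [-100, 40, 40, 40]  # £000s: higher upfront cost, larger returns
option_b = [-60, 20, 25, 30]   # £000s: cheaper option, smaller returns

for name, flows in (("A", option_a), ("B", option_b)):
    print(f"Option {name}: NPV = £{npv(flows):,.1f}k")
```

Comparing the discounted totals, rather than the raw cash flows, is one simple way of asking the Green Book’s question of whether there are better uses for the same resources.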

It aims to make the appraisal process throughout government more consistent and transparent and is a guide for all departments and agencies and for any project of any size.

It describes how the economic, financial, social and environmental assessments of a policy, programme or project should be combined.

It contains information on how to conduct advanced appraisal, sets out the analytic foundations, and emphasises that appraisal should be planned for at the beginning of a new policy.

As well as supporting the assessment and evaluation of policy, it should also be used to support decisions on the use or disposal of existing assets, on new or replacement capital resources, and on major procurement.

It contains a technical annex with more details of specific technical issues.

Strengths

Strengths include:

  • a thorough and comprehensive guide to support government spending to maximise benefit
  • it considers the wider economic and social costs including such issues as social inequalities and discounting
  • extremely useful for those working for and with governments

Limitations

Limitations include:

  • it does not include such information as engaging stakeholders and supporting change, nor does it aim to
  • although it claims to be for all assessors and evaluators and for programmes of all sizes, it would need to be adapted for use outside government and in smaller programmes, and the necessary information may often not exist
  • it is most useful for those who already have economic evaluation and general evaluation experience (it is unlikely that the guide could be used without such support)

Best read in conjunction with The Magenta Book

A guide for First Nations: evaluating health programs by Health Canada

A guide for First Nations on evaluating health programs is a basic evaluation guide for First Nations communities in Canada that are taking control of their health programs. It links evaluation and programme management and shows how lessons can be transferred across settings. It links the whole programme right down to individual goals, objectives, indicators and data, and advises on how to plan each stage of an evaluation.

Review details

Type of document/accessibility: online PDF

Date: updated until archived on 25 July 2013. Length: 27 pages

Lead author/organisation: Health Funding Arrangements Division, Program Policy Transfer Secretariat and Planning Directorate, First Nations and Inuit Health Branch (FNIHB), Health Canada

Themes

Background to evaluation: Overview of evaluation. Pre-evaluation preparatory work: Needs assessment, Evaluation planning, Logic modelling. Evaluation processes: Overview of evaluation processes, Defining questions, Choosing outcomes, Describing the intervention, Data collection. Types of evaluation: Community projects.

Purpose and utility of guidance

A basic evaluation guide for First Nations communities that are taking control of their health programs under the Department’s Health Transfer Initiative.

Primary target audience

Those working in First Nations community programs.

Contextual information

Health Canada is the Federal department responsible for helping Canadians maintain and improve their health, while respecting individual choices and circumstances. It has a Health Transfer Initiative which allows First Nations communities to take control of their own community health. This guide is provided within that context and aims to be a basic evaluation guide to help them start evaluating their programs.

Summary/overview

This guide gives basic background on program evaluation, defining what it is and why communities have to evaluate their programmes. It then links evaluation and programme management showing how evaluation can assess whether a programme is achieving what it has set out to achieve and how you can use lessons learned in another setting.

It also:

  • outlines a community needs assessment and health plan and outlines the need to collect the correct data
  • summarises how to plan for program evaluation, linking the overall program goal to the objectives, the activity, the indicator, and to the data (see the sketch after this list)
  • describes how to prepare an evaluation plan, what to do to carry out an evaluation and what should happen after an evaluation (dissemination, report to stakeholders and planning for change)
  • has a clear diagram of a logic model (appendix)
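
The goal-to-data chain the guide describes can be pictured as a simple data structure. The sketch below is purely illustrative: the goal, objective, indicator and data source are invented, and the guide itself presents this chain as a diagram, not as code.

```python
# Purely illustrative sketch (invented example): the guide's chain from
# overall goal down to objectives, activities, indicators and data.
logic_model = {
    "goal": "Improve community cardiovascular health",
    "objectives": [
        {
            "objective": "Increase uptake of blood pressure checks",
            "activity": "Run monthly screening clinics at the health centre",
            "indicator": "Number of residents screened per month",
            "data": "Clinic attendance records",
        },
    ],
}

# Each objective can be traced from the goal to the data that evidences it.
for obj in logic_model["objectives"]:
    print(logic_model["goal"], "->", obj["objective"], "->", obj["data"])
```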

Strengths

Strengths include:

  • clear and easy to read
  • uses relevant examples throughout to illustrate points including clear illustrative diagrams
  • gives quite detailed advice on choosing the evaluation questions, indicators and so on with relevant examples
  • could be used by someone with no evaluation experience
  • takes the user through the steps, explaining in a simple way how to carry out an evaluation, what kind of questions to ask, which indicators to use and what data to collect

Limitations

Limitations include:

  • no discussion of cost or of cost-effectiveness evaluation
  • no discussion of planning for change, although it does mention that evaluation is an ongoing process and that change should occur if necessary as part of the results
  • aimed only at those working in community evaluation

Additional comments

States that it is aimed at First Nations and Inuit communities but could be used by anyone working in community evaluation and in other countries.

There are other documents accessible from this web page which describe the Health Transfer Initiative, but these are not specifically related to evaluation.

Joseph Rowntree Foundation: evaluating community projects

Guidelines in the ‘Evaluating community projects: a practical guide’ were initially developed as part of the JRF Neighbourhood Programme, which is a programme made up of 20 community or voluntary organisations all wanting to exercise a more strategic influence in their neighbourhood. The guidelines were originally written to help these organisations evaluate their work. They provide step-by-step advice on how to evaluate a community project which will be of interest to a wider audience.

Review details

Type of document/accessibility: online PDF

Date: 2003. Length: 12 pages

Lead author/organisation: Joseph Rowntree Foundation, Marilyn Taylor, Derrick Purdue, Mandy Wilson and Pete Wilde

Themes

Background to evaluation: Overview of evaluation. Types of evaluation: Community projects. Additional support: Tools and toolkits.

Purpose and utility of guidance

To support community organisations to carry out evaluation.

Primary target audience

Community and voluntary organisations.

Contextual information

The JRF is an independent development and social research charity supporting a programme of research and development projects in housing, social care and social policy.

Summary/overview

These guidelines provide step-by-step advice on how to evaluate a community project.

How to evaluate: a step-by-step approach:

  • step 1: review the situation
  • step 2: gather evidence for the evaluation
  • step 3: analyse the evidence
  • step 4: make use of what you have found out
  • step 5: share your findings with others

Strengths

Strengths include:

  • useful for community projects
  • clear
  • annex on explaining findings to funders
  • emphasis on using the results

Limitations

Limitations include:

  • now 14 years old
  • basic guide only
  • lacks economic effectiveness information
  • one of many similar guides

Medical Research Council: developing and evaluating complex interventions

Developing and evaluating complex interventions: new guidance is aimed at researchers.

Review details

Type of document/accessibility: online PDF

Date: 2008. Length: 39 pages

Lead author/organisation: Medical Research Council (MRC)

Themes

Background to evaluation: Assessing the evidence, Evidence-based medicine, Using theory in evaluation. Pre-evaluation preparatory work: Budgeting, Pilot testing, Logic modelling. Evaluation processes: Defining questions, Choosing outcomes, Describing the intervention, Research design and methods, Data analysis and interpretation. Types of evaluation: Process evaluation, Outcome evaluation, Economic evaluation, Fidelity.

Purpose and utility of guidance

Intended to help researchers choose and implement appropriate methods for evaluating complex interventions.

Primary target audience

Producers and users of research (such as researchers, research funders, journal editors)

Contextual information

The MRC supports research across the entire spectrum of medical sciences, in universities and hospitals, in the UK and Africa. The guidance was produced by researchers in the MRC Population Health Sciences Research Network.

Summary/overview

Provides guidance on the development, evaluation and implementation of complex interventions to improve health. It updates a previous MRC framework, extending it to non-experimental methods, and complex interventions outside the health service. There are a series of questions for each stage.

When developing an intervention, check:

  • are you clear about what you are trying to do, what outcome you are aiming for, and how you will bring about change
  • is there a coherent theoretical basis, and has it been used to develop the intervention
  • can you describe the intervention fully, so that it can be implemented properly for the purposes of your evaluation, and replicated by others
  • does the existing evidence suggest that it is likely to be effective or cost effective
  • can it be implemented in a research setting, and is it likely to be widely implementable if the results are favourable (if not, it needs developing further)

Piloting and feasibility checklist:

  • have you done enough piloting and feasibility work to be confident that the intervention can be delivered as intended
  • can you make safe assumptions about effect sizes and variability, and rates of recruitment and retention in the main evaluation study (see the sketch after this checklist)
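
To make these assumptions concrete, here is a minimal sketch, under invented assumptions, of how pilot estimates of effect size and variability, plus an assumed retention rate, might feed a main-study sample size calculation. The formula is the standard two-arm comparison of means; none of the figures come from the MRC guidance.

```python
# Minimal sketch (standard two-arm comparison of means; all figures
# invented): turning pilot estimates of effect size and variability,
# plus an assumed retention rate, into a main-study sample size.
from math import ceil
from statistics import NormalDist

def n_per_arm(delta, sd, alpha=0.05, power=0.80):
    """Approximate participants needed per arm to detect a mean
    difference of `delta` given outcome standard deviation `sd`."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = z.inv_cdf(power)           # desired power
    return ceil(2 * ((z_alpha + z_beta) ** 2) * (sd / delta) ** 2)

n = n_per_arm(delta=2.0, sd=5.0)  # pilot suggested a 2-point gain, SD 5
recruit = ceil(n / 0.8)           # inflate for 20% expected attrition
print(n, recruit)                 # 99 per arm analysed, 124 recruited
```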

Evaluating the intervention:

  • what design are you going to use, and why
  • is an experimental design preferable and if so, is it feasible
  • suggests alternative designs that might be better under different circumstances
  • have you set up procedures for monitoring delivery of the intervention and overseeing the conduct of the evaluation and considered process evaluation and economic evaluation

Reporting:

  • has it been reported appropriately
  • have you updated your systematic review

Enabling replication studies or wider-scale implementation requires a detailed account of the intervention as well as a standard report of the evaluation methods and findings.

Implementation:

  • are your results accessible to decision-makers and presented persuasively
  • are your recommendations detailed and explicit

Ongoing monitoring should be undertaken to detect adverse events or long-term outcomes that could not be observed directly in the original evaluation, or to assess whether the effects observed in the study are replicated in routine practice.

Strengths

Strengths include:

  • excellent questions that researchers should ask themselves at the start of an evaluation
  • good clear case studies of complex evaluations
  • focus on complex interventions which reflect real world situations
  • good for experienced researchers as it provides technical support rather than an evaluation overview
  • researchers are often involved in evaluation without having evaluation skills, and other evaluation guides may be too general for them
  • includes a case study on economic effectiveness
  • wide consultation

Limitations

Limitations include:

  • mainly aimed at researchers
  • does not discuss evaluation theory
  • does not include a qualitative description or qualitative case study

Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ 2008;337:a1655

Additional comments

Update of 2000 Framework.

Medical Research Council: process evaluation of complex interventions

The Process evaluation of complex interventions document provides researchers, practitioners, funders, journal editors and policymakers with guidance in planning, designing, conducting and appraising process evaluations of complex interventions. It focuses on health but is relevant to other domains. It is a complex document, most useful for researchers.

Review details

Type of document/accessibility: online PDF

Length: 134 pages

Lead author/organisation: MRC Population Health Science Research Network

Themes

Background to evaluation: Using theory in evaluation. Pre-evaluation preparatory work: Developing a protocol, Budgeting, Evaluation planning, Logic modelling, Stakeholder involvement. Evaluation processes: Choosing outcomes, Describing the intervention, Research design and methods, Data analysis and interpretation. Types of evaluation: Process evaluation, Outcome evaluation, Fidelity.

Purpose and utility of guidance

Guidance on evaluation of complex interventions.

Primary target audience

Primarily researchers but also practitioners, funders, journal editors and policymakers.

Contextual information

MRC Population Health Science Research Network (PHSRN) was established by UK Medical Research Council in 2005 to focus on methodological knowledge transfer in population health sciences.

Summary/overview

Provides an overview of process evaluations divided into 2 sections: process evaluation theory and process evaluation practice. Written from the perspective of researchers who have experience of process evaluation of complex health interventions.

It states why process evaluation is necessary, what it is, and how it should be done:

Process evaluation allows policymakers, practitioners and researchers to identify which aspects of interventions are effective at a fine-grained level, so that the aspects that are not effective can be improved. If an intervention is effective in one context, what does a policymaker need to know so that it will be effective in another? What were the mechanisms through which the effect was achieved, and how might this vary by population and setting, rather than looking only at a straightforward outcome?

If an intervention is not effective, is the failure due to the intervention itself or to poor implementation? What information do systematic reviewers need to be confident that they are comparing interventions which were delivered in the same way, and to understand why the same intervention has different effects in different contexts?

Process evaluations aim to provide the more detailed understanding needed to inform policy and practice by examining implementation, the mechanisms of impact and the context.

Process evaluations may be conducted within feasibility testing phases, alongside evaluations of effectiveness, or alongside post-evaluation scale-up. When designing and conducting a process evaluation, evaluators should:

  • describe the intervention and clarify causal assumptions
  • identify the main uncertainties and potential questions
  • agree scientific and policy priorities
  • identify previous process evaluations
  • select a combination of quantitative and qualitative methods, and consider collecting data at multiple time points to capture change

The guide includes information on:

  • analysing: what is necessary when analysing process data
  • reporting: how findings should be reported in line with existing reporting guidelines, publishing journal articles, and how findings can be widened into other areas
  • how and when to share information with policymakers and practitioners
  • the relationship between evaluation teams
  • resources and staffing
  • defining the intervention and clarifying main assumptions
  • what do we know already and what will this study add
  • core aims and research questions
  • selecting appropriate methods
  • analyses of different methods
  • integration of process evaluation and outcomes findings
  • reporting findings of a process evaluation both to a wider audience and to academic journals

Strengths

Strengths include:

  • detailed description of process evaluation
  • includes case studies
  • theory and practice are dealt with separately, so users can find the area they are most interested in
  • directs readers to sections which are most relevant for them
  • focuses on a deeper understanding of interventions and a more fine-grained account of how they have an effect
  • a more complex level suitable for experienced researchers (many evaluation guides are basic and aimed at beginners)
  • good discussion of relationships which other guides do not cover

Limitations

Limitations include:

  • not dated
  • unclear whether funders (for example) would read such a complex and lengthy document (although there are subsequent, simpler guides)

Additional comments

Best read alongside other MRC evaluation documents.

It might be useful to draw out lessons for non-health audiences and publish these in relevant journals, if this has not already been done.

Medical Research Council: using natural experiments to evaluate population health interventions

The Using natural experiments to evaluate population health interventions: guidance for producers and users of evidence guide aims to give general guidance on the range of approaches available for natural experiments (for example, policy changes) in population health and the circumstances in which they are likely to be useful. It also aims to bring together the methodological literature, which is currently dispersed across disciplines.

Review details

Type of document/accessibility: online PDF

Length: 29 pages

Lead author/organisation: Medical Research Council

Themes

Pre-evaluation preparatory work: Budgeting. Evaluation processes: Choosing outcomes, Describing the intervention, Research design and methods, Data collection, Data analysis and interpretation

Purpose and utility of guidance

General guidance on evaluation of population health interventions.

Primary target audience

Producers, users, funders and publishers of evidence.

Contextual information

The MRC supports research across the entire spectrum of medical sciences in the UK and the MRC unit in Africa. This guide was produced by those working in population health.

Summary/overview

The guide describes the evaluation of interventions that were not under the control of the researchers (such as new policies) and methods of drawing conclusions about their impact. The guide provides a detailed overview of:

  • what natural experiments are
  • examples in public health (section 2)
  • a review of ways of improving the use of natural experiments and how this can be useful for policymakers (section 3)
  • guidelines for improving design, analysis and reporting (section 4)

It states that natural experiments are those which are not under the control of researchers. They might occur naturally or result from interventions or policies. These can be useful for researchers, who can use variation in exposure to analyse impact.

In order to do this, the variation in exposure and outcomes must be analysed using methods that attempt to make causal inferences: for example, analyses of the effects of clean air legislation, indoor smoking bans or economic downturns. The results of these analyses could lead to changes in policy or support policies in similar areas.
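
As a purely illustrative sketch of one such method (difference-in-differences, with all figures invented), the change in an exposed region is compared with the change in a similar unexposed region, which stands in for what would have happened without the policy:

```python
# Purely illustrative sketch (all figures invented): a difference-in-
# differences estimate, one common way of attempting causal inference
# from a natural experiment such as a regional indoor smoking ban.
# Outcome: hospital admissions per 100,000 people, before and after.
exposed_before, exposed_after = 220.0, 190.0    # region with the ban
control_before, control_after = 215.0, 205.0    # similar region, no ban

change_exposed = exposed_after - exposed_before      # -30
change_control = control_after - control_before      # -10

# The control region's change stands in for what would have happened in
# the exposed region without the ban (the "parallel trends" assumption).
effect = change_exposed - change_control             # -20 per 100,000
print(f"Estimated effect of the ban: {effect:+.0f} admissions per 100,000")
```

The estimate is only as credible as the parallel trends assumption behind it, which is one reason the guidance stresses the care required to minimise bias.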

The case for a natural experimental study is strongest when there is scientific uncertainty about the size or nature of the effects of the intervention or when for practical, political or ethical reasons, the intervention cannot be introduced as a true experiment. For example, when comparing groups who have had different levels of exposure to something.

A ‘value of information’ analysis can help to make a convincing case for a natural experimental study, and economic evaluation may make the results more useful for decision-makers.

Randomised controlled trials and natural experimental studies are both subject to similar threats to validity. However, it is more difficult to minimise bias in natural experimental studies. Natural experiments can be used to study more subtle effects, so long as a suitable source of variation in exposure can be found, but the design and analysis become more challenging.

Care is required to minimise bias and different methods are outlined to help with this.

The plausibility of causal inferences should be tested.

There are guidelines which can be followed to support the use of natural experiments. For example, STROBE or TREND. These state that the approach should be clearly identified, the assignment, process and intervention described and the methods clearly stated. Bias should be reduced using both qualitative and quantitative methods. Wherever possible, the results should be compared with those of other evaluations of similar interventions, paying attention to any associations between effect sizes and variations in evaluation methods and intervention design, content and context.

Natural experimental approaches have taught us a great deal: for example, the effects of the replacement of coal gas with natural gas and of the smoking ban. However, planned experiments should not be discounted, and often randomised controlled trials will be the only way to test effects validly.

Priorities for the future are to build up experience of promising but lesser-used methods and to improve data sources by, for example, including good routine data from population surveys and administrative data.

Strengths

Strengths include:

  • few of the other evaluation guides address natural experiments explicitly
  • reflects real world of evaluation
  • written by experts in the field

Limitations

Limitations include:

  • date of publication not clearly stated
  • UK focused
  • states that it is for policymakers and so on but it is written very much in the style of an academic report for a research audience
  • would be useful to have a plain English version

Additional comments

Read in conjunction with other MRC evaluation guides from the same group.

National Science Foundation: user-friendly handbook 2002

The 2002 User-Friendly Handbook for Project Evaluation is designed so that each chapter addresses a specific step in the evaluation process:

  • what evaluation is
  • external evaluators
  • preparation for evaluation
  • what to include
  • what information is needed to meet objectives
  • making sense of data
  • reporting and disseminating

Review details

Type of document/accessibility: online PDF

Date: 2002. Length: 86 pages

Lead author/organisation: Directorate for Education and Human Resources: Division of Research, Evaluation and Communication, National Science Foundation

Themes

Background to evaluation: Overview of evaluation, Using theory in evaluation. Pre-evaluation preparatory work: Budgeting, Evaluation planning, Logic modelling, Stakeholder involvement. Evaluation processes: Overview of evaluation process, Defining questions, Choosing outcomes, Describing the intervention, Research design and methods, Data collection, Data analysis and interpretation. Types of evaluation: Overview of types of evaluation. Additional support: Hiring an evaluator.

Purpose and utility of guidance

This handbook is aimed at those without evaluation knowledge or skills and aims to blend technical knowledge with common sense.

Primary target audience

Specifically written for National Science Foundation (NSF) and its stakeholders to provide managers with a basic guide for the evaluation of NSF’s educational programs.

Contextual information

The NSF is an independent federal agency in the US which aims ‘to promote the progress of science; to advance the national health, prosperity, and welfare; to secure the national defence’. It funds a quarter of university science research in the US.

Summary/overview

This is a basic handbook of evaluation for those who have no experience in this area.

Section 1:

  • describes what evaluation is
  • types of evaluation
  • how evaluation differs from other data collection

Section 2 describes the different steps of evaluation:

  • developing a model and engaging stakeholders
  • developing measurable questions and objectives
  • choosing a design and methodological approach
  • choosing who will be included in the evaluation

Section 3 describes issues with data collection including:

  • time
  • cost
  • rigour
  • mixed methods

Section 4 emphasises the importance of cultural issues in order to ensure that the evaluation is suitable for different groups and that the findings are used by these groups.

Strengths

Strengths include:

  • clearly written
  • good basic level introduction to research and evaluation, defining such terms as sampling and quantitative and qualitative research
  • discusses the cost and time of using different approaches in evaluation
  • discusses making evaluation culturally responsive and how this will help to ensure that the results are used
  • gives further sources of support
  • has an appendix on how to find an evaluator

Limitations

Limitations include:

  • could give more detail on planning for change
  • evaluation theory has developed since 2002
  • could be clearer about the country in which it is published
  • no consideration of cost-effectiveness as part of evaluation of a programme
  • combines basic research skills and basic evaluation skills, but at such a basic level that someone with only these skills would not be able to carry out research or evaluation alone
  • generally more focused on research skills than evaluation skills
  • not all evaluation requires sampling people and doing interviews and questionnaires. Data can be available from other sources and systems can be evaluated without research participants
  • describes dissemination as the final stage and does not discuss implementation of change

Additional comments

This handbook assumes no evaluation or research knowledge and much of it is concerned with giving its readers knowledge of basic research terms and skills.

NHS: evaluation toolkit 2003

The National Health Service evaluation toolkit 2003 aims to facilitate the use and creation of evidence by building evaluation into the commissioning cycle in NHS Avon.

Avon Primary Care Research Collaborative supports those working in commissioning and provides support, advice, guidance and training to commissioners, programme and project managers, and members of health integration teams who are planning to undertake an evaluation. This support is at all stages of the evaluation, including design, delivery, dissemination and implementation of findings. They also work with researchers in evaluation research.

Review details

Type of document/accessibility: website

Date: 2003 to 2015

Lead author/organisation: National Health Service (NHS), Avon Primary Care Research Collaborative (APCRC)

Themes

Background to evaluation: Overview of evaluation, Assessing the evidence, Using theory in evaluation. Pre-evaluation preparatory work: Developing a protocol, Evaluation planning, Stakeholder involvement. Evaluation processes: Overview of evaluation processes, Defining questions, Research design and methods. Types of evaluation: Overview of types of evaluation. Additional support: Tools and toolkits, Quality assurance, Training.

Purpose and utility of guidance

NHS evaluation toolkit.

Primary target audience

Specifically for commissioners in the Avon area, although some tools have wider applicability.

Contextual information

The Avon Primary Care Research Collaborative is responsible for primary care related research across the Bristol, North Somerset and South Gloucestershire area. This includes supporting service evaluation.

The overall aims are to:

  • build a portfolio of NHS-relevant research through close working with colleagues from local universities
  • ensure that high-quality evaluation is routinely considered as part of every commissioning cycle

Summary/overview

The website starts with an overview of what evaluation is and why it is needed. A number of services are offered by the APCRC, and these are also listed.

A tab termed ‘evaluation methodology’ provides an overview of the difference between qualitative and quantitative approaches, and the difference between monitoring and evaluation. Links are provided to PDF documents that discuss:

  • different designs
  • data collection
  • costing
  • commissioning an external evaluation
  • writing and disseminating findings
  • training courses in the local area/at local institutes

The site presents a 10-page document entitled ‘Evaluation for commissioners – tool kit’. This includes:

  • top tips
  • important questions
  • evaluation process flow chart
  • scoring grid to assess the level of evaluation required
  • checklist for planning service evaluation
  • decision tree for evaluation methods
  • quality assurance criteria grid
  • template for an evaluation protocol

Patient and public involvement is explained and encouraged. Links to organisations such as INVOLVE are provided. The final tab contains links to other organisations, including the UK Evaluation Society, Charities Evaluation Services and the Research Methods Knowledge Base.

Strengths

Strengths include:

  • provides a clear description of the basic processes involved in evaluation
  • useful and practical resources to support evaluation (for example, checklists and flowcharts)
  • provides support and links to academic establishments to support evaluation for those within the local area
  • provides advice on how to commission external evaluators
  • provides bespoke evaluation training
  • provides support in patient and public involvement
  • gives links to other places where you can get evaluation support
  • provides a tool to help people decide if their project is evaluation, audit or research

Limitations

Limitations include:

  • advice may be too simple to enable practitioners to conduct a comprehensive evaluation without additional support
  • some services may only be available to people in the Avon area

NHS Health Scotland: LEAP for health

LEAP (Learning, evaluation and planning) for health focuses on evaluation of community projects. It aims to integrate evaluation into community projects, with an emphasis on local needs, a process of change, and outcomes to improve the quality of life.

Review details

Type of document/accessibility: online PDF

Date: 2003. Length: 86 pages

Lead author/organisation: NHS Health Scotland

Themes

Background to evaluation: Using theory in evaluation. Pre-evaluation preparatory work: Budgeting, Contracting and communication, Needs assessment, Evaluation planning, Stakeholder involvement. Evaluation processes: Defining questions, Choosing outcomes, Describing the intervention, Research methods and design, Data Collection. Types of evaluation: Outcome evaluation, Community projects. Additional support: Training.

Purpose and utility of guidance

A resource for those involved in promoting health and well-being in community settings, whether in community projects, primary care, clinical practice, health promotion or public health.

Primary target audience

Those involved in promoting health and well-being in communities.

Contextual information

NHS Health Scotland is a national health board working with public, private and third sectors to reduce health inequalities and improve health.

The Learning, evaluation and planning (LEAP) approach is based on the assumption that agencies and workers should plan and evaluate their own work in partnership with one another and with those they aim to assist. All such people are referred to in this resource as stakeholders.

It emphasises self-evaluation, and is based on the assumption that evaluation should be an integral part of community health promotion; that providers and receivers should be involved; that the main aim should be for continual improvement; and lessons learned should inform future work.

The document criticises the traditional approach to evaluation as not focusing on local needs, emphasising money above all else and giving too little attention to processes of change. The LEAP approach suggests that the approach to evaluation should be characterised by more attention being paid to the process of change and to the outcomes that improve the quality of life. This is the approach to evaluation that they have adopted. It was developed and tested with community participants.

Summary/overview

The purpose of this resource is to:

  • describe and explain the main components of the LEAP framework
  • set out criteria for planning and evaluating processes and tasks
  • discuss some of the issues involved in using the framework, provide information on methods and techniques for using the framework
  • consider how evaluation can inform planning, management /supervision, review and development

The document:

  • has a strong emphasis on the theoretical basis of evaluation and how this relates to practice
  • its main aims are to clarify what is involved in community health and well-being, help stakeholders to plan their project and evaluate and improve it (focusing on the difference it had made to individuals and communities)
  • sets out what community health and wellbeing is and states that this work focuses on enabling people to improve and maintain their health through community actions
  • highlights the policy context though does not refer to specific policy
  • focuses on the importance of improving outcomes and doing this by involving people, using evidence and learning from experience
  • outlines different approaches to evaluation and how to plan this into programmes at whatever level required
  • emphasises community health and well-being by involving people, building networks and strengthening communities and sets out what barriers there might be to participation
  • describes the 5 stages of LEAP: what needs to change, how will we know, how will we go about it, how will we know we did it, did we do it and was it useful, and how do we review and improve
  • supports reader to identify indicators and to focus on outcomes that show improvement
  • provides action planning tables, worksheets and worked examples and practice notes as well as reading and resources

Strengths

Strengths include:

  • step by step guide which is clear and easy to read
  • simple and clear approach to determining what a project is doing and if it is working
  • focuses on community health
  • discusses the asset-based approach of community improvement
  • LEAP emphasises the importance of processes as well as outcomes
  • importance of stakeholder involvement is strongly emphasised
  • focus is on community improvement rather than on financial outputs
  • aims to provide evidence on how a community intervention has had a positive effect
  • provides worked examples
  • references further reading and resources
  • resource developed and tested with community participants

Limitations

Limitations include:

  • NHS Scotland focused though could be used by others in community health
  • no mention of economic evaluation or cost-effectiveness
  • no mention of comparing projects
  • limited to evaluation of community projects
  • provides a very clear definition of community health and wellbeing - the document may be of limited use for those working outside that definition
  • could provide more details on relevant policies

Learning, evaluation and planning (LEAP) framework

LEAP plan and evaluate website

NHS Health Scotland: mental health improvement

Guide 1: Evidence-based practice is the first in a series of 4 guides written for the NHS Scotland mental health context, which aim to encourage, support and improve standards in the evaluation of mental health improvement initiatives:

  • evidence-based practice
  • measuring success
  • getting results
  • making an impact

Review details

Type of document/accessibility: online PDF

Date: March 2005. Length: 32 pages

Lead author/organisation: NHS Health Scotland

Themes

Background to evaluation: Overview of evaluation, Assessing the evidence, Evidence based medicine, Common challenges. Additional support: Tools and toolkits.

Purpose and utility of guidance

A series of guides on mental health evaluation.

Primary target audience

It is aimed specifically at those working in the mental health field who are doing evaluations to support them to use evidence to inform the design and delivery of interventions.

Contextual information

Health Scotland is the health improvement agency for Scotland and compiled this on behalf of the National Programme for Improving Mental Health and Well-being.

Summary/overview

The first of the 4 guides provides information on how existing literature can be used to inform the design and delivery of interventions. It considers issues around evidence, and how evidence can be used to inform intervention development.

The second guide discusses how to develop indicators to measure progress. This includes how to develop indicators that are robust, valid and reliable, and also provides links to existing indicators.

The third guide, ‘Getting results’, is about planning and implementing an evaluation. It provides an overview of the main stages involved in planning and implementing an evaluation.

It includes:

  • involving stakeholders
  • agreeing objectives
  • choosing methods
  • collecting data
  • implementation issues
  • links to further sources of information

The final guide is ‘Making an impact’. This provides details on how to analyse and interpret data, communicate findings, and add to the evidence base.

Strengths

Strengths include:

  • good overview of the area
  • illustrates theory with useful examples
  • clear and easy to read
  • provides links to resources
  • provides a glossary of terms
  • available in different formats

Limitations

Limitations include:

  • possibly too simple to use alone
  • specifically for mental health
  • specifically for the Scottish context although could be used in other contexts
  • it is likely that this field has developed in the 10 years since this was published
  • requires some knowledge of evaluation and mental health

Guide 1: evidence-based practice

Guide 2: measuring success

Guide 3: getting results

Guide 4: making an impact

NHS Health Scotland: evaluation

‘Monitor progress and evaluate’ is a website providing links to the Outcomes Framework website and further resources. The Outcomes Framework provides interactive resources to help link activities to outcomes in a number of areas (reducing alcohol-related harm, tobacco control, mental health improvement, health and work, healthy weight, and the national parenting strategy). Frameworks include a series of tools: outcomes triangle, logic models and results chains.

Review details

Type of document/accessibility: website

Date: current

Lead author/organisation and source: NHS Scotland

Themes

Background to evaluation: Evaluability. Pre-evaluation preparatory work: Logic modelling. Evaluation processes: Choosing outcomes. Types of evaluation: Overview of types of evaluation, Process evaluation, Outcome evaluation. Additional support: Tools and toolkits.

Purpose and utility of guidance

To support NHS workers in outcome planning and evaluation processes.

Primary target audience

Local partners in the NHS.

Contextual information

The team evaluate Scotland’s policies and programmes (intended to improve health and reduce health inequalities) to generate a better understanding of how they work, who they reach and what effects they have.

Summary/overview

The website provides links to the Outcomes Framework website and further resources.

The Outcomes Framework provides interactive resources to help link activities to outcomes for a number of areas:

  • reducing alcohol-related harm
  • tobacco control
  • mental health improvement
  • health and work
  • healthy weight
  • national parenting strategy

Frameworks include a series of tools: outcomes triangle, logic models and results chains.

Strengths

Strengths include:

  • practical resources are provided to support evaluators in identifying and evaluating links between what is done and what is achieved
  • useful for those evaluating programmes on reducing alcohol-related harm, tobacco control, mental health improvement, health and work, healthy weight, or national parenting strategy
  • simple to understand
  • interactive
  • includes a glossary

Limitations

Limitations include:

  • focus is on processes rather than evaluation as a whole
  • only useful for the areas covered

NICE Behaviour change: principles for effective interventions

‘Behaviour change: principles for effective interventions’ (2007) is NICE’s formal guidance on the generic principles that should be used as the basis of initiatives to support attitude and behaviour change. The evaluation section focuses on community evaluation and what needs to be in place to carry this out.

Review details

Lead author/organisation and source: National Institute for Health and Care Excellence (NICE)

Date: October 2007

Themes

Purpose and utility of guidance

This guidance provides a set of generic principles that can be used as the basis for planning, delivering and evaluating public health activities aimed at changing health-related behaviours.

Primary target audience

The guidance is for NHS and non-NHS professionals and others who have a direct or indirect role in, and responsibility for, helping people change their health-related knowledge, attitudes and behaviour. This includes national policymakers in health and related sectors (including those with a responsibility for planning or commissioning media, marketing or other campaigns), commissioners, providers and practitioners in the NHS, local government, the community and voluntary sectors.

It is also relevant for the research community (including those who oversee research funding), social and behavioural scientists and health economists working in the area of health-related knowledge, attitude and behaviour change.

Contextual information

NICE is a UK government agency which provides national guidance and advice to improve health and social care. The guide was developed to fill a gap: prior to its publication there was no strategic approach to behaviour change across government, the NHS or other sectors, and many different models, methods and theories were used in an uncoordinated way.

This guidance provides a systematic, coherent and evidence-based approach, considering generic principles for changing people’s health-related knowledge, attitudes and behaviour, at individual, community and population levels.

Summary/overview

This document is divided into different sections describing recommended actions for:

  • planning
  • delivery
  • evaluation
  • implementation of interventions to change behaviour
  • recommendations for research

It includes:

  • background on health inequalities and changing behaviour
  • overview of what is required when planning a behaviour change intervention, which is also required for evaluation
  • emphasis on the need to work with the community to ensure an intervention will work in that setting and to avoid stigmatising behaviour

Turning to the smaller part of the document specifically concerned with evaluation, the guide states that:

  • time and resources should be set aside for evaluation
  • the size and nature of the intervention, its aims and objectives and the underlying theory of change used should determine the form of evaluation
  • a distinction should be made between monitoring and evaluation
  • complex interventions can be evaluated if a staged approach is used
  • formal outcome and process evaluation is needed, despite how challenging this can be
  • an effective evaluation is based on clearly defined outcome measures - at individual, community and population levels, as appropriate
  • qualitative research looking at the experience, meaning and value of changes to individuals may also be appropriate (methods and outcome measures are identified during the planning phase)

Strengths

Strengths include:

  • rather than focussing on a specific type of public health intervention, it focusses on how to change health-related behaviours of people using individual, community or population level interventions
  • specific sets of theories are provided on which the recommendations are based, with abundant references for people who want to explore these
  • very clear outline: every chapter, using a bullet-point approach, describes the target audience and recommended actions for different types of intervention

Limitations

Limitations include:

  • a relatively short document, which is an advantage, but it offers only generic recommendations
  • little of this is focused on evaluation
  • difficult for one document to meet the needs of such a wide audience
  • is not a stand-alone guide, but needs to be supplemented with more specific/detailed reading

WHO, Seventh futures forum on unpopular decisions in public health

NICE, Behaviour change: general approaches

PHE: evaluation of weight management, physical activity and dietary interventions

The Evaluation of weight management, physical activity and dietary interventions: an introductory guide document follows the publication of 3 standard evaluation frameworks covering weight management, physical activity and dietary interventions. It is intended to provide an overview of evaluation which complements the finer detail provided in the frameworks.

Review details

Type of document/accessibility: online PDF

Date: 2015

Length: 35 pages

Lead author/organisation and source: National Obesity Observatory (NOO), PHE

Themes

Background to evaluation: Overview of evaluation. Pre-evaluation preparatory work: Ethics, Needs assessment, Evaluation planning, Logic modelling. Evaluation processes: Overview of evaluation processes, Defining questions, Choosing outcomes, Describing the intervention, Data collection, Data analysis and interpretation. Types of evaluation: Process evaluation, Outcome evaluation, Fidelity. Additional support: Tools and toolkits.

Purpose and utility of guidance

This guide provides a general introduction to the evaluation of public health programmes. It is described as a useful first step for anyone new to the topic or those intending to refresh their knowledge. It is intended to be used alongside the standard evaluation frameworks (below), which detail the information and data that should be collected.

Primary target audience

Primarily for practitioners interested in evaluation of physical activity, weight management and dietary programmes.

Contextual information

The aim of the National Obesity Observatory (NOO) is to signpost and report on obesity and related surveillance data. The aim of this series of guides is to ensure results of interventions are comparable across settings, populations and types of intervention. This means that public health commissioning can be carried out more effectively, which is important in a time of restricted public finances.

The current version of CoRE covers:

  • NOO’s standard evaluation frameworks (SEF)
  • evaluation data collection tool (including details of local interventions)
  • other evaluation guidance
  • reports from evaluation of nationally-initiated schemes and evaluation websites

The National Obesity Observatory (NOO) is now part of Public Health England (PHE), an executive agency of the Department of Health. The PHE obesity website provides a single point of contact for wide-ranging authoritative information on data, evaluation, evidence and research related to weight status and its determinants.

PHE have developed 3 standard evaluation frameworks for weight management interventions, physical activity and dietary interventions as well as a collection of resources on evaluation (CoRE). This document builds on the ‘Standard evaluation framework (SEF) for weight management interventions’, published April 2009. It takes the principles described in the original SEF and applies them to dietary interventions.

Summary/overview

This short document provides an overview of evaluation including:

  1. What is evaluation and why is it important?
  2. Evaluation questions
  3. An overview of process and outcome evaluations
  4. Where to seek help
  5. A step by step guide to evaluation – the evaluation cycle and 6 main phases of evaluation:
  • planning: including stakeholder involvement and budgeting for an evaluation
  • clarifying objectives: including the development of logic models
  • selecting indicators: including process, short, medium and long term indicators
  • choosing methods and data collection: including formative, process and outcome evaluation (data collection methods including qualitative and quantitative data, and ethics are discussed)
  • analysing data
  • reflecting and sharing: including dissemination approaches
  6. An evaluation checklist
  7. Further reading

Strengths

Strengths include:

  • provides an excellent introduction to evaluation
  • simple and clear to use
  • covers all main elements involved in evaluation
  • practical and relevant to real-world settings
  • provides links to checklists and standard frameworks for dietary interventions
  • provides links to further reading

Limitations

Intended to be used by those working in obesity.

PHE: a collection of resources on evaluation (CoRE)

Review details

Type of document/accessibility: webpage

Date: 2016

Lead author/organisation: National Obesity Observatory (now part of Public Health England)

Themes

Additional support: Training.

Purpose and utility of guidance

To provide information and resources to support practitioners with an interest in the evaluation of interventions related to obesity, overweight, underweight and their determinants.

Primary target audience

Policy makers and practitioners involved in obesity and related issues.

Contextual information

PHE aims to improve the health and wellbeing of the nation, and reduce health inequalities. This document follows the publication of 3 standard evaluation frameworks covering weight management, physical activity and dietary interventions. It is intended to provide an overview of evaluation which complements the finer detail provided in the frameworks.

Summary/overview

CoRE is divided into 7 sections:

  • standard evaluation frameworks
  • evaluation data collection tool
  • database of interventions
  • evaluation guidance
  • evaluation reports
  • evaluation websites
  • evaluation training

The ‘standard evaluation frameworks’ section notes 3 SEFs:

  • weight management interventions (2009)
  • physical activity (PA) interventions (2012)
  • dietary interventions (2012)

The aim of the SEFs is to support high quality, consistent evaluation of weight management, diet and PA interventions for the evidence base. The SEFs provide introductory guidance on the principles of evaluation, list and describe ‘essential’ and ‘desirable’ criteria, and give information on collecting data and identifying the target audience. A link is given to a poster presented at the PHE Conference 2013.

The ‘evaluation data collection tool’ section describes:

  • what the tool is for
  • why it should be used
  • type of intervention to be included
  • what happens to information provided
  • what sort of tool it is
  • details on accessing and using the tool

There is also a link to obesity case study examples.

The ‘database of interventions’ section is the first national database for weight management, diet and PA interventions, providing information on programmes that have been submitted via the NOO evaluation data collection tool since 2011. It has 4 dropdown menus:

  • geography
  • intervention setting
  • age group
  • included interventions

Links are provided to obesity case study examples; and information on obesity interventions in Ireland and Northern Ireland.

The ‘evaluation guidance’ section provides information and tools for planning the evaluation of interventions that, either directly or indirectly, prevent or reduce obesity. Nine are listed, with online links, including the ‘Framework of outcome measures recommended for use in the evaluation of childhood obesity treatment interventions: the CoRe framework’.

The ‘evaluation reports’ section has links to reports on the findings from the evaluation of nationally-led programmes that influence obesity or its determinants, grouped under headings:

  • behaviour change
  • physical activity
  • healthy eating
  • healthy children and families
  • health inequalities/healthy communities

Links are also given to other evaluation reports, case studies and interventions, to be found on the PHE CoRE pages.

The ‘evaluation websites’ section contains information on, and links to, the evaluation of public health programmes from:

  • NHS Scotland
  • US Centers for Disease Control and Prevention (CDC)
  • Evaluation Working Group Resources
  • National Institute for Health and Clinical Excellence (NICE)
  • UK Clinical Research Collaboration (UKCRC)
  • Public Health Centres of Excellence
  • Social and Public Health Sciences Unit
  • Evaluating the Health Effects of Social Interventions

The ‘evaluation training’ section identifies 3 training sessions in understanding evaluation in public health, on weight management, PA, and dietary interventions (2014). Links are provided for more information.

Strengths

Strengths include:

  • clear, well laid out
  • has a section where readers can search for interventions by target age group, geographical area, type of intervention
  • links to frameworks, data collection tool, guidance, reports, websites, training and further information

Limitations

Limitations include:

  • would be useful if the title mentioned obesity
  • specific to obesity
  • UK focused

PHE obesity on Twitter

NOO slidesets archive

NOO knowledge updates archive

Register for the Public Health England Obesity Knowledge and Intelligence team mailing list

Additional comments

This site aims to collect resources together for the reader to search rather than providing these resources.

Standard evaluation framework: dietary interventions

The standard evaluation framework for dietary interventions aims to describe and explain the information that should be collected in any evaluation of an intervention that aims to improve dietary intake or associated behaviour. It is aimed at programme managers and commissioners.

Review details

Date: September 2012

Length: 40 pages

Lead author/organisation: National Obesity Observatory (NOO)

Themes

Background to evaluation: Overview of evaluation. Pre-evaluation preparatory work: Ethics, Needs assessment, Evaluation planning, Logic modelling. Evaluation processes: Overview of evaluation processes, Defining questions, Choosing outcomes, Describing the intervention, Data collection, Data analysis and interpretation. Types of evaluation: Process evaluation, Outcome evaluation, Fidelity. Additional support: Tools and toolkits.

Purpose and utility of guidance

Guidance on collecting information for evaluation of dietary interventions.

Primary target audience

The target audiences for this document are:

  • commissioners or managers of weight management interventions with a dietary component
  • commissioners or managers of dietary interventions
  • obesity and diet leads in local authorities
  • practitioners running weight management interventions with a dietary component
  • evaluators of dietary interventions or weight management interventions with a dietary element

Summary/overview

The aim of the standard evaluation frameworks (SEF) is to support high quality, consistent evaluation of weight management, diet and physical activity interventions in order to increase the evidence base.

The guidance document’s ‘Introduction’ (Section 1) identifies the aims of the document, and discusses why a dietary SEF is necessary.

Section 2 on ‘principles of evaluation’ identifies 2 core evaluation questions: what are the objectives, and how will they be measured? It then differentiates between primary and secondary outcome measures, illustrating with example projects/objectives, and discusses the utility of ‘logic models’ to identify primary and secondary measurement indicators.
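To make the logic model idea concrete, here is a minimal sketch. It is not taken from the SEF itself: the programme, stages and indicators are invented for illustration.

```python
# Illustrative only: a minimal logic model for a hypothetical
# community cooking-skills programme. Stages and indicators are
# invented for the example, not taken from the SEF.
logic_model = {
    "inputs": ["funding", "2 trained facilitators", "community venue"],
    "activities": ["8 weekly cooking and nutrition sessions"],
    "outputs": ["sessions delivered", "participants completing the course"],
    "short_term_outcomes": ["increase in daily fruit and vegetable intake"],
    "long_term_outcomes": ["proportion of participants meeting dietary recommendations"],
}

# Primary indicators usually follow from the outcomes the intervention
# exists to change; secondary indicators from outputs and short-term steps.
primary_indicators = logic_model["long_term_outcomes"]
secondary_indicators = logic_model["outputs"] + logic_model["short_term_outcomes"]

print("Primary indicators:", primary_indicators)
print("Secondary indicators:", secondary_indicators)
```

Reading the model from inputs through to long-term outcomes is what lets an evaluator trace each measurement indicator back to a specific link in the chain.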

Section 3 details selecting and measuring outcomes. It identifies factors that are important when choosing an outcome indicator, and identifies 4 main categories of outcome when dietary intake is the primary outcome:

  • intake of a particular food or food group
  • intake of a particular nutrient
  • overall energy intake
  • meeting of dietary recommendations

It then discusses different options for measuring dietary intake. Different methods to measure outcomes are described with pros and cons identified. Potential sources of error and bias are outlined and further resources/toolkits on dietary assessment and measurement are identified. A flow chart illustrates the process to be followed when planning an intervention.

Section 4 sets out the SEF and presents criteria necessary to undertake a comprehensive and robust dietary evaluation. Fifty-two essential and desirable criteria are listed. Essential criteria are presented as the minimum recommended data for evaluation purposes. Desirable criteria are additional data that would enhance the evaluation. For example, it is essential to have data on the primary and secondary aims and objectives, intervention timescales, and descriptions of the intervention.

Desirable criteria include a rationale for the intervention (including theoretical basis and logic model) and duration of funding. The 52 criteria relate to each stage of the evaluation, from title/name of the evaluation to dissemination of the findings. Explanatory notes (Section 5) are accessed by clicking on the links in the interactive document.
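In practice the SEF functions as a structured checklist. The sketch below shows one way an evaluator might track essential versus desirable criteria; the handful of items shown are drawn from the examples above, the ‘recorded’ flags are invented, and the code is illustrative rather than part of the framework.

```python
# Illustrative sketch of working with an essential/desirable checklist.
# Only a few of the 52 criteria are shown, and the "recorded" flags
# are invented example data.
criteria = [
    {"item": "primary and secondary aims and objectives", "essential": True, "recorded": True},
    {"item": "intervention timescales", "essential": True, "recorded": False},
    {"item": "description of the intervention", "essential": True, "recorded": True},
    {"item": "rationale, theoretical basis and logic model", "essential": False, "recorded": False},
    {"item": "duration of funding", "essential": False, "recorded": True},
]

missing_essential = [c["item"] for c in criteria if c["essential"] and not c["recorded"]]
missing_desirable = [c["item"] for c in criteria if not c["essential"] and not c["recorded"]]

# Missing essential items mean the evaluation falls short of the minimum
# recommended dataset; missing desirable items only limit its richness.
print("Missing essential data:", missing_essential)
print("Missing desirable data:", missing_desirable)
```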

Strengths

Strengths include:

  • very accessible: interactive, includes visuals such as flow charts
  • clear about purpose and target audience
  • clearly written and takes reader through each stage with examples
  • includes information on collecting the cost of an intervention as well as the resources and skills required
  • provides evaluation support to a wide range of professionals involved in dietary interventions (it is intended to increase the number and quality of evaluations of such programmes)
  • may contribute to the development of a core dataset that would increase the comparability of evaluations
  • gives a basic overview of evaluation so users have a minimum understanding prior to using the framework
  • provides a checklist distinguishing essential and desirable criteria (allows users to prioritise when resources are scarce)
  • suited to a range of dietary interventions such as individual and group based
  • a wide range of stakeholders including practitioners and academics were heavily involved in the development
  • includes a helpful glossary of terms
  • includes policy context
  • focus on sustainability
  • discusses generalisability

Limitations

Limitations include:

  • checklist approaches may be too simplistic for complex interventions; specific situations may require adaptation of the essential and desirable criteria
  • specific to interventions for dietary intake
  • offers a basic overview of evaluation that may be insufficient for users to conduct high quality evaluations with no additional help
  • members of the public were not involved in development of the SEF
  • bias: does not identify the potential for professional bias, such as when constructing or asking survey/interview questions
  • essential/desirable process criteria do not include ‘participant motivation’ to take part in (opt in to) the intervention, whereas opt-out data is essential
  • need for qualitative guidance not identified
  • qualitative research definition in the glossary limited
  • UK/England focused
  • it does not provide an introduction to evaluation but refers to where one can be found
  • it is aimed at interventions that work at individual or group level, not at population level
  • focuses on food guidelines

Collection of resources on evaluation - CoRE

Additional comments

Read alongside the other Public Health England documents on evaluation particularly the SEF on physical activity.

Standard evaluation framework: physical activity interventions

The standard evaluation framework for interventions targeting physical activity aims to describe and explain the information that should be collected in any evaluation of an intervention that aims to increase physical activity or associated behaviour. It is aimed at programme managers and commissioners.

Review details

Date: September 2012

Length: 39 pages

Lead author/organisation: NHS National Obesity Observatory (NOO)

Themes

Background to evaluation: Overview of evaluation. Pre-evaluation preparatory work: Ethics, Needs assessment, Evaluation planning, Logic modelling. Evaluation processes: Overview of evaluation processes, Defining questions, Choosing outcomes, Describing the intervention, Data collection, Data analysis and interpretation. Type of evaluation: Process evaluation, Outcome evaluation, Fidelity. Additional support: Tools and toolkits.

Purpose and utility of guidance

Builds on the standard evaluation framework (SEF) for weight management interventions, published by NOO in April 2009, applying these principles to physical activity interventions. The SEF contains a list of ‘essential’ and ‘desirable’ criteria for data required for a comprehensive and robust evaluation.

Primary target audience

The target audiences for this document are:

  • commissioners or managers of weight management or obesity prevention interventions with a physical activity element
  • commissioners or managers of physical activity interventions
  • physical activity or sport and leisure leads in local authorities
  • commissioners or managers of active travel projects
  • practitioners running physical activity or active travel projects

Summary/overview

The aim of the standard evaluation frameworks (SEF) is to support high quality, consistent evaluation of weight management, diet and physical activity interventions in order to increase the evidence base.

Section 1 ‘Introduction’:

  • identifies the aims of the document
  • defines physical activity
  • discusses why a SEF in physical activity is necessary

Section 2 ‘Principles of evaluation’:

  • identifies 2 core evaluation questions: what are the objectives and how will they be measured?
  • differentiates between primary and secondary outcome measures, illustrating with example projects/objectives
  • discusses the utility of ‘logic models’ to identify primary and secondary measurement indicators

Section 3:

  • details selecting and measuring outcomes
  • identifies 4 main ways to classify physical activity: frequency, intensity, time and type (FITT), as sketched after this section summary
  • discusses different options for measuring physical activity
  • outlines potential sources of error and bias
  • considers existing datasets for evaluation purposes
  • identifies further resources (tools) on physical activity assessment

A flow chart illustrates the process to be followed when planning an intervention.
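To illustrate the FITT classification mentioned above, here is a minimal sketch. The sessions, the indicator and the code are invented for the example and are not part of the framework.

```python
from dataclasses import dataclass

# Illustrative sketch of the FITT classification. The sessions and the
# indicator below are invented example data, not taken from the SEF.
@dataclass
class ActivitySession:
    frequency_per_week: int  # F: how often the activity happens
    intensity: str           # I: e.g. "light", "moderate", "vigorous"
    minutes: int             # T: time per session
    activity_type: str       # T: e.g. "walking", "swimming"

sessions = [
    ActivitySession(3, "moderate", 30, "brisk walking"),
    ActivitySession(1, "vigorous", 45, "swimming"),
]

# A simple outcome indicator: weekly minutes of at least moderate activity.
weekly_minutes = sum(
    s.frequency_per_week * s.minutes
    for s in sessions
    if s.intensity in ("moderate", "vigorous")
)
print(f"Weekly moderate-to-vigorous minutes: {weekly_minutes}")  # 135
```

Recording all four FITT dimensions, rather than a single activity score, allows an evaluation to distinguish changes in how often people are active from changes in how hard or how long.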

Section 4 sets out the SEF and presents 52 essential and desirable criteria needed to undertake a comprehensive and robust evaluation.

Essential criteria are presented as the minimum recommended data for evaluation purposes. Desirable criteria are additional data that would enhance the evaluation. For example, it is essential to have data on the primary and secondary aims and objectives, intervention timescales, and descriptions of the intervention.

Desirable criteria include a rationale for the intervention (including theoretical basis and logic model) and duration of funding.

The 52 criteria relate to each stage of the evaluation – from title/name of the intervention to analysis and interpretation. Explanatory notes (Section 5) are accessed by clicking on the links in the interactive document.

Strengths

Strengths include:

  • provides evaluation support to a wide range of professionals involved in planning and evaluating physical activity projects and interventions (it is intended to increase the number and quality of evaluations of such programs)
  • may also contribute to the development of a core dataset that would increase the comparability of evaluations
  • gives a basic overview of evaluation so users have a minimum understanding prior to using the framework
  • provides a useful checklist distinguishing essential and desirable criteria (this allows users to prioritise when resources are scarce)
  • suited to a range of physical activity interventions, including individual and group-based interventions
  • a wide range of stakeholders including practitioners and academics were heavily involved in the development
  • includes a helpful glossary of terms
  • accessible, interactive, includes visuals such as flowcharts and is 39 pages long
  • clear about purpose and target audience
  • clearly written and takes reader through each stage with examples
  • contains rapid review of physical activity measurements
  • includes information on collecting the cost of an intervention as well as the resources and skills required
  • includes policy context
  • focus on sustainability
  • discusses generalisability

Limitations

Limitations include:

  • UK/England focused
  • it does not provide an introduction to evaluation but refers to where one can be found
  • focuses on physical activity programmes
  • it is aimed at interventions that work at individual or group level, not at population level, for example those programmes which involve changing the physical environment
  • checklist approaches may be too simplistic for the nuances of complex interventions. Individual situations may require modifications to the essential and desirable criteria
  • the framework is specific to interventions for physical activity
  • the document offers a very basic overview of evaluation and may be insufficient for practitioners to conduct high-quality evaluations without additional help
  • members of the public were not involved in the development of the framework
  • the section on potential bias refers to ‘respondent’ bias and does not identify the potential for professional bias such as when constructing or asking survey/interview questions
  • essential/desirable process criteria do not include ‘participant motivation’ to take part in (opt in to) the intervention (whereas opt-out data is essential)
  • does not identify need for qualitative guidance
  • qualitative research definition in the glossary is limited

Additional comments

Accompanies the dietary guideline. Should be read alongside other evaluation work from NOO.

Standard evaluation framework: weight management interventions

The standard evaluation framework for weight management interventions aims to describe and explain the information that should be collected in any evaluation of an intervention that aims to support weight management or associated behaviour. It is aimed at programme managers and commissioners.

Review details

Date: 2009

Length: 60 pages

Lead author/organisation: National Obesity Observatory, Public Health England

Themes

Background to evaluation: Overview of evaluation. Pre-evaluation preparatory work: Ethics, Needs assessment, Evaluation planning, Logic modelling. Evaluation processes: Overview of evaluation processes, Defining questions, Choosing outcomes, Describing the intervention, Data collection, Data analysis and interpretation. Types of evaluation: Process evaluation, Outcome evaluation, Fidelity. Additional support: Tools and toolkits.

Purpose and utility of guidance

This document should be used when planning an evaluation of a weight management intervention.

Primary target audience

Local authorities and clinical commissioning groups, other organisations running weight management interventions and evaluators.

Summary/overview

The aim of the Standard Evaluation Frameworks (SEF) is to support high quality, consistent evaluation of weight management, diet and physical activity interventions in order to increase the evidence base.

The guidance document opens with a section entitled ‘An introduction to evaluation’, which discusses why evaluations are necessary before moving on to how to specify research questions, aims and objectives.

Definitions of types of evaluations and evaluation designs are discussed. The document then reviews:

  • choosing and measuring outcomes (including logic modelling)
  • methods of data collection
  • analysis and reporting
  • managing budgets
  • ethical issues

General principles (dos and don’ts) and a step by step guide are then provided.

The second section of the document is the framework. This includes 58 essential and desirable criteria covering the data that should be collected when evaluating weight management interventions. Essential criteria are presented as the minimum recommended data for evaluating a weight management intervention. Desirable criteria are additional data that would enhance the evaluation.

For example, the framework states that it is essential to have data on factors including the primary and secondary aims and objectives, intervention timescales, and descriptions of the intervention. However, theoretical underpinnings, and training needs are only desirable.

The 58 criteria relate to each stage of the evaluation, from title/name of the intervention to analysis and interpretation. Explanatory notes are accessed by clicking on the links in the interactive document.

Strengths

Strengths include:

  • clear and comprehensive tool which is accompanied by explanatory notes
  • signposted for readers with different levels of knowledge
  • aims to improve the standard of evaluation in this area
  • aims to improve generalisability of results
  • PHE will be evaluating the usefulness and impact of the SEF and has asked readers for feedback
  • includes a consideration of cost of intervention per outcome, which is often ignored
  • provides evaluation support to those in the field of weight loss
  • may also contribute to the development of a core dataset that would increase the comparability of evaluations
  • provides a basic overview of evaluation to ensure users have a minimum understanding prior to using the framework
  • provides a useful checklist that distinguishes between criteria that are essential and those that are only desirable, enabling practitioners to prioritise when resources are scarce
  • the SEF can be used with a range of weight management interventions, including individual, group or community-based interventions
  • a wide range of stakeholders including practitioners and academics were heavily involved in the development
  • is currently undergoing an evaluation to assess its utility

Limitations

Limitations include:

  • UK focused
  • obesity focused
  • checklist approaches may be too simplistic for the nuances of complex interventions. Individual situations may require modifications to the essential and desirable criteria
  • is specific only to interventions for weight management
  • offers a very basic overview of evaluation that may not be sufficient for practitioners to conduct high-quality evaluations without additional help

Collection of resources on evaluation

Standard evaluation framework for physical activity interventions

Standard evaluation framework for dietary interventions

The World Bank: monitoring and evaluation

The Monitoring and evaluation: some tools, methods and approaches document is an overview of a sample of tools, methods and approaches. A series of page-long summaries presents the purpose and use, advantages and disadvantages, costs, skills, and time required to use each one.

Review details

Date: 2004

Length: 26 pages

Lead author/organisation: The World Bank

Themes

Background to evaluation: Using theory in evaluation. Pre-evaluation preparatory work: Logic modelling. Evaluation processes: Choosing outcomes, Research design and methods, Data collection, Data analysis and interpretation. Types of evaluation: Outcome evaluation, Economic evaluation.

Purpose and utility of guidance

To strengthen awareness of monitoring and evaluation.

Primary target audience

Government officials, development managers, and civil society.

Contextual information

The World Bank’s official goal is to reduce poverty by providing financial assistance and by supporting developing countries through policy advice, research analysis, and technical assistance.

Summary/overview

Topics include:

  • performance indicators
  • the logical framework approach
  • theory based evaluation
  • formal surveys
  • rapid appraisal methods
  • participatory methods
  • public expenditure tracking surveys
  • cost-benefit and cost-effectiveness analysis (a worked sketch follows this list)
  • impact evaluation

The document notes that this list is not comprehensive, nor intended to be.
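Cost-effectiveness comparisons of the kind the document lists often reduce to an incremental cost-effectiveness ratio (ICER). The worked sketch below uses invented figures; the document itself does not prescribe this calculation.

```python
# Worked cost-effectiveness sketch with invented figures; the document
# itself does not prescribe this calculation.
cost_new, cost_old = 120_000.0, 80_000.0  # total programme costs
effect_new, effect_old = 400.0, 300.0     # e.g. cases averted by each programme

# Incremental cost-effectiveness ratio (ICER):
# the extra cost paid per extra unit of effect gained.
icer = (cost_new - cost_old) / (effect_new - effect_old)
print(f"ICER: {icer:.0f} per additional case averted")  # 400
```

A decision maker would then compare this ratio with what they are willing to pay per unit of effect.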

Strengths

Strengths include:

  • available in several languages
  • provides a good overview of monitoring and evaluation tools, methods and approaches, along with information about where to find further information
  • useful consideration of the purpose and use, advantages and disadvantages, costs, skills, and time required to use the different approaches
  • includes cost-effectiveness
  • includes expenditure tracking to outcomes
  • discusses cost of evaluation
  • refers to other World Bank evaluation sites

Limitations

Limitations include:

  • does not provide information on how to monitor or evaluate a program
  • title is vague
  • it would be helpful to explain what the World Bank does at the outset
  • it would be helpful to state who the target audience is
  • it should be placed within the context of other monitoring and evaluation resources in international development, for example DFID, WHO, UN
  • it is not clear why certain tools were chosen or where the reader would get advice on other tools
  • not clear what the rationale is behind this document
  • closer to a glossary than a handbook
  • does not give stakeholder engagement the significance it requires

Treasury Board of Canada: program evaluation methods

Program evaluation methods: measurement and attribution of program results is aimed at those working in the federal government of Canada. It is not a step-by-step approach but gives the background of evaluation and the pros and cons of doing things in different ways. It does not include new evaluation methodology, taking a more traditional research-based approach instead. It does include cost-benefit analyses.

Review details

Type of document/accessibility: online PDF

Date: undated

Length: 152 pages

Lead author/organisation: Treasury Board Canada

Themes

Evaluation processes: Research design and methods, Data collection, Data analysis and interpretation. Additional support: Tools and toolkits.

Purpose and utility of guidance

To help practitioners understand the methodological considerations involved in measuring and assessing program outcomes.

Primary target audience

Practitioners and other interested parties.

Contextual information

The Treasury Board is a Cabinet committee of the Queen’s Privy Council of Canada. The Treasury Board is responsible for accountability and ethics, financial, personnel and administrative management, comptrollership, approving regulations and most Orders-in-Council.

Summary/overview

Chapter 1 is an introduction to evaluation. It discusses why evaluation is important in federal government and describes the phases involved in the evaluation process (evaluation assessment/planning; evaluation study; and decision making based on findings and recommendations).

It discusses evaluation issues (or what evaluation can be used to address) and methods for addressing them. For example, the document suggests that evaluation can be used to assess continued relevance; results, or cost-effectiveness.

The chapter differentiates between program theory issues (the rationale behind programs) and program results (whether the program was effective). It highlights that both intended and unintended consequences should be considered, and suggests that 2 major problems need to be considered:

  • measurement problems
  • attribution problems (whether the results can be attributed to the program)

Chapter 2 discusses the kinds of conclusions that can be drawn from an evaluation. It presents a conceptual framework for developing evaluation strategies and discusses various threats/issues that may arise:

  • measurement
  • attribution
  • feasibility
  • practical

It concludes by suggesting that multiple strategies generate the most credible conclusions.

Chapter 3 discusses evaluation designs. The chapter starts with an overview of the ‘ideal’ design (RCT), quasi-experimental designs, and implicit designs (before and after trials). Advantages and disadvantages of each are presented.
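To illustrate why the chapter ranks before-and-after (implicit) designs below designs with a comparison group, here is a minimal difference-in-differences sketch. The figures are invented; this is not an example from the document.

```python
# Invented example data: mean outcome scores before and after a programme.
treat_pre, treat_post = 50.0, 62.0      # group receiving the programme
control_pre, control_post = 50.0, 58.0  # comparable group without it

# A before-and-after ("implicit") design attributes the whole change
# to the programme:
naive_effect = treat_post - treat_pre  # 12.0

# A quasi-experimental difference-in-differences estimate nets out the
# change that occurred anyway in the comparison group:
did_effect = (treat_post - treat_pre) - (control_post - control_pre)  # 4.0

print(f"Before-and-after estimate: {naive_effect}")
print(f"Difference-in-differences estimate: {did_effect}")
```

The gap between the two estimates is exactly the attribution problem raised in chapter 1.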

Chapter 4 discusses data collection methods. The chapter describes:

  • qualitative and quantitative approaches
  • longitudinal and cross-sectional approaches
  • subjective and objective data, and primary and secondary data

It also discusses 6 common data collection methods:

  • literature review
  • file review
  • natural observation
  • surveying
  • expert opinion
  • case studies

Chapter 5 provides information on analytical methods. This includes statistical analysis (descriptive and for inference), qualitative analysis, modelling and cost-effectiveness.

Strengths

Strengths include:

  • clearly written and easy to navigate
  • describes research methods needed for evaluation
  • goes beyond a superficial or cookbook approach to evaluation
  • emphasises the need for consulting people with experience in research and evaluation and gives further sources
  • describes the strengths and weaknesses of each method included
  • each chapter contains a summary and references to further information
  • provides a useful introduction to understanding evaluation, what can be learnt from it and different strategies and approaches
  • discusses issues surrounding validity and quality of trials
  • discusses data analysis issues
  • includes cost benefit analyses
  • good emphasis and explanation of causality

Limitations

Limitations include:

  • undated
  • many of the references are over 30 years old and may be dated in terms of evaluation theory and practice
  • does not describe modern evaluation theory or approaches such as stakeholder engagement or logic modelling
  • seems dated, for example with the emphasis on the research focused approach
  • little information on qualitative approaches
  • the focus is more on understanding evaluation, rather than doing evaluation

Additional comments

Evaluation studies identified in this search seem to take a research perspective if they are older and a project management one if they are newer.

UK Evaluation Society: guidelines for good practice in evaluation

The UK Evaluation Society is a membership forum for those working in evaluation. It has generated ‘Guidelines for good practice in evaluation’ to help commissioners, practitioners and participants establish good practice in the conduct of evaluation.

Review details

Date: ongoing updates

Lead author/organisation: UK Evaluation Society

Themes

Background to evaluation: Using theory in evaluation. Additional support: Training.

Purpose and utility of guidance

These guidelines assimilate a diverse set of principles for action in evaluation and are intended for use in any domain, discipline or context. They outline issues for consideration by a range of stakeholders involved in the evaluation process, encapsulating good practice from the point of view of evaluators themselves, commissioners, participants and those involved in self-evaluation in organisations.

Primary target audience

Commissioners, practitioners and participants working in evaluation.

Contextual information

This is the web page for members of the UK Evaluation Society which is for professional evaluators in the UK. This organisation exists to promote and improve the theory, practice, understanding and utilisation of evaluation and its contribution to public knowledge.

It was founded in 1994 and members include evaluation professionals, practitioners and evaluation commissioners from:

  • national government
  • local government
  • the research community
  • independent consultancies
  • the voluntary sector

Summary/overview

This resource:

  • provides information on the society and what it does as well as links to its publications
  • publishes a magazine on evaluation two or three times a year
  • contains background on good practice guidelines
  • contains evaluation jobs and tenders
  • contains information on news, training and conferences
  • has networks in different parts of the UK
  • has a forum for opinions

The society provides a document entitled ‘Guidelines for good practice in evaluation’. This is a list of recommendations that evaluators and commissioners should adhere to. It is more of a checklist, and does not provide information on how to achieve each point.

Strengths

Strengths include:

  • promotes links between evaluators
  • members from different professions and in different fields
  • guidelines grounded in practice and intended to evolve
  • links to useful information in evaluation

Limitations

Limitations include:

  • UK focused
  • although it claims to publish jargon-free guidelines, the description of these is written in a very complex, jargon-heavy style and would benefit from plain English
  • could be clearer about what it is for
  • not really about evaluation per se but a membership body for those working in evaluation

United Nations Evaluation Group: norms for evaluation in the UN system

The authors of Norms for evaluation in the UN system aim to define norms for evaluation in the UN, facilitate system-wide collaboration on evaluation, and ensure that evaluations across the UN follow agreed basic principles, in order to strengthen and standardise evaluation.

Review details

Date: April 2005

Length: 11 pages

Lead author/organisation: United Nations Evaluation Group

Themes

Background to evaluation: Common challenges. Pre-evaluation preparatory work: Contracting and communications, Ethics. Evaluation processes: Research design and methods. Additional support: Training.

Primary target audience

Those working in any aspect of evaluation within the UN system.

Contextual information

The UN is an intergovernmental organisation that promotes international co-operation, particularly in:

  • global governance
  • consensus building
  • peace and security
  • justice and international law
  • non-discrimination and gender equity
  • sustained socio-economic development
  • sustainable development
  • fair trade
  • humanitarian action
  • crime prevention

The United Nations Evaluation Group (UNEG) is a group of professional practitioners who wished to define norms that contribute to the professionalisation of evaluation and provide guidance to evaluation offices in their work.

Summary/overview

This document sets out:

  • definition: how evaluation is defined and how it differs from similar activities such as appraisal
  • responsibility: which bodies are responsible for evaluation in the UN and how this is governed
  • policy: what each organisation should have in their evaluation policy
  • intentionality: a clear intention to use the findings and evaluations carried out at the right time (topics should be selected purposively)
  • impartiality: lack of bias and appropriate stakeholder involvement
  • independence: evaluator should be independent of policy making
  • evaluability: clear intent and sufficient material to contribute towards the evaluation
  • quality of evaluation: designed correctly and reported on clearly and consistently
  • competencies for evaluation: those carrying it out should have the correct skills
  • transparency and consultation: consultation with appropriate people and results open to the public
  • evaluation ethics: integrity, and sensitivity to gender and race
  • follow up to evaluation: commitment to follow up on results
  • contribution to knowledge: building lessons learned and making them accessible to the right people

Strengths

Strengths include:

  • having norms for UN evaluation helps consistency and comparison
  • clear about what it is and how it can be used
  • good guide to what should be in place before evaluation begins in any organisation

Limitations

Limitations include:

  • could usefully be rewritten in plain English as the language is very complex
  • no information on economic effectiveness
  • not a guide to evaluation

United Nations: handbook on planning, monitoring and evaluating for development results

Handbook on planning, monitoring and evaluating for development results seeks to address new directions in planning, monitoring and evaluation within the context of the United Nations Development Programme (UNDP) and evaluation within the UN system.

It focuses on enhancing the results-based culture of UNDP and improving the quality of planning, monitoring and evaluation. It provides a basic understanding of this work within the UNDP context and the knowledge needed to carry out effective evaluation and monitoring. It includes:

  • stakeholder engagement
  • planning
  • the results framework
  • planning for change
  • resources and capacity required
  • monitoring UNDP policy
  • dealing with data
  • evaluation policy
  • different methods of evaluation
  • how to learn, generate and disseminate knowledge from evaluation

Review details

Date: 2009 (update of 2002 guide)

Length: 232 pages

Lead author/organisation: United Nations Development Programme (UNDP)

Themes

Background to evaluation: Common challenges. Pre-evaluation preparatory work: Contracting and communicating, Needs assessment, Evaluation planning, Stakeholder involvement. Evaluation processes: Overview of evaluation processes, Defining questions, Choosing outcomes, Describing the intervention, Research design and methodology, Data collection, Data analysis and interpretation. Types of evaluation: Outcome evaluation. Additional support: Quality assurance.

Purpose and utility of guidance

To strengthen a culture of evaluation.

Primary target audience

Those working in UN development programmes or in other development work who will be involved in evaluation but do not necessarily have evaluation experience or knowledge.

Contextual information

The UNDP is active in 166 countries around the world. Their remit is to support national capacity development for poverty reduction and the attainment of the Millennium Development Goals.

Summary/overview

This document sets out:

  • stakeholder engagement
  • planning
  • results framework
  • planning for change
  • resources and capacity required
  • monitoring UNDP policy
  • dealing with data
  • evaluation policy
  • different methods of evaluation
  • how to learn, generate and disseminate knowledge from evaluation

Chapter 1 describes the purpose of planning, monitoring and evaluation, provides definitions and principles that are integral to planning, monitoring, and evaluation (including ownership, engagement of stakeholders, focus on results, and focus on development).

Chapter 2 provides step-by-step guidance on how to undertake planning for results. It suggests 8 main deliverables should be produced from the planning stage:

  • outline of activities, schedule and cost
  • stakeholder influence and importance matrix
  • list of problems identified
  • prioritized list of problems
  • cause-effect diagram or problem tree analysis for each prioritized problem
  • vision statement for each prioritized problem
  • results map for each prioritized problem
  • results framework for the programme or project document

Chapter 3 is about planning for monitoring and evaluation. This concerns setting up the systems and processes necessary to ensure the intended results are achieved as planned. The chapter provides guidance on the planning and preparations for effective monitoring and evaluation of such development plans. Plans should include:

  • what is to be monitored and evaluated
  • the activities needed to monitor and evaluate
  • who is responsible for monitoring and evaluation activities
  • when monitoring and evaluation activities are planned (timing)
  • how monitoring and evaluation are carried out (methods)
  • what resources are required and where they are committed

Chapter 4 provides a detailed step by step guide on how to implement monitoring.

Chapter 5 describes:

  • why evaluation is important for UNDP
  • how evaluative information should be used
  • the UNDP evaluation policy
  • types of evaluations that are commonly conducted in UNDP
  • main roles and responsibilities in evaluation
  • evaluation requirements as stipulated in the evaluation policy

Chapter 6 describes the steps involved in preparing for and managing an evaluation including:

  • initiating the evaluation process
  • preparation
  • managing the process
  • using the evaluation

Chapter 7 is about ensuring quality.

Chapter 8 is about enhancing the use of knowledge from monitoring and evaluation.

The appendix has a series of practical templates to support evaluation.

Further support on a number of issues is provided in companion documents:

Guidance on outcome level evaluation

Guidance for conducting evaluations of UNDP-supported global environment facility financed projects

Impact evaluation

Strengths

Strengths include:

  • available in several languages
  • very good overview of evaluation in development work which takes readers through the programme cycle of evaluation
  • based on consultation with experts and stakeholders, including a series of workshops
  • useful annexes which support the evaluation process in a step by step way
  • emphasises ownership of evaluation methods by those who are working in the programme
  • includes some in-country examples
  • provides a good summary of different types of evaluation

Limitations

Limitations include:

  • repeats much of what can be found elsewhere
  • no reference to economic evaluation or cost-effectiveness

UN sustainable development goals

United Nations: overview of violence against women and girls (2012)

UN overview of violence against women and girls (2012) provides information and guidance for programming to address violence against women and girls. A module on monitoring and evaluation is included. This module includes information on the background to monitoring and evaluation, how to conduct it, and specific issues for different areas of work.

Review details

Date: 2012

Lead author/organisation: United Nations

Themes

Background to evaluation: Overview of evaluation, Common challenges, Using theory in evaluation. Pre-evaluation preparatory work: Needs assessment, Evaluation planning, Logic modelling. Evaluation processes: Choosing outcomes, Describing the intervention, Research design and methods, Data collection, Data analysis and interpretation. Types of evaluation: Outcome evaluation.

Purpose and utility of guidance

The site’s resources include a module on evaluation.

Primary target audience

Those working worldwide in any area related to ending violence on women and girls, including those working in the field as well as researchers and policymakers and those who are victims of such violence.

Contextual information

The United Nations is an organisation which was formed after the Second World War to maintain international peace and security. The primary aim of the site is to encourage and support evidence-based programming to more efficiently and effectively design, implement, monitor and evaluate initiatives to prevent and respond to violence against women and girls.

The website contains:

  • a step by step programme guidance for those working in campaigns related to ending violence against women and girls
  • statistics on gender violence throughout the world
  • detailed resources for implementation including how to carry out a campaign, what educational materials to provide, and different approaches that can be used
  • specific information about events and training sessions and further help

All of these aim to support those who are managing, monitoring and evaluating campaigns in this area. It also provides sources of support throughout the world for those who are victims of gendered violence.

Summary/overview

A specific module on monitoring and evaluation provides information on:

  • what monitoring and evaluation is, why it is important and challenges that are likely to be faced
  • how to prepare for monitoring and evaluation, including needs assessments, frameworks (conceptual and logic), choosing indicators, and monitoring and evaluation plans
  • conducting monitoring and evaluation: what questions to ask, how to conduct baseline assessments (qualitative and quantitative), monitoring, outcome and impact
  • monitoring and evaluation for specific areas of work (health, justice, community mobilisation, conflict/post-conflict and emergency)

Strengths

Strengths include:

  • clear and practical step by step guide to the main processes involved in planning and conducting an evaluation
  • world-wide focused
  • available in English, Spanish and French
  • easy to navigate
  • links are provided throughout to additional material

Limitations

  • does not set out what it is or who the target audience are
  • would be helpful if the material was also available as a PDF
  • specifically aimed at those working in gendered violence
  • does not include economic evaluation

Additional comments

Has a section on monitoring and evaluation but that is not the primary purpose of this document.

US Department of Health and Human Services: global health evidence evaluation framework (white paper)

Global health evidence evaluation framework (white paper) is aimed at an academic and clinical audience rather than those who are new to evaluation. It explores the problem that different evidence frameworks can give different results, testing different frameworks on different areas of public health.

The authors conclude that existing frameworks for the assessment of public health evidence do not deliver major pieces of information to inform best practices for community and large-scale global health programmes. In particular there is a lack of information on implementation and sustainability.

Review details

Date: January 2013

Length: 149 pages

Lead author/organisation: Agency for Healthcare Research and Quality (AHRQ), prepared by the Southern California Evidence-based Practice Centre (EPC) under contract to AHRQ. Part of a United States Agency for International Development (USAID) programme.

Themes

Background to evaluation: Assessing the evidence, Policy, Evaluability.

Purpose and utility of guidance

To improve the quality of healthcare by helping decision makers to make well-informed decisions in conjunction with clinical judgment and other information on resources and patients’ circumstances. This document may be used as the basis for the development of clinical guidelines and reimbursement and coverage policy. The project aimed to develop an evidence framework to inform global health policy.

Primary target audience

Healthcare decision makers: patients and clinicians, health system leaders and policymakers.

Contextual information

The AHRQ is a US agency which sponsors the development of evidence reports and technology assessments to improve the quality of healthcare in the United States.

These provide organisations with comprehensive, science-based information on common, costly medical conditions and new healthcare technologies and strategies.

AHRQ expects that the Evidence-based Practice Centre evidence reports and technology assessments will inform individual health plans, providers and purchasers, as well as the healthcare system as a whole, by providing important information to help improve healthcare quality. The reports undergo peer review prior to their release as a final report.

Summary/overview

An extensive literature search of published and grey literature was carried out with input from a multidisciplinary Technical Expert Panel (TEP).

This identified 6 existing evidence frameworks for public health/global health interventions and applied these frameworks to the evidence bases for 3 exemplar interventions:

  • household water chlorination
  • prevention of mother-to-child transmission of HIV
  • lay community health workers to reduce child mortality

The findings identified a gap in the reporting of information once an intervention had been implemented. The authors then identified 10 criteria covering the areas identified by the expert panel. These were then tested on 3 published effectiveness articles for each of the 3 exemplar interventions.

The results showed that assessment of the same evidence could give different results regarding the strength of that evidence depending on which framework was used. All frameworks focused on efficacy and/or effectiveness, and often on how study participants were allocated rather than on other aspects of study quality such as implementation. None explicitly assessed costs or sustainability, which made it difficult for policymakers to judge how sensitive effectiveness was to differences in context, and the scalability and sustainability of the intervention. Incorporating insights from other frameworks helped to address these gaps.

The authors concluded that existing frameworks for the assessment of public health evidence do not deliver major pieces of information to inform best practices for community and large-scale global health programmes. In particular, there is a lack of information on implementation and sustainability.

Strengths

Strengths include:

  • structured as a short research report with references and annexes for further information
  • states clearly the purpose of the organisation and of this work
  • clear, logical and thorough
  • comprehensive search and use of expert panel

Limitations

Limitations include:

  • aimed at an academic and clinical audience, not at helping people to carry out evaluation
  • specifically for global health policy although it might be useful in other settings


Additional comments

This is of a different nature to most of the studies included in this search, as it aims to develop a framework rather than support people carrying out evaluations. It evaluated existing frameworks and assessed their strengths and limitations. It might be useful to read in conjunction with other literature in the field of international development found in this search.

US Department of Health and Human Services: guide to analysing the cost-effectiveness of community public health prevention approaches

Guide to analysing the cost-effectiveness of community public health prevention approaches is intended to support the conduct of cost-effectiveness analyses. It focuses on the main questions and issues that arise, and provides a list of resources for anyone who requires further guidance or more depth of information.

Review details

Date: 2006

Length: 94 pages

Lead author/organisation: Amanda A. Honeycutt, PhD, Laurel Clayton, BA, Olga Khavjou, MA, Eric A. Finkelstein, PhD, Malavika Prabhu, BS, Jonathan L. Blitstein, PhD, W. Douglas Evans, PhD, Jeanette M. Renaud, PhD

Themes

Pre-evaluation preparatory work: Budgeting. Evaluation Processes: Defining questions, Choosing outcomes, Describing the intervention, Research design and methods, Data collection, Data analysis and interpretation. Types of evaluation: Economic evaluation

Purpose and utility of guidance

To support program managers and evaluators in understanding, designing and performing cost-effectiveness evaluations of community-based public health initiatives.

Primary target audience

Program managers and evaluators of community-based public health programmes.

Contextual information

The Assistant Secretary for Planning and Evaluation (ASPE) advises the Secretary of the Department of Health and Human Services on policy development in many areas including health, and provides advice and analysis on economic policy. The ASPE is tasked with coordinating the Department’s evaluation, research and demonstration activities. The ASPE conducts research and evaluation studies; develops policy analyses; and estimates the costs and benefits of policy alternatives under consideration by the Department or Congress.

Summary/overview

A guide to conducting cost-effectiveness analyses.

Chapter 2 discusses planning for a cost-effectiveness study. It covers activities from defining the study question and determining the time frame and intervention, to selecting the type of economic study to conduct (cost analysis, cost-effectiveness analysis or cost-benefit analysis).
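
As a purely illustrative sketch (the notation below is generic and not taken from the guide), the 3 study types differ in the summary quantity they report: a cost analysis reports total programme cost, a cost-effectiveness analysis reports the cost per unit of health outcome gained, and a cost-benefit analysis values outcomes in monetary terms and reports net benefit:

\[
C = \sum_{t} c_t, \qquad \text{CER} = \frac{C_{\text{programme}} - C_{\text{comparator}}}{E_{\text{programme}} - E_{\text{comparator}}}, \qquad NB = B - C
\]

where $c_t$ is the programme cost in period $t$, $E$ is a measure of health outcome (for example, cases averted) and $B$ is the monetary value of benefits.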

Chapter 3 covers identifying and measuring outcomes. It begins with an overview of existing large-scale community programs, and provides advice on how to select outcomes of such programs.

Chapter 4 covers identifying and quantifying program costs.

Chapter 5 outlines the process involved in conducting a cost-effectiveness study (with examples).

Chapter 6 discusses how to use the results of a cost-effectiveness analysis.

Strengths

Strengths include:

  • simple and straightforward language and easy to use
  • useful for those new to cost effectiveness analysis
  • presented in a logical way; outlining all major steps involved
  • provides useful summaries, checklists, and examples
  • provides a resource list for those wishing to extend knowledge further

Limitations

Limitations include:

  • fairly basic, so evaluators may need to use outside resources
  • largely intended for the evaluation of community-based programs in the US (although many of the techniques are applicable outside this area)

US Department of Health and Human Services: the program manager’s guide to evaluation (second edition)

The program manager’s guide to evaluation (second edition) is a step-by-step guide to evaluation for program managers without evaluation experience, particularly for those working in the area of children and families.

Review details

Date: 2010

Length: 122 pages

Lead author/organisation: Office of Planning, Research and Evaluation; Administration for Children and Families; US Department of Health and Human Services

Themes

Background to evaluation: Overview of evaluation, Common challenges, Using theory in evaluation. Pre-evaluation preparatory work: Budgeting, Contracting and communication, Needs assessment, Evaluation planning, Logic modelling. Evaluation processes: Overview of evaluation processes, Defining questions, Choosing outcomes, Describing the intervention, Research design and methods, Data collection, Data analysis and interpretation. Additional support: Tools and toolkits, Hiring an evaluator.

Purpose and utility of guidance

To support program managers in planning and implementing a program evaluation. It is not intended to turn people into evaluators but to support them when working with evaluators, whether external or internal.

Primary target audience

Program managers working in the area of children, youth, and families. However, the manual is generic and can be used for many types of evaluation.

Contextual information

The Office of Planning, Research and Evaluation (OPRE) is part of the Administration for Children and Families (ACF) division of the US Department of Health and Human Services.

The OPRE is responsible for performance management within the ACF. The OPRE:

  • conducts research and policy analysis
  • develops and oversees research projects
  • provides guidance and assistance on research and evaluation methods
  • carries out statistical analysis
  • carries out policy and program analysis
  • disseminates findings

Summary/overview

The document is designed so that each chapter addresses a specific step in the evaluation process.

The document starts with an overview of evaluation:

  • what it is
  • what questions it can answer
  • what is involved in conducting an evaluation
  • what an evaluation will cost

The next 2 chapters discuss external evaluators: who to choose and how to manage them.

The following chapter discusses preparation for evaluation:

  • deciding what to evaluate
  • developing a logic model
  • stating implementation and outcomes in measurable terms
  • identifying the context

The next chapter explains what to include in an evaluation. This contains information on:

  • the evaluation framework
  • evaluating implementation objectives
  • evaluating outcome objectives
  • procedures for managing and monitoring

Chapter 6 discusses how to get the information needed. This includes:

  • what specific information is needed to address objectives
  • what the best sources are
  • how data could be collected effectively and using what tools
  • how data collection can be monitored

Chapter 8 is about making sense of data: analysing data on both implementation and outcomes, and using the results.

The final chapter provides information on reporting and disseminating findings for different stakeholders.

At the end of the document, a list of evaluation resources, other guidance documents, evaluation consultants, data collection methods, and other toolkits is provided.

Strengths

Strengths include:

  • guidance assumes no or very little prior knowledge of evaluation
  • provides practical advice on conducting real-world evaluations, rather than academically rigorous evaluations
  • provides different and useful information that is not always mentioned in other guides, such as the makeup of your evaluation team, how to hire an evaluator and how to create a contract
  • simple to follow and does not provide too much in-depth information
  • generic so can be used for evaluation of any types of projects (not restricted to a specific type of project or condition)
  • considers how to manage external evaluators
  • a glossary of terms is included
  • the appendix includes a number of useful templates and frameworks to support evaluation, for example templates for evaluation plans
  • provides links to other resources
  • divides evaluations by cost (low, moderate, high) and gives advice for each
  • has a related document which gives detailed information on cost analysis

Limitations

Limitations include:

  • assumes evaluations will be relatively simple
  • may not provide enough detail to support practitioners to conduct a rigorous evaluation of a program without seeking additional information (for example, only briefly describes the difference between qualitative and quantitative data)
  • US focused
  • children and family focused
  • some worked examples would have been helpful
  • many of the references are over 30 years old

UNAIDS: a framework for monitoring and evaluating HIV prevention programmes for most-at-risk populations

A framework for monitoring and evaluating HIV prevention programmes for most-at-risk populations aims to address the problem that existing monitoring tools were developed for generalised epidemics rather than for HIV prevention among most-at-risk populations. The guide focuses on improving results by taking readers through the evaluation steps.

Review details

Date: English original 2008, reprint 2009

Length: 96 pages

Lead author/organisation: UNAIDS

Themes

Pre-evaluation preparatory work: Needs assessment, Stakeholder involvement. Types of evaluation: Process evaluation, Outcome evaluation, Fidelity.

Purpose and utility of guidance

To support programme planning and implementation, monitoring and evaluation, and the use of data and information for policy development and programme improvement in AIDS work in poorer countries.

Primary target audience

For national and subnational programme managers and others involved in this work.

Contextual information

UNAIDS is the Joint United Nations Programme on HIV and AIDS, which leads the global effort to provide universal access to HIV and AIDS prevention and care.

Summary/overview

Existing monitoring and evaluation guides, particularly for prevention programmes, have been developed largely with generalised epidemics in mind. The requirements of monitoring and evaluation in HIV are often different because they need to focus on geographical areas where HIV is concentrated rather than throughout the country, and on the unique needs of the most at risk populations rather than the population as a whole.

Methods and approaches for monitoring and evaluation in these populations have been developed and much of this work has been documented. However, it is not available in one place and needs to be brought together into a comprehensive review. Data collection for these groups has also been largely ad hoc, and this document highlights the importance of subnational and project-level monitoring and evaluation in order to target the most at-risk populations.

This document is the first step in this process and aims to lay out guiding principles, concepts and an organizing framework to help work in this area develop further. It also aims to bring the different methods, references and materials published separately, together into one document. It highlights the need for analytical skills for carrying out this work.

In order to investigate a problem one would start with:

  • defining what the problem is
  • exploring contributing factors
  • analysing what can be done about the problem
  • checking to see if it is working (once a response has been put in place)
  • assessing its reach
  • assessing what effect it has

This document includes methods for estimating population size (one common approach is sketched after the list below), to support identification of the problem and the situation in which it takes place, and:

  • discusses assessing the contributing factors
  • presents the monitoring and evaluation process
  • discusses methods to track programme uptake and coverage
  • describes how to assess whether an intervention is effective using outcome evaluation studies
  • discusses monitoring outcome and impact indicators, the role of surveillance, and assessing collective effectiveness through triangulation methods
  • emphasises ethics
  • emphasises the need to involve stakeholders including the community and population under study
  • emphasises the need to be sensitive to cultural norms
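
As a purely illustrative example of a population size estimation method (the framework does not prescribe this particular one), the Lincoln-Petersen capture-recapture estimator combines 2 overlapping samples of a hidden population:

\[
\hat{N} = \frac{n_1 \, n_2}{m}
\]

where $n_1$ is the number of people reached in the first sample, $n_2$ the number reached in the second, and $m$ the number appearing in both. For example, if $n_1 = 200$, $n_2 = 150$ and $m = 30$, the estimated population size is $\hat{N} = 1{,}000$.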

It states that it is not meant to be a step by step guide to carrying out evaluation.

Strengths

Strengths include:

  • intended for use worldwide and includes worldwide partners
  • wide consultation during development
  • principles relevant to other health topics
  • emphasises the need for appropriate research skills
  • illustrates theory with examples of work in the field
  • overview of main issues associated with monitoring and evaluation
  • clear about what it is and what it is not

Limitations

Limitations include:

  • specific focus on AIDS
  • an overview and therefore not intended to be used as a manual to support carrying out evaluation
  • does not discuss theoretical issues in evaluation
  • does not include cost effectiveness, resource allocation or any reference to economic evaluation
  • would need to be adapted for use in different settings


Additional comments

The framework will be tested in the field and further practical and operational guidance developed.

This operational guidance will cover differences in approach depending on context, country, group and so on. The indicators for the most-at-risk populations are included in the annexe.

WK Kellogg Foundation: evaluation handbook

The WK Kellogg Foundation evaluation handbook is written primarily for project directors who have direct responsibility for the ongoing evaluation of WK Kellogg Foundation-funded projects.

Review details

Date: 2004

Length: 120 pages

Lead author/organisation: WK Kellogg Foundation

Themes

Background to evaluation: Overview of evaluation. Pre-evaluation preparatory work: Budgeting, Contracting and communications, Logic modelling, Stakeholder involvement. Evaluation processes: Overview of evaluation processes, Defining questions, Research design and methods, Data collection, Data analysis and interpretation. Types of evaluation: Overview of types of evaluation, Process evaluation, Fidelity. Additional support: Hiring an evaluator.

Purpose and utility of guidance

The handbook provides a framework for thinking about evaluation as a relevant and useful program tool, and supports those organisations which are recipients of Kellogg Foundation funding. Its principle is that evaluation should show the effectiveness of one method of intervention over another, not just in terms of outcomes but in terms of understanding the processes that lead to those outcomes and the change achieved for individuals. Evaluation should be supportive of and responsive to projects, rather than becoming an end in itself.

Primary target audience

The handbook is written primarily for project directors who have direct responsibility for the ongoing evaluation of WK Kellogg Foundation-funded projects. However, it may also be used by other project staff who have evaluation responsibilities, for external evaluators, and for board members.

Contextual information

The WK Kellogg Foundation (WKKF), founded in 1930 as an independent, private foundation by breakfast cereal pioneer Will Keith Kellogg, is among the largest philanthropic foundations in the United States. Guided by the belief that all children should have an equal opportunity to thrive, WKKF works with communities to create conditions for vulnerable children so they can realize their full potential in school, work and life.

The guidance document starts with an overview of its underlying philosophy: critiquing the dominant ‘scientific method’ and suggesting that, in human service evaluation, many complicated factors are at play that should be considered.

The foundation suggests that evaluation should:

  • strengthen projects
  • use multiple approaches
  • address real issues (based on local circumstances and issues)
  • create a participatory process
  • allow for flexibility
  • build capacity

Summary/overview

The handbook is in 2 parts and serves as a framework for grant recipients to have both a shared vision for effective evaluation and a blueprint for designing and conducting evaluation.

Part 1: overview of evaluation including:

  • a summary of the most important characteristics of the Foundation’s evaluation approach
  • the contextual factors that have led to a focus on proving services work rather than improving them
  • an overview of the Foundation’s 3 levels of evaluation (project, cluster and program/policy making) with a particular focus on project-level

It sets out the negative consequences of ineffective evaluation:

  • a belief that there is only one way to do evaluation
  • failing to examine equally important questions (not just whether it works, but for whom and in what circumstances)
  • the difficulty of evaluating complex systems
  • losing sight of the fact that evaluation is political and value-laden

It emphasises that if evaluation is ineffective, important projects will not get the money they need because they will not be able to demonstrate their worth, and knowledge will be lost.

Part 2: provides a description of the 3 components of project-level evaluation: the context, the implementation and the outcome and also provides logic model examples to show how these link. It then takes users through each of the steps to carry out an evaluation:

  • preparing
  • identifying stakeholders
  • establishing a team
  • developing the questions
  • budgeting
  • selecting an evaluator
  • designing and conducting evaluation
  • choosing data collection methods
  • collecting data
  • analysing and interpreting
  • communicating findings and using the results

Strengths

Strengths include:

  • gives good insight into the philosophy and history of evaluation and how this influences how it is carried out, which is not often covered in evaluation guides
  • discusses the importance of evaluation for using resources most efficiently
  • committed to improving programs rather than proving that a program works or doesn’t work
  • promotes community/stakeholder involvement
  • discusses epistemological underpinnings
  • differentiates between types of evaluation (context, implementation, and outcome)
  • provides detailed consideration of the 9 steps that are considered crucial for evaluation
  • uses real case studies from the foundation to demonstrate issues
  • includes the cost of evaluation
  • links project evaluation to project outcome and shows how if evaluation is built in at the beginning it will improve the project
  • emphasises the policy context and the importance of placing evaluation in this
  • encourages people to influence policy from evaluation results
  • emphasises the importance of improvement for the individual rather than just the project and alongside this, the need to include different stakeholders so that one perspective is not favoured
  • emphasises how projects can build individuals’ capacity (both staff and clients) and that this should be reflected in the evaluation
  • discusses the concept of perspective and how an improvement from a government’s perspective would be different from that of an individual’s perspective and how to factor this into indicators and outcomes
  • states that evaluations should be reflexive, flexible and participatory
  • shows that evaluation is not value free
  • provides worksheets to support programme evaluation

Limitations

Limitations include:

  • may provide too much detail for someone looking for a general overview (for example, epistemological underpinnings)
  • focuses largely on complex evaluations of community projects (as opposed to relatively simple impact evaluations)
  • assumes a large budget/multiple resources will be available
  • US focused

Additional comments

Kellogg also notes that the handbook can be used by smaller projects.

World Health Organisation: making choices in health - guide to cost-effectiveness analysis

The WHO guide to cost-effectiveness analysis aims to show how to assess whether the current mixture of interventions is efficient, how it compares to a proposed measure and how to maximise the generalisability across settings.

Review details

Date: 2003

Length: 329 pages

Lead author/organisation: World Health Organisation

Themes

Background to evaluation: Overview of evaluation. Pre-evaluation preparatory work: Budgeting, Needs assessment, Evaluation planning, Logic modelling, Stakeholder involvement. Evaluation processes: Defining questions, Choosing outcomes, Describing the intervention, Research design and methods, Data collection, Data analysis and interpretation. Additional support: Tools and toolkits.

Purpose and utility of guidance

For researchers and policymakers to compare the cost-effectiveness of new interventions with that of current interventions, and to support them in generalising the findings to other interventions in order to reduce the costs of such cost-effectiveness analyses. It is intended to be complementary to existing guidelines on cost-effectiveness analysis.

Primary target audience

The intended audience is policy makers and researchers working in this area (there is an assumption that these people have some knowledge of economics).

Contextual information

WHO is the authority for health within the United Nations system. It is responsible for providing leadership on global health matters, shaping the health research agenda, setting norms and standards, articulating evidence-based policy options, providing technical support to countries and monitoring and assessing health trends.

Summary/overview

The problem with traditional (incremental) cost-effectiveness analysis (CEA) is that it compares new interventions with current interventions but does not address whether the current mix of interventions is itself an efficient use of resources. Nor does it examine the assumption that the additional resources required would need to come from another sector.

In addition, it is expensive to carry out cost-effectiveness analyses when there are a large number of interventions, which means that poorer countries are often unable to do so. Cost-effectiveness analyses should also consider the whole system: costs incurred elsewhere in society, effects on different social groups, ethical issues and effects on future generations. This is rarely done.

Therefore, this guide aims to show how to assess whether the current mixture of interventions is efficient, how it compares to a proposed measure and how to maximise the generalisability across settings. In the annexe it also reproduces papers published elsewhere which provide background to the technical issues discussed. The chapters are:

  1. What is Generalized Cost-Effectiveness Analysis?
  2. Overall study design
  3. Estimating costs
  4. Estimating health effects
  5. Discounting
  6. Uncertainty in cost-effectiveness analysis
  7. Policy uses of Generalized CEA
  8. Reporting CEA results
  9. Recommendations

The second part of the guide includes papers published elsewhere which consider these issues and the annexe includes papers where these methods have been used in practice and the ethical issues in this area.
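
As an illustrative sketch of the distinction (the notation is generic, not taken from the guide), incremental CEA compares a new intervention $i$ with current practice, whereas generalised CEA compares each intervention with the null of doing nothing, with future costs discounted at rate $r$ as discussed in chapter 5:

\[
\text{ICER} = \frac{C_i - C_{\text{current}}}{E_i - E_{\text{current}}}, \qquad
\text{GCEA ratio} = \frac{C_i - C_{\text{null}}}{E_i - E_{\text{null}}}, \qquad
C_i = \sum_{t=0}^{T} \frac{c_{i,t}}{(1+r)^t}
\]

where $C$ and $E$ are total costs and health effects, and $c_{i,t}$ is the cost of intervention $i$ in year $t$.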

Strengths

Strengths include:

  • comprehensive guide to cost-effectiveness analysis which supports policymakers in making choices between health interventions
  • supports assessment of the efficiency of the current system and recommends that costs are considered against ‘the null’ rather than the current system
  • shows how to generalise findings, which reduces the resources spent on these analyses and allows poorer countries to use findings in their own settings
  • looks at the wider health and societal system and considers issues of equality, ethics, discounting, external factors for health and the effects on future generations
  • includes background papers which discuss aspects of this analysis in more detail
  • clear useful recommendations which can stand alone

World Health Organisation: evaluation practice handbook

The WHO evaluation practice handbook offers practical guidance on preparing for and conducting evaluations in WHO, and on using and following up their results. It is in 3 parts.

Review details

Date: 2013

Length: 161 pages

Lead author/organisation: World Health Organisation

Themes

Pre-evaluation preparatory work: Contracting and communications, Ethics. Types of evaluation: Economic evaluation.

Purpose and utility of guidance

Offers comprehensive information and practical guidance on how to prepare for and conduct evaluations in WHO, and guidance on using and following up the results and recommendations. Specifically aimed at those working with and for WHO but could be used by anyone involved in evaluation.

Primary target audience

All WHO staff and partner organisations that plan, manage, conduct or are involved in WHO programmes and evaluations.

It also targets networks such as WHO’s senior management and the Global Network on Evaluation (GNE), who should disseminate it and promote evaluation throughout the organisation.

Contextual information

WHO is the authority for health within the United Nations system. It is responsible for:

  • providing leadership on global health matters
  • shaping the health research agenda
  • setting norms and standards
  • articulating evidence-based policy options
  • providing technical support to countries
  • monitoring and assessing health trends

This evaluation guide complements the World Health Organisation’s evaluation policy (set out in the annexe of this guide). It aims to support streamlining of evaluation and provide step by step support to evaluation in WHO. It is a working tool which will be adapted to support evolving practice. In addition, there are plans for an e-learning guide to accompany this.

Summary/overview

This handbook is in 3 parts.

The first covers the definition, objectives, principles and management of evaluation in WHO.

The second provides practical guidance on preparing for and conducting an evaluation in compliance with WHO’s evaluation policy. It goes through this step by step: from planning, to carrying out, to reporting and disseminating findings and ensuring these contribute to ongoing improvement.

The third part includes the annexes which provide further information on the guide and support for evaluation. It also provides operational guidance and templates.

Strengths

Strengths include:

  • comprehensive and detailed evaluation guide, including information on what evaluation is as well as detailed support on how to do it
  • useful for evaluation in any context
  • appendices give helpful additional information including roles and responsibilities for evaluation and other types of assessment that are not evaluation
  • an online learning tool to support the handbook is planned
  • as part of disseminating findings, it emphasises capacity-building approaches post-evaluation
  • emphasises evaluation as part of ongoing improvement and relates it to future planning and commitment to action

Limitations

Limitations include:

  • it would have been useful to have seen worked examples of WHO evaluations in practice using this guide, or even concrete examples of where different sections could be used in specific WHO programmes
  • support from experienced evaluators would be helpful alongside this document
  • it does not include evaluation theory
  • the information on measuring impact is limited


Additional comments

There are many examples of WHO evaluations on the internet which can complement this.

The learning tool designed to accompany the handbook does not seem to be available as yet, but it is worth looking out for.