Guidance

Glossary

Published 7 August 2018

This guidance was withdrawn on

For up-to-date guidance see Evaluation in health and wellbeing.

A

Action research

Action research is research about a social system that attempts to change that system while simultaneously studying it. It uses a cyclic or spiral process that alternates between action and critical reflection, with later cycles continuously refining methods, data and interpretation in the light of the understanding developed in earlier cycles.

Accountability

Responsibility for actions, and for justifying the use of resources satisfactorily to relevant parties, generally the public or funding bodies.

Adaptations

Alterations made to an intervention in order to fit a different context.

Anecdotal evidence

This is when stories about individuals or personal experience are used as evidence rather than more rigorous research evidence.

Appreciative enquiry

This is a change management approach. It focuses on what is working well, exploring why, and doing more of it. Its premise is that an organisation becomes good at what it focuses on. If it focuses on problems, it will become good at solving problems. If it wants to build strength, it should focus on strengths.

Attribution

The establishment of a causal link between an intervention and an outcome.

B

Baseline data

Data that are collected at the beginning of an evaluation before an intervention has been implemented. These are compared with data collected at the end to determine if an intervention has had an effect.

Bias

A systematic error or deviation in results or inferences from the truth.

C

Case control study

A study that compares people with a specific disease or outcome of interest (cases) to people from the same population without that disease or outcome (controls), and which seeks to find associations between the outcome and prior exposure to particular risk factors. This design is particularly useful where the outcome is rare and past exposure can be reliably measured. Case control studies are usually retrospective, but not always.

Categorical data

Data that are classified into 2 or more non-overlapping categories. Race and type of drug (for example, aspirin, paracetamol) are categorical variables. If there is a natural order to the categories, for example, non-smokers, ex-smokers, light smokers and heavy smokers, the data are known as ordinal data. If there are only 2 categories, the data are dichotomous data.

Causation/causality

This is the relationship between one event and another that can be demonstrated to be due to cause and effect. Evaluations generally aim to measure whether outcomes are caused by interventions rather than being due to another external cause.

Cluster evaluation

Looks across a group of projects to identify common threads and themes. It seeks to determine impact through aggregating outcomes from multiple sites or projects, whereas multisite evaluation seeks to determine outcomes through aggregating indicators from multiple sites.

Cohort study

An observational study in which a defined group of people (the cohort) is followed over time. The outcomes of people in subsets of this cohort are compared, to examine people who were exposed or not exposed (or exposed at different levels) to a particular intervention or other factor of interest.

Comparison group

A comparison group is chosen in research studies to be similar to the experimental (also called intervention) group in major variables, except for the variable being tested (for example, the intervention). Comparison groups are commonly used in quasi-experimental designs.

Complex intervention

An intervention comprising multiple components which interact to produce change. The complexity may also relate to the number of organisational levels targeted or the range of possible outcomes.

Contribution analysis

This approach aims to assess which of the factors in a programme contribute towards the outcome. It assesses causal questions and infers causality in real-life programme evaluations. It is designed to help managers, researchers, and policy makers arrive at conclusions about the contribution their programme has made (or is currently making) to particular outcomes.

Control group

A control group does not receive the services, products or activities of the intervention being evaluated, so that the resulting differences between the groups can be compared. Often, the control group is matched to the intervention group on relevant factors such as age, gender or health status, so that any difference observed between the groups can be more confidently attributed to the intervention.

Cost benefit analysis

An economic analysis that converts effects into the same monetary terms as costs and compares them. The aim of this analysis is to help decision making, by giving a monetary value to the costs and benefits associated with the decision.
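
As an illustrative sketch (not part of the original guidance), the comparison can be summarised as a net present value (NPV):

    NPV = Σ over years t of (B_t − C_t) / (1 + r)^t

where B_t and C_t are the monetised benefits and costs in year t and r is the discount rate (see Discounting). An option with a positive NPV delivers more in benefits than it costs.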

Cost effectiveness analysis

Analyses that compare the costs of alternative ways of producing the same or similar outputs. It aims to assess how to achieve results in the most effective and efficient way.

Cost minimisation analysis

A type of economic evaluation used when 2 interventions or services have the same benefit, so they are compared in terms of their resource use or costs.

Cost utility analysis

A particular type of cost effectiveness analysis, in which the benefits are measured in terms of quantity and quality of life.

Cross over trial

A type of clinical trial comparing 2 or more interventions in which the participants, upon completion of the course of one treatment, are switched to another. For example, for a comparison of treatments A and B, the participants are randomly allocated to receive them in either the order A, B or the order B, A. This design is particularly appropriate for the study of treatment options for relatively stable health problems. The time during which the first intervention is taken is known as the first period, with the second intervention being taken during the second period.

D

Democratic evaluation

Democratic evaluation is an approach where the aim of the evaluation is to serve the whole community. It focuses on including people, sharing information, and encourages participation and collaboration. It helps to improve public accountability and transparency.

Dependent variable

The outcome or response that results from changes to an independent variable.

Discounting

Discounting is a method that adjusts for the fact that people value outcomes occurring in the future less than those happening sooner. It also takes into consideration the opportunity costs of doing something now.
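
As a worked illustration (not part of the original guidance), a value arising t years in the future is converted to a present value using a discount rate r:

    present value = future value / (1 + r)^t

For example, at a discount rate of 3.5%, a benefit worth £1,000 received in 10 years' time has a present value of about £1,000 / 1.035^10 ≈ £709.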

Dissemination

Sharing results with appropriate people.

E

Economics

The branch of knowledge concerned with supply and demand or the production, consumption, and transfer of commodities. Supports the allocation of scarce resources that could have alternative uses.

Economic evaluation

In public health, this aims to help decision makers allocate healthcare resources, set priorities and shape health policy, so that the best outcomes can be achieved for the least amount of resources.

Effectiveness

The extent to which a specific intervention, when used under ordinary circumstances, does what it is intended to do.

Efficiency

Efficiency is an assessment of how well resources are used to achieve a desired outcome. The capacity of an intervention to generate change under ideal or controlled circumstances is, by contrast, its efficacy.

Ethics

Ethics deals with moral principles and the protection of participants. An important feature of ethics in evaluation is that the participant should not be harmed by their involvement.

Evaluability

Extent to which an intervention or project can be evaluated in a reliable and credible fashion.

Evaluation plan

An evaluation plan is a written document that describes how you will manage and evaluate the intervention, and how the results will be used to improve the intervention and decision making.

Evidence base

The research that is available to support and/or direct an approach.

External evaluation

Evaluation carried out by people who are not involved in the project or programme in question. This can be considered to be more objective but can also be less successful in engaging participants in any resultant change.

External validity

This is the extent to which the findings of a study can be generalised to other situations and to other people.

F

Fidelity of delivery

Fidelity refers to whether an intervention was delivered as planned. The effectiveness of an intervention may be reduced or eliminated if it is not delivered according to the plan or protocol based on the logic model.

Formative evaluation

Evaluation that takes place before or during a project’s implementation or when it is forming, with the aim of improving the project’s design and performance as it develops (see Summative evaluation).

G

Generalisability (external validity)

This determines when and to what extent results gained in one setting or for one population can be used in another and, if not, what adaptations might have to be made to replicate the results.

H

Hawthorne effect

The Hawthorne effect (also referred to as the observer effect) is when individuals modify an aspect of their behaviour in response to their awareness of being observed.

Health impact assessment (HIA)

This brings together scientific data, public health expertise, and stakeholder input to identify the potential health effects of a proposed policy, plan, or project. An HIA offers practical recommendations for ways to reduce risks and increase opportunities to improve health.

Horizontal evaluation

This is when evaluation is carried out by participants and their peers. Its aim is to neutralise power relationships and create better learning and improvement by having the evaluation embedded in the work.

I

Impact

A result or effect that is caused by a project, programme, service or intervention. These are generally effects that occur in the medium to long term and can be intended or unintended and positive or negative.

Impact evaluation (see ‘Process evaluation’)

Impact evaluations focus upon the positive and negative effects of an intervention. They explore whether the intervention has achieved its stated objectives as well as whether there were any unintended consequences. These unintended consequences can be both negative and positive.

Impartiality

Impartiality (also called even-handedness or fair-mindedness) aims to reduce bias by taking into consideration the views of different stakeholders and reporting on differences in perspective when these arise. It is a principle holding that decisions should be based on objective criteria, rather than on bias, prejudice, or preferring the benefit of one person over another for improper reasons.

Implementing/Implementation

This is when the findings of the evaluation are put into practice to improve the service.

Incremental cost effectiveness ratio (ICER)

The ICER is the difference in cost between 2 interventions divided by the difference in their effect, usually expressed as the additional cost of gaining an extra quality-adjusted life year (QALY). This approach allows for easy comparison across different types of health outcomes. The use of incremental cost utility ratios enables the cost of achieving a health benefit by treatment with a drug to be assessed against similar ratios calculated for other health interventions (for example, surgery or screening by mammography).
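
As a sketch (not part of the original guidance), for a new intervention (1) compared with a current alternative (0), with costs C and effects E measured in QALYs:

    ICER = (C1 − C0) / (E1 − E0)

For example, if the new intervention costs £3,000 more and yields 0.5 more QALYs, the ICER is £3,000 / 0.5 = £6,000 per QALY gained.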

Indicators

These are proxy measures of outcomes. For example, a policy which aims to reduce health inequalities may choose as an indicator the number of people on benefits in an area. After the policy is implemented to reduce inequality, this would be expected to decrease.

Independent variable

An exposure, risk factor, or other characteristic that is hypothesised to influence the dependent variable.

Inputs

In evaluation terms, these are the resources required to achieve the policy objectives.

Internal evaluation

This is when people who are working on a project are responsible for carrying out the evaluation rather than contractors.

Intervention group

A group of participants in a study receiving a particular health care intervention.

Intervention mapping

Intervention mapping (IM) is a framework for designing and evaluating interventions, especially those used in health promotion. IM consists of 6 steps.

  1. A needs assessment of the problem. This establishes for whom and in what context change is needed, and what benefits will follow. This allows specification of the intervention goals and the outcomes that matter to effectiveness. It is likely that collaboration with services and receivers of services will optimise this planning stage.

  2. Defining the logic model involves linking intervention outcomes to the processes that generate those outcomes. The logic model identifies the mechanisms of change by which the intervention is thought to work and may draw on a variety of theories.

  3. Intervention design involves selecting change techniques that have been shown to have the capacity to change the processes identified in the logic model. The intervention design also involves selecting the delivery mode for these techniques, that is, how change techniques will be embedded in the intervention materials and operation.

  4. Once the intervention is designed, the necessary materials need to be produced, pilot tested and refined.

  5. The intervention needs to be implemented, and this needs to be carefully planned with those who will deliver and receive the intervention.

  6. Running throughout this planning process is evaluation. So as intervention design proceeds through these stages, it sets the parameters for evaluation. For example, setting the intervention goals includes defining the outcomes measured in an outcome evaluation. Developing a logic model highlights the generative processes that a process evaluation will assess. So the evaluation plan evolves as designers work through the IM process.

L

Logic model

A logic model is a systematic and visual way to present the sequence of related events which connect the changes that will be made to the desired outcomes. It illustrates the mechanisms of change that the effectiveness of the intervention will depend on.

M

Market failure

This is when the market fails to provide goods or services in an efficient way. This is often the reason for governments to intervene. For example, street lighting has to be provided by the government because the market would not provide this.

Meta-analysis

The use of statistical techniques in a systematic review to quantitatively integrate the results of a series of primary studies. Meta-analyses allow evaluations to be compared in terms of the standardised effect sizes observed for interventions, so identifying the most and least effective interventions.
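
As an illustration (not part of the original guidance), a simple fixed-effect meta-analysis pools the effect estimate θ_i from each study i by weighting it by the inverse of its variance:

    pooled effect = Σ (w_i × θ_i) / Σ w_i, where w_i = 1 / var(θ_i)

so larger, more precise studies contribute more to the pooled estimate.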

Mixed methods

A study that uses more than one type of research method. The methods may be a mix of qualitative and quantitative methods, a mix of quantitative methods or a mix of qualitative methods.

Monitoring

Routine and systematic collection of information and checking against a plan. The information might be about activities, products or services, or about outside factors affecting the organisation or project.

N

Needs assessment

This is carried out to identify priorities, make improvements to an organisation or identify gaps in service provision in order to allocate resources.

Non-randomised study

Any quantitative study estimating the effectiveness of an intervention that does not use randomisation to allocate individuals to conditions.

O

Objectives

The goals of the intervention or policy set out at the beginning of the evaluation.

Observational study

A study in which the investigators do not intervene, but observe the course of events. Changes or differences in specified characteristics (for example, whether or not people received the intervention of interest) are studied in relation to changes or differences in other characteristic(s) (for example, whether or not they died).

Opportunity costs

This is a major part of economics. It considers not just the costs of doing something but also the other opportunities that have to be given up as a result. For example, if I buy a car and then cannot go on holiday for 3 years, the opportunity cost of the purchase of a car is the lack of holidays.

Optimism bias

Planners and appraisers in both the public and private sectors tend to overstate benefits and underestimate timings and costs. Appraisal adjusts for this optimism bias by increasing estimates of timing and cost and reducing estimates of benefits.

Option appraisal

An options appraisal compares the implications of doing nothing against the different options for doing something. It includes the advantages, disadvantages and costs of each option and makes a recommendation based on this.

Outcome evaluation

Outcome, or effectiveness, evaluation focuses on the endpoint of something. In other words, it is interested in programme effects, assessing progress on the outcomes or outcome objectives that the programme was designed to achieve, such as improvement in function, recovery or survival of a patient after an operation, rather than the structure or process that leads to it.

Outcome indicators

An outcome indicator is a specific, observable and measurable characteristic or change that represents achievement of the outcome.

Outputs

Outputs are the accomplishment or product of the activity. For example, number of workshops actually delivered, number of individuals who heard the media message, among others. They relate to ‘what we do and who we reach’ whereas outcomes refer to ‘what difference there is’.

P

Participatory evaluation

Participatory evaluation actively engages stakeholders at all stages of the evaluation, from defining the questions to putting the results into action. This method is often used in community projects and in developing countries, though it is useful for all kinds of evaluation as it encourages more effective stakeholder engagement. It is concerned with:

  • creating a more equal process, where programme participants have an equal role to other perspectives such as funders and evaluators
  • ensuring that the evaluation process and results are relevant to the community
  • ensuring result ownership so that any necessary changes will occur

Pilot

This technique is used to test materials or services on a smaller sample or scale, to see how they work and where potential problems lie before putting them into practice fully.

Process evaluation

Process evaluations aim to provide the more detailed understanding needed to inform policy and practice. This is achieved through examining: implementation (the structures, resources and processes through which delivery is achieved, and the quantity and quality of what is delivered); mechanisms of impact (how intervention activities, and participants’ interactions with them, generate change – see logic model); and context (how external factors influence the delivery and functioning of interventions).

Q

Quality-adjusted life years (QALYs)

QALYs are used to assess value for money in economic evaluations of health-related interventions. They measure health as a combination of the duration of life and the health-related quality of life. So 1 QALY would represent a year of perfect health, while 0 would represent death. A negative number would represent living with a quality of life judged to be worse than death. Using QALYs, researchers can specify how much per QALY an intervention costs. This allows different interventions to be compared on price per QALY.
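
As a worked illustration (not part of the original guidance): 2 years lived at a health-related quality of life of 0.75 equals 2 × 0.75 = 1.5 QALYs. If an intervention costing £6,000 generates those 1.5 QALYs over and above the alternative, it costs £6,000 / 1.5 = £4,000 per QALY gained.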

Qualitative research methods

Qualitative research aims to understand or explore elements of social life. Methods (in general) generate words, rather than numbers, as data for analysis.

Quantitative research methods

These research methods involve gathering data in numeric form. Methods include questionnaires, surveys and administrative data. Data are summarised and analysed using statistical techniques (for example, means, multiple regressions, ANOVA - Analysis of variance).

R

Randomised controlled trials (RCTs)

RCTs are used to see whether an intervention has an effect. This is done by allocating people randomly into 2 groups, one that receives the intervention and one that does not. The results are compared to see what the effects of the intervention are.
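
The allocation step can be sketched in a few lines of code. This is a hypothetical illustration, not part of the original guidance; the function name and participant labels are invented for the example.

    # Simple random allocation of participants into 2 arms (Python).
    import random

    def randomise(participants, seed=None):
        # Shuffle a copy of the participant list, then split it in half:
        # the first half forms the intervention arm, the rest the control arm.
        rng = random.Random(seed)
        shuffled = list(participants)
        rng.shuffle(shuffled)
        half = len(shuffled) // 2
        return shuffled[:half], shuffled[half:]

    intervention, control = randomise(["P01", "P02", "P03", "P04", "P05", "P06"], seed=42)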

Rapid-cycle evaluation

Rapid-cycle evaluation uses a rigorous, scientific approach to provide decision makers with timely evidence which can be acted upon immediately so that changes can improve programme outcomes. Often, changes can be tested in a few months, and decision makers can have a high degree of confidence in the results. Rapid-cycle evaluation can also help avoid investments in changes that are unlikely to produce the desired results.

Reach

The extent to which a target audience comes into contact with the intervention.

Realist evaluation

Rather than carrying out an evaluation and assessing whether an intervention or policy works or doesn’t work, realist evaluation asks what works, for whom and under what circumstances. In this way it gives insight into what is appropriate for a particular group, context or setting. It also builds on previous evaluations to give a fuller picture of when something works.

Reliability

This is a judgement about whether, if the research or evaluation were repeated, the results would be the same.

Resources

These are time, money, staff or other materials.

Risk assessment/analysis

An analysis or assessment of factors that affect, or are likely to affect, the successful achievement of an intervention’s objectives. As a result, you would then determine the risks and what you would do to mitigate them.

S

Stakeholder

Stakeholders are all of the people that have a direct or indirect interest in a project, programme, or policy and any related evaluation. This could include the community, groups and staff as well as budget holders and government.

Stakeholder engagement

Involving stakeholders in each stage of an evaluation to improve its effectiveness and likely implementation.

Statistical power

Statistical power is the likelihood that a study will be able to detect any effect present. In order to accurately analyse whether there are differences between 2 groups who are being compared, the sample size needs to be big enough. Power calculations allow us to assess what sample size is necessary to show an effect.
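
The sample size calculation can be sketched with the standard normal-approximation formula for comparing 2 group means. This is an illustrative example, not part of the original guidance; the function name is invented and the scipy library is assumed to be available.

    # Normal-approximation sample size for comparing 2 group means (Python).
    import math
    from scipy.stats import norm

    def sample_size_per_group(effect_size, alpha=0.05, power=0.8):
        # effect_size is the standardised difference between groups (Cohen's d).
        z_alpha = norm.ppf(1 - alpha / 2)  # critical value for a two-sided test
        z_beta = norm.ppf(power)           # quantile matching the desired power
        n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
        return math.ceil(n)                # round up to whole participants

    # A medium effect (d = 0.5) at 5% significance and 80% power: about 63 per group.
    print(sample_size_per_group(0.5))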

Summative evaluation

This is the evaluation of an intervention or programme in its later stages or after it has been completed to (a) assess its impact, (b) identify the factors that affected its performance, (c) assess the sustainability of its results, and (d) draw lessons that may inform other interventions.

Systematic review

Systematic reviews seek to collate all evidence that fits pre-specified eligibility criteria in order to address a specific research question. They aim to minimise bias by using explicit, systematic methods to collate literature; methods that can be replicated by other researchers.

T

Theory of change

A ‘theory of change’ or theory of mechanism is a comprehensive description of how and why change is expected to happen within a particular context. A logic model represents the theory of change underpinning intervention design.

Theory-based evaluation

Theory-based evaluation approaches involve understanding, systematically testing and refining the assumed connection (that is, the theory) between an intervention and the anticipated impacts.

Triangulation

Collecting data from multiple sources to establish validity.

U

Unintended consequences

Outcomes of an intervention which were not part of the aims or objectives. These can be positive or negative.

Acknowledgements

This work was partially funded by the UK National Institute for Health Research (NIHR) School for Public Health Research, the NIHR Collaboration for Leadership in Applied Health Research and Care of the South West Peninsula (PenCLAHRC) and by Public Health England. However, the views expressed are those of the authors.

Written by Margaret Callaghan, Krystal Warmoth, Sarah Denford and Charles Abraham. Psychology Applied to Health, University of Exeter Medical School.