Effectiveness
Best practice for assessing the effectiveness of a screening programme
A screening programme should only be recommended and implemented if evidence shows that the planned screening pathway, including further tests and treatment, will do more good than harm at reasonable cost. Based on this evidence, it is expected that the screening programme will be effective in achieving its objectives.
The health benefits, harms and costs of a screening programme are not constants, and may change over time. In addition to periodic reviews of the available evidence, it is good practice to re-evaluate the screening programme at planned intervals to assess the extent to which it has remained effective in achieving its objectives following implementation.
Each screening programme has a primary aim or objective, which varies according to the condition being screened for. For some screening programmes the aim will be to provide the screened cohort with more information to make informed decisions, while for others it will be to reduce the transmission of a disease or the morbidity and/or mortality of the condition.
Effectiveness can be defined as a measure of how successful the screening programme is at achieving its objective.
Each screening programme must balance the benefits, costs and harms of screening for a condition. The factors which influence the benefits, costs and harms may change over time, impacting the programme’s effectiveness.
New tests, therapies or vaccines are regularly developed and introduced. These may lessen the need for a screening programme for a particular condition, or reduce the effectiveness of screening for a subsection of the cohort – for example by impacting on a certain age group. The prevalence of a condition may also change over time due to healthcare advances, population or lifestyle changes and other factors. This can reduce the potential benefits of screening and result in a programme no longer being cost-effective.
Over time, changes may be needed to maintain the screening programme’s effectiveness, for example redefining the cohort, introducing a new test, or updating the screening pathway. If this requirement is not recognised, a screening programme which was effective when implemented may become ineffective in later years, resulting in a failure to achieve its objectives and increased harms to the target population.
A process should be in place for periodically assessing a screening programme’s effectiveness – to identify areas for change or improvement and make sure the screening pathway and processes remain valid and effective.
An evaluation of effectiveness is likely to require the review of a range of data. This will vary according to the condition being screened for, and the screening pathway provided. Common to most programmes will be a set of screening standards which quality assure the entire screening pathway and support continuous improvement. Performance data linked to these standards is an important source of information about the effectiveness of the programme and should be closely examined.
Programme standards data should not be considered in isolation. A screening programme which consistently achieves its performance targets while morbidity and mortality due to the condition increase may be considered ineffective. Conversely, if a reduction in morbidity and mortality is seen alongside a poorly performing screening programme, there may be other reasons for the improvement in disease outcomes. Performance data should be viewed as part of a wider landscape, which includes disease prevalence, treatment outcomes and other relevant information.
It is also important to consider the acceptability and equity of the screening programme. The number of people accepting the invitation to be screened can change over time, and this change may be more significant in certain sections of the screened population. Examining the experience and behaviour of the screening cohort across the entire screening pathway may reveal areas of ineffectiveness and inequality among specific groups.
Useful factors to consider as part of an effectiveness review may include the following (a brief worked example of how some of these measures can be calculated follows the list):
- coverage of the screening test
- uptake of the screening test
- number and proportion of people who screen positive from the screening test
- rate of overdiagnosis
- number of false positive and false negative results
- referral time to specialist or diagnostic services
- referral time to treatment
- treatment outcomes
- mortality due to the condition
- morbidity due to the condition
- prevalence of the condition
- early diagnosis / burden of disease
- cost-effectiveness of screening
- health inequalities
- patient experience and effect of screening on quality of life
- IT provision and data quality
- workforce competency
- comparison with other regions or countries (both those which screen for the condition and those which do not)
- improvements in treatment and/or prevention of the condition
- prevalence of risk factors for the condition
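Several of these factors are simple proportions calculated from routine programme counts. The following minimal sketch uses hypothetical counts and invented figures, not data from any specific programme, to illustrate how uptake, coverage, the screen-positive proportion, positive predictive value and sensitivity might be calculated.

```python
# Minimal sketch of some common effectiveness measures.
# All counts below are hypothetical and for illustration only.

eligible_population = 100_000   # people eligible for screening in the period
invited = 95_000                # people invited to screening
screened = 70_000               # people adequately screened
screen_positive = 1_400         # positive screening results
true_positive = 420             # screen positives confirmed at diagnosis
false_negative = 60             # cases diagnosed later despite a negative screen

coverage = screened / eligible_population    # proportion of eligible people screened
uptake = screened / invited                  # proportion of invited people screened
positive_rate = screen_positive / screened   # proportion screening positive
ppv = true_positive / screen_positive        # positive predictive value
sensitivity = true_positive / (true_positive + false_negative)

print(f"Coverage:        {coverage:.1%}")
print(f"Uptake:          {uptake:.1%}")
print(f"Screen positive: {positive_rate:.2%}")
print(f"PPV:             {ppv:.1%}")
print(f"Sensitivity:     {sensitivity:.1%}")
```

Measures such as overdiagnosis and cost-effectiveness cannot be read directly from programme counts in this way and usually require dedicated analysis or published research.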
A comprehensive effectiveness review should aim to consider data from a range of sources. In addition to screening standards performance data, the screening programme may hold other useful data, either locally or nationally, in the form of audits, service evaluations and incident reports.
External data sources may include:
- hospital admissions data
- diagnostic data
- outcomes data from treatment centres
- national mortality statistics
- national birth statistics (for antenatal and newborn screening programmes)
- qualitative data on patient experience from surveys or published research
- cost-effectiveness data from published research and reports
- World Health Organization (WHO) international data
- regional health organisation data
An effectiveness review should examine the available data to assess the effectiveness of the screening programme over a set period of time. This may be the period since implementation, a defined period (for example 5 or 10 years), or the period for which data is available. Time periods of less than 3 years are not generally recommended, as it may be difficult to draw firm conclusions from a shorter period. If possible, data from before the screening programme was implemented should also be analysed to assess the impact of introducing the programme.
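As a minimal sketch of the kind of before-and-after comparison described above, the example below compares mean annual condition-specific mortality rates for the years before and after a hypothetical implementation year. Both the rates and the implementation year are invented for illustration, and a simple comparison of this kind cannot by itself attribute any change to screening.

```python
# Hypothetical annual mortality rates per 100,000 for the condition.
# The figures and the implementation year are invented for illustration.
annual_mortality = {
    2008: 9.1, 2009: 9.0, 2010: 8.8, 2011: 8.9,   # before implementation
    2012: 8.4, 2013: 7.9, 2014: 7.5, 2015: 7.1,   # after implementation
}
implementation_year = 2012

before = [rate for year, rate in annual_mortality.items() if year < implementation_year]
after = [rate for year, rate in annual_mortality.items() if year >= implementation_year]

mean_before = sum(before) / len(before)
mean_after = sum(after) / len(after)
relative_change = (mean_after - mean_before) / mean_before

print(f"Mean annual mortality before: {mean_before:.1f} per 100,000")
print(f"Mean annual mortality after:  {mean_after:.1f} per 100,000")
print(f"Relative change:              {relative_change:+.1%}")
```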
The effectiveness review should be distinct from a performance review or quality inspection. Its purpose is not to hold the screening programme to account for its performance against set targets. Its purpose is to conduct an open and transparent review which evaluates the impact of the screening programme in terms of the condition being screened for, and the extent to which it achieves its primary objective.
The process should be inclusive, drawing on the expertise and experience of a range of stakeholders. It is recommended that an effectiveness review advisory group should be established and maintained for the lifetime of the project. Depending on the agreed scope of the review, it may take anything from a few weeks to a year or more to complete the review process. Ongoing support of stakeholders is important for the success of the project.
It is best practice to draw on a broad range of expertise when conducting an effectiveness review. Many stakeholders can provide context on the screening programme or the condition that will be vital to the review. An advisory group brings together a breadth of experience, expertise, knowledge and perspective to provide a forum for discussion to inform and guide the effectiveness review. Including, from the outset, stakeholders who are directly responsible for data collection and analysis for the screening programme will enable data requests to better reflect the data and information actually available. Other stakeholders within an advisory group may include:
- operational screening staff
- clinicians
- members of national organisations or associations
- public health professionals
- academics or researchers
- quality assurance staff
The group should:
- provide expertise and advice on how best to measure effectiveness for the screening programme in question
- agree the scope of the review
- help identify relevant data sources
- support data collection
- provide ongoing feedback on progress
- provide expert input into the review’s final report
The effectiveness review team should include data analysts who collect, collate and analyse the data that ultimately informs the findings of the review.
Once the data metrics needed to assess effectiveness have been determined and agreed (with the effectiveness review advisory group, if applicable), data collection can begin.
If the team conducting the effectiveness review holds the data, then it can be extracted immediately. If the data is held by other organisations, then formal data requests to access the data will probably be needed. There may be specific data request documents that need to be completed and submitted to obtain data from external organisations, and the provision of data templates may be useful. The time taken for external organisations to fulfil data requests can vary, so this should be factored into project planning.
Once the data has been collected, checked, cleaned and formatted, presenting it to the expert advisory group for discussion can be a valuable step. It provides the opportunity for feedback, and for experts to explain certain trends or gaps in data. The format the data is presented in can be changed accordingly and any insights generated can be captured in the written report alongside the data.
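The checking and cleaning step can be as simple as confirming that submitted figures are complete, within plausible limits and internally consistent before they are summarised for the advisory group. The sketch below shows the kind of basic checks that might be applied; the field names, values and rules are assumptions for illustration, not a prescribed format.

```python
# Minimal sketch of basic validation checks on collected programme data.
# Field names and values are assumptions for illustration only.
records = [
    {"year": 2021, "invited": 95_000, "screened": 70_000, "screen_positive": 1_400},
    {"year": 2022, "invited": 96_500, "screened": None,   "screen_positive": 1_350},
    {"year": 2023, "invited": 97_000, "screened": 99_000, "screen_positive": 1_500},
]

issues = []
for record in records:
    year = record["year"]
    # Completeness: flag any missing values before further checks.
    missing = [field for field, value in record.items() if value is None]
    if missing:
        issues.append(f"{year}: missing values for {', '.join(missing)}")
        continue
    # Internal consistency: screened people cannot exceed those invited,
    # and screen positives cannot exceed those screened.
    if record["screened"] > record["invited"]:
        issues.append(f"{year}: screened exceeds invited")
    if record["screen_positive"] > record["screened"]:
        issues.append(f"{year}: screen positives exceed screened")

for issue in issues:
    print("Check:", issue)
```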
The goal of the effectiveness review should be the production of a final written report which describes the process of the review and details its findings and conclusions. Depending on the agreed scope of the review, the report may include recommendations to address areas of sub-optimal effectiveness, or to suggest areas where further evaluation may be required. An effectiveness review should not be viewed as research. Significant changes to the screening programme should not be implemented purely based on an effectiveness review. The findings of the review may, however, prompt further research that provides an evidence base for future programme changes.
To aid transparency, it is recommended that the effectiveness review report is published and made available to stakeholders and members of the public.
The following is an example of a published screening programme effectiveness review: AAA screening programmes in the UK: 10-year effectiveness review