Independent report

Chapter 5: modelling

Updated 10 January 2023

What epidemiological modelling was used for in this pandemic

Overview

Epidemiological modelling has been an important tool throughout the pandemic, both to interpret data in support of understanding the situation and to provide scenarios that build awareness of the potential impacts of different policy choices.

At the outset of the pandemic data were limited, and modelling pulled together sparse, messy evidence to consider what impact COVID-19 might have when it reached the UK. Modelling has supported wide-ranging policy decisions over the course of the pandemic, from influencing the development of the roadmap out of lockdown in spring 2021 through to supporting individual government departments with their own strategic (and operational) responses.

Over the course of the pandemic, modelling evolved, moving from a handful of models providing individual estimates that were subject to challenge from other experts to a concerted consensus effort across an entire community, including devising new methodologies for statistically combined estimates of key parameters and projections. This move to a combined consensus estimate using a range of models was an important progression; consensus positions offered greater confidence than individual models could.

Understanding of what modelling can and cannot provide (and how to communicate this), and of what principles and insights can be drawn from it, has also developed markedly, as have the logistics of managing such analyses.

How epidemiological modelling has been used in the COVID-19 pandemic

Early in the pandemic, when little was known about COVID-19 as a disease in the UK, modelling relied on working from first principles to estimate the severity and transmissibility of the virus using initial data, including from China, and on providing high-level insights, such as the extent to which reducing people’s contacts could break chains of transmission and thus delay the spread of a UK epidemic, and the fact that earlier intervention is more effective than later intervention.[footnote 1], [footnote 2], [footnote 3]

As more was understood about COVID-19, models could be tailored to reflect its particular characteristics, and modelling assumptions were updated continuously to match.

Throughout COVID-19, a wide range of such modelling techniques have been used. These include but are not limited to the following:

1. Supporting the interpretation of limited, unclear, and sparse data to give early estimates of key parameters, such as the basic reproduction number (R0), and to understand how an infectious agent is moving through a population

Epidemiological modelling was able to use data, for example, from the Diamond Princess cruise ship in February 2020, to estimate infection hospitalisation, infection fatality, and hospitalisation fatality ratios.[footnote 4], [footnote 5], [footnote 6], [footnote 7], [footnote 8] These were then applied to a UK context to infer what impact COVID-19 might have here.
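Such ratios are, at their simplest, proportions. The sketch below shows only the basic arithmetic; the counts are hypothetical placeholders rather than the Diamond Princess estimates, and published analyses also adjusted for age structure and for the delays between infection and outcome.

```python
# Illustrative only: hypothetical counts, not the Diamond Princess figures.
infections = 1000        # estimated total infections (hypothetical)
hospitalisations = 120   # of which admitted to hospital (hypothetical)
deaths = 10              # of which died (hypothetical)

infection_hospitalisation_ratio = hospitalisations / infections   # IHR
infection_fatality_ratio = deaths / infections                    # IFR
hospitalisation_fatality_ratio = deaths / hospitalisations        # HFR

print(f"IHR = {infection_hospitalisation_ratio:.1%}")
print(f"IFR = {infection_fatality_ratio:.1%}")
print(f"HFR = {hospitalisation_fatality_ratio:.1%}")
```

Applying such a ratio to the UK context then amounts to multiplying it by the number of infections expected here, subject to the caveats about differences between settings discussed below.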

As new variants of SARS-CoV-2 have emerged, both in the UK and abroad, it has also been possible to use modelling to understand how infection levels might translate into future hospital admissions and deaths.[footnote 9], [footnote 10] As individuals have been vaccinated and repeatedly exposed to the virus, severity estimates in modelling have also been updated to reflect the changing understanding of COVID-19 at a particular time.

There are, however, limitations to such extrapolation. For example, as the Omicron variant emerged in South Africa in November 2021, it was impossible to tell whether its apparently lower severity there would be replicated in the UK. South Africa is very different from the UK, both epidemiologically at the time (the timing of its COVID-19 epidemic and the variants circulating to date had been different, as had vaccination types and programmes, among other factors) and demographically, with a quite different population structure.[footnote 11]

2. A method to combine multiple parameters, such as the rate of transmission or contact rates among the population, into individual metrics that can be used to monitor the ongoing situation, such as estimating the effective reproduction number (R), growth rate, or incidence

Early in the pandemic, groups estimated such nowcasts in an informal manner, and their agreed consensus position was reported through the Scientific Pandemic Influenza Group on Modelling Operations (SPI-M-O) consensus statement for that week. From around May 2020, SPI-M-O began to combine these nowcasts using a statistical approach across a minimum of 3, but often more than 10, models to provide a consensus range.

Over time, these sorts of estimates were expanded to cover the different nations of the UK and the geographical regions of England. These were produced weekly from 29 May 2020 until 1 April 2022 (with transfer of ownership from SPI-M-O to the UK Health Security Agency (UKHSA) on 23 July 2021).[footnote 12]

These nowcasts have their own specific limitations; they are average measures that span different geographies, variants of the virus, and groups or settings, which makes them more difficult to interpret than, say, case rates or hospitalisation data. They are also lagged indicators that reflect transmission from 2 to 3 weeks earlier. Once a methodology was agreed by SPI-M-O, the process of producing such metrics became simpler. However, such methodologies need constant review as the situation and requirements for monitoring change. For example, all models have been weighted equally when estimating R, but other methods of statistical combination might be more appropriate in the future.
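For illustration only, the sketch below shows one simple way an equally weighted combination of several models’ R estimates could be formed: pooling samples that represent each model’s uncertainty and reporting a central range. The function, the distributional assumption and the example intervals are hypothetical and are not SPI-M-O’s published methodology.

```python
import numpy as np

rng = np.random.default_rng(0)

def combine_r_estimates(model_intervals, n_samples=10_000):
    """Equally weighted combination of per-model R estimates.

    Each model's estimate is summarised here as a (low, high) interval,
    treated roughly as a 90% range of a normal distribution. Samples are
    pooled with equal weight per model and a combined 90% range returned.
    """
    samples = []
    for low, high in model_intervals:
        mean = 0.5 * (low + high)
        sd = (high - low) / (2 * 1.645)  # half-width of a 90% normal interval
        samples.append(rng.normal(mean, sd, n_samples))
    pooled = np.concatenate(samples)
    return np.percentile(pooled, [5, 95])

# Hypothetical per-model 90% intervals for R, for illustration only
intervals = [(0.8, 1.1), (0.9, 1.2), (0.7, 1.0), (0.9, 1.3)]
low, high = combine_r_estimates(intervals)
print(f"Combined consensus range for R: {low:.2f} to {high:.2f}")
```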

3. Providing a structured way to test and challenge assumptions about, for example, the properties of the pathogen or the disease itself, how population mixing affects transmission, or how infections translate into the need for healthcare

These results and changes to assumptions were then used to update models and improve their representativeness over time, as knowledge about the disease increased. For example, early in the pandemic the frameworks of some COVID-19 models were initially adapted from previous influenza models; these were significantly changed as the early inherent unknowns about SARS-CoV-2 became clearer, and continue to be refined as more and more is understood.

4. Using models to provide insight into what future epidemic patterns might look like

This allowed a potentially infinite range of futures to be narrowed, helping policymakers understand the “decision space” they were operating within. Modelled trajectories showed which variables were critical, how uncertainty could be resolved, over what time period, and with which data. Over the COVID-19 pandemic, different methods for producing such trajectories have included the following:

  1. Projections were used to extrapolate trends into the short to medium term (a few weeks) to show how current rates of growth or decay would change trajectories of key metrics such as hospital admissions and deaths, assuming no policy or behavioural changes affected the trends observed at the time. These accounted for the inherent delays between infection, developing symptoms and requiring healthcare, and extrapolated one or two generations of transmission. They were sensitive to initial growth rates and differences in data streams, so a statistical combination of different models was used during the pandemic, as for the nowcast estimates, to limit the influence of individual models’ biases. These projections, however, are especially volatile at times of change (for example, when a wave is turning over) and they cannot predict the precise timing or scale of peaks. During COVID-19, these sorts of projections were particularly useful as hospitalisations increased substantially in autumn 2020. SPI-M-O’s combined projections showed that, without policy or behaviour change, the number of daily hospital admissions in England could match or surpass those seen in spring 2020.[footnote 13], [footnote 14] They were less useful during times when policies changed frequently.

  2. Medium-term scenarios are a variant of these projections that were developed to understand potential futures when a policy was changed. The scale of any potential change in, say, hospitalisations or deaths is unknown until it is observed in the data, so to investigate this, multiple different R values were stipulated from a given date and modelled forward for a given length of time. Results were combined from different models (at least 3). The resulting combinations then provided a possible envelope for future trajectories that could support discussions about how big a change in transmission might be ‘manageable’. These were particularly useful during the roadmap out of lockdown in spring 2021.[footnote 15], [footnote 16], [footnote 17], [footnote 18] As each step of the roadmap was taken, it was possible to see in advance what range of outcomes that step might lead to and also, as data accumulated after the step was implemented, which broad trajectory the change may actually have led to.

  3. Scenarios were generated from transmission dynamic models ranging from the simple to the large and complex. These analyses consider how the future could turn out under different sets of assumptions, extending out over several weeks and even months. Such scenarios are often misunderstood as predictions, but cannot be predictions because of the number of assumptions that need to be made: about model parameters, about biology (for example, how effective vaccines would be), about what policy decisions may be taken in future, and about how people may behave. The last 2 heavily influence one another and, while behaviour can be incorporated into modelling, calibrating it is incredibly difficult and it changes over time. Some assumptions were provided to modellers by policy officials – for example, the speed of rollout of vaccinations – while others were left to modellers’ expert judgement – for example, vaccine effectiveness before real-world data were available. Different models’ outputs were not combined; rather, insights were drawn from differences between scenario runs.
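A deliberately simplified sketch of this kind of forward run is shown below: a basic SEIR model stepped forward under a stipulated R value, so that trajectories under different assumed levels of transmission can be compared. All parameter values and starting conditions are hypothetical, and the SPI-M-O groups’ actual models were far more detailed (age structure, vaccination, variants, regional differences and so on).

```python
def seir_scenario(r_value, days=120, population=56_000_000,
                  incubation_days=3.0, infectious_days=5.0,
                  initial_infected=50_000, initial_immune_fraction=0.3):
    """Step a minimal SEIR model forward under a stipulated R value.

    All parameter values are hypothetical, chosen only to illustrate how
    trajectories compare under different assumed levels of transmission.
    """
    sigma = 1.0 / incubation_days   # rate of leaving the exposed state
    gamma = 1.0 / infectious_days   # rate of leaving the infectious state

    s = population * (1.0 - initial_immune_fraction) - initial_infected
    e = 0.0
    i = float(initial_infected)
    rec = population * initial_immune_fraction

    # Set the transmission rate so the effective R at the start equals r_value
    beta = r_value * gamma * population / s

    daily_new_infections = []
    for _ in range(days):           # simple one-day Euler steps
        new_exposed = beta * s * i / population
        new_infectious = sigma * e
        new_recovered = gamma * i
        s -= new_exposed
        e += new_exposed - new_infectious
        i += new_infectious - new_recovered
        rec += new_recovered
        daily_new_infections.append(new_exposed)
    return daily_new_infections

# Compare trajectories under different stipulated R values, as in the
# medium-term scenarios described above.
for r_value in (0.9, 1.1, 1.3):
    trajectory = seir_scenario(r_value)
    print(f"R = {r_value}: peak daily infections roughly {max(trajectory):,.0f}, "
          f"day 60 roughly {trajectory[59]:,.0f}")
```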

These scenario models were most useful for determining which variables the trajectories were most sensitive to (and therefore where the pandemic response should be focused) and the broad order of magnitude of future changes that might be expected. They are highly complex analyses that require interpretation by specialists who can distil the key high-level principles relevant for policy and decision-makers. This sort of modelling particularly influenced both the development of and the decisions taken during the roadmap out of lockdown. For example, such analyses showed that:

  • effective widespread population immunity was almost certainly unachievable through vaccination alone

  • therefore a large wave of infections was highly likely at some point as restrictions were lifted (an exit wave)

  • extremely high vaccination coverage in older age groups was needed before all restrictions were removed

  • while at the time it was impossible to know either how effective vaccines would be against transmission and severe disease, or how people’s behaviour would change as restrictions were lifted, modelling showed that both were key to the future of the epidemic. These could only be known once enough time had passed for data to accrue, and these insights therefore led to a key recommendation that the release of measures should be based on data rather than particular dates[footnote 19], [footnote 20], [footnote 21]

Each of these findings was borne out in practice.

Models do not and cannot predict what is going to happen; they can only illustrate potential futures. Modelling can extrapolate trends based on input data and assumptions, but it is extremely difficult for models to call precisely when growth will turn into decline (or vice versa), or to estimate exactly how high or low a peak or trough might be. There has been substantial pressure, throughout the pandemic, to ‘predict’ what might happen next, so communicating that this is not the purpose of modelling has been vital.

General limitations of epidemiological modelling

For models to provide the best insights, good data are required. If data entering models are of poor quality, then the models’ results will be too. There needs to be a diverse range of data, collected from different sources, using different methodologies, that is available to all modellers. When data have been lacking, assumptions were required to fill the gaps – these unknowns may be biological, sociological, or related to policy.

Data will always be lacking in the early phases of an epidemic or wave with a new variant, and this in particular was a major limitation for epidemiological modelling early in the pandemic. Robust modelling was not possible until reliable data were available. Speed of access to data is also important, as lagged data mean that models will be out of date when they are produced.

As more and more factors and/or heterogeneities have been included, models have become more complicated and data hungry. Population mixing and disease risk are very heavily age and space-related, making age and geography important data variables for many models.

For example, as immunity builds up it significantly affects transmission, so vaccination status and previous infection status needed to be included. With each additional dimension included, the models’ data needs multiply, as do the computing resources required and the potential for coding errors. Such complexity is partly determined by the epidemiology but also by the questions asked of modelling. For example, as the pandemic progressed, some SPI-M-O participants began modelling at very granular geographical scales using the index of multiple deprivation (IMD). With access to the right data, future modelling could consider more socio-economic factors and their impact on outcomes.
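As a rough, purely illustrative indication of how quickly this growth happens, the sketch below counts model compartments as the product of the strata included; the category sizes are hypothetical.

```python
# Illustrative only: the number of compartments (and hence the parameters and
# data needed to inform them) multiplies with each added stratification.
disease_states = 4        # for example S, E, I, R
age_groups = 17           # hypothetical, e.g. 5-year bands
regions = 9               # hypothetical, e.g. regions of England
vaccination_statuses = 4  # hypothetical: none, 1 dose, 2 doses, boosted
prior_infection = 2       # previously infected or not (hypothetical)

strata = [age_groups, regions, vaccination_statuses, prior_infection]
names = ["age", "region", "vaccination", "prior infection"]

compartments = disease_states
print(f"Disease states only: {compartments} compartments")
for name, size in zip(names, strata):
    compartments *= size
    print(f"... adding {name}: {compartments:,} compartments")
```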

Flexibility is crucial, as it will not be possible to anticipate in advance all the data that will be needed. For example, at the very start of the COVID-19 pandemic, it was not anticipated that mobility data, vaccine rollout plans and testing data would become such central data sources. Fast-tracking access to new data streams as their importance becomes obvious is essential and requires significant cross-organisational working to identify such data and put in place the necessary logistics for access.

Infectious disease modelling is also not a tool that can balance direct disease burden with other harms, such as the economic and social impacts of policy decisions or interventions. It cannot and should not replace other disciplines or the interrogation of data.

How epidemiological modelling was managed in this pandemic

The way modelling was used during this pandemic, and its limitations, is illustrative of the options in future pandemics and epidemics. From the second meeting of the Scientific Advisory Group for Emergencies (SAGE) that considered COVID-19, the Scientific Pandemic Influenza Group on Modelling (SPI-M) was put on an operational footing, as a subgroup (SPI-M-O) reporting exclusively through SAGE. This allowed for an expansion in the number of academics supporting the government response, increased the diversity of the group (in models, modelling approaches, data and assumptions used, experience, and academic institutions), and enabled a wider range of observers from government departments and the devolved administrations to attend and understand the principles and evidence derived from modelling.

SPI-M-O acted to draw together results and insights across the various individual models and the significant expertise and experience of its participants to provide a consensus position. This scientific evidence was then used to inform SAGE advice, which was then used to inform policy.

Generally, SPI-M-O (and SAGE) took a UK-wide approach to COVID-19. As policy development considered different spatial scales, and as the epidemic spread at different speeds across the UK, models that considered different nations, regions or even smaller geographical areas became more and more useful. For example, case rates and variant spread in Northern Ireland often more closely matched those in the Republic of Ireland – as when it experienced a wave of the BA.2 variant ahead of the other 3 UK nations in early 2022. As the pandemic progressed, all 4 nations of the UK adapted their modelling approaches to take account of differing epidemiology and policy questions:

  • in Wales, modelling from 2 Welsh universities contributed to their response – this was commissioned by the Technical Advisory Cell and the outputs reviewed by the Technical Advisory Group[footnote 22]

  • modellers from both the Scottish Government and a range of universities across the UK and further afield developed models for use in Scotland. Throughout the pandemic these models, run with Scotland-specific data and parameters, produced estimates and projections that informed the Scottish response and fed into SPI-M-O’s cross-UK estimates; the cross-UK estimates in turn informed weekly published updates on modelling the epidemic in Scotland.[footnote 23] Scottish modelling groups also worked with SPI-M-O participants to develop modelling tools specific to Scotland – for example, on establishing local authority projections

  • in Northern Ireland, a modelling group was established by the Department of Health and a lead modeller was brought into the Public Health Agency to produce modelling estimates at pace using more locally relevant parameters, supported by academics and public health specialists. These estimates were compared with SPI-M-O modelling to refine them and to see where differences were arising, and were published as weekly summaries for the public in the Department’s R Number papers[footnote 24]

Dialogue between UK-wide and devolved administration modelling efforts continued throughout the pandemic, with SPI-M-O’s individual academics or academic groups sitting on the above advisory groups, and providing what became standard products (nowcasts, short-term forecasts and medium-term projections) for the 4 UK nations where possible.

Modelling is considerably more robust when more than one model (ideally a minimum of 3) is considered and a consensus is built and agreed across a broad community. If the models give the same message, there is greater faith in the results. If they give different results, it is an opportunity to understand why and emphasises the uncertainty.

The consensus approach also acts as quality assurance, lowering the risk of spurious results due to coding errors or biases within an individual model. The modelling evidence provided as a consensus reduces the profile of the quantitative results and emphasises the qualitative insights.

A variety of different approaches and sensitivity analyses also allows for consideration of a problem from several different perspectives – for example, large complex transmission dynamic models may allow for a level of detail that is not possible from simpler models, or different structures might allow trends at, say, lower tier local authority level to be investigated. Generating a consensus does take more time but leads to significantly more robust results.

Alongside consensus, diversity of inputs and approaches has enabled challenge which has been an important part of the process. This has come from within the committee itself in a rapid review process, from within government (while maintaining academic independence), and from external sources as analyses were released into the public domain and externally peer reviewed.

During the pandemic, some countries, such as Denmark, the Netherlands and Australia, drafted technical modelling expertise into government, whereas the UK has been almost unique in having modelling conducted externally yet made publicly available and used to inform government policy. In particular, the strength in depth of the UK’s academic community has been, and remains, a huge asset. COVID-19 has demonstrated the importance of:

  1. Effective policy-modelling dialogue: early in the pandemic, requests for modelling to SPI-M-O were framed in ways that focused on ‘predicting the future’ rather than considering what high-level insights and principles modelling could provide. There was a risk that policymakers wanted and expected greater certainty than is possible from modelling, especially of future events. As the pandemic progressed, understanding grew of what infectious disease modelling can and cannot do. Combining this with an analytical coordination hub at the centre of government led to commissions becoming more appropriate (both in content and timelines), with the roadmap out of lockdown an excellent example of where appropriately tailored modelling requests led to invaluable evidence to support decision-making. A government co-chair of SPI-M-O with extensive understanding of academic modelling, as well as of the government’s strategic questions, also facilitated this open dialogue.

  2. Diverse range of models and modelling groups: at the start of COVID-19, the larger SPI-M-O modelling groups were able to quickly flex resources to the pandemic, while smaller groups could not do so at the same pace. This made building consensus difficult, as individual groups’ results could not be subject to the same breadth of quality assurance from multiple contributors that became the norm later in the pandemic. As the pandemic progressed, smaller groups became more able to contribute, improving the resilience of the modelling community as well as the consensus process and diversifying the models available, and thus the insights available to government.

  3. Focusing academic expertise appropriately: as COVID-19 emerged in the UK, many modelling groups were extensively involved in monitoring the epidemic as well as modelling potential futures. As government developed its own extended capabilities over summer and autumn 2020, divisions of responsibility became much clearer, allowing SPI-M-O’s extensive expertise to be better managed and its participants’ time to be prioritised accordingly.

Communication of epidemiological modelling

Modelling is a complex process that requires careful interpretation and explanation of highly technical outputs to both decision-makers and the public. It is likely that senior clinical and scientific advisers will need to clearly communicate modelling outputs for future pandemics and epidemics. Experiences during COVID-19 have reinforced some important principles:

  1. The craving for certainty about what is to come, particularly in the early stages of a pandemic, may mean that model outputs are seen as ‘the answer’, which they can never be. Policy decisions should instead be based on several considerations, of which infectious disease modelling outputs are only one source of scientific evidence.

  2. Clarity about the uncertainties, both from models’ outputs and the wider strategic and evidence context, helps decision-makers and the public understand the key principles and insights that can and cannot be drawn from modelling. This needs consistent communication of the limitations of epidemiological modelling, the dependence on assumptions, and when it is best used, in collaboration with modelling experts. Policymakers are often comforted by being able to see a line on a graph purporting to show what will happen under a given policy, but modelling will never be able to precisely predict the future.

  3. Setting out the assumptions underpinning models and summarising what may happen if or when these change helps to demonstrate how modelling outputs may also change. Managing these uncertainties alongside the pressure to present results simply and concisely has been a delicate balance during the pandemic. The sometimes large differences between individual models were due mainly to differences in assumptions.

  4. All SPI-M-O modelling that fed into policymaking through SAGE was made publicly available. As well as the benefits to the public of this transparency, this greatly improved the modelling itself. However, the public mostly experienced this work through filters such as the press or social media, which invariably focus on the most extreme results, even when a range is reported and appropriately caveated. For example, in autumn 2020 SPI-M-O modelling groups conducted preparatory work to support planning for winter and the development of a new iteration of the reasonable worst-case scenario. Four modelling groups’ scenarios were considered,[footnote 25] but many outlets focused on the most pessimistic of the 4 trajectories. Proactive engagement through appropriate experts and the relevant sector press is important to avoid unintentional misinterpretation of outputs.

Reflections and advice for a future CMO or GCSA

Point 1

Modelling is just one tool of many that can be used to understand the situation and be taken into account in decision-making.

A wide range of data and evidence must be used, alongside modelling. Complete data are ultimately more helpful than models.

Point 2

A range of types of modelling and analysis may be needed in the future.

During the COVID-19 pandemic, SPI-M-O focused on epidemiological modelling to help assess the potential direct health impacts of the virus. Others were responsible for different aspects of evidence, such as economic and societal analysis, and assessing indirect health impacts. Future decision-makers in local and national government may need to use a combination of such tools to balance decisions about future policy choices and the associated opportunity costs.

Point 3

Modelling is not forecasting.

It proved difficult to communicate this important distinction to decision-makers, the press and the public.

Point 4

Epidemiological modelling is most useful for looking at ‘what if…’ questions in the form of scenarios.

For example, what if the number of contacts people have were to halve, or vaccines were to reduce the chance of infected individuals requiring hospitalisation by two-thirds? This sort of modelling is good at identifying which factors will have the biggest impact on the course of the pandemic, but is also the most intensive and complex to run.
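As a purely illustrative sketch of the arithmetic behind such ‘what if’ questions (with hypothetical baseline figures, not real estimates):

```python
# Illustrative 'what if' arithmetic with hypothetical baseline figures.
baseline_r = 1.4                         # assumed effective reproduction number
baseline_daily_infections = 20_000       # hypothetical current infection level
infection_hospitalisation_ratio = 0.02   # hypothetical: 2% of infections admitted

# What if contacts halved? In a simple model, R scales with contact rates.
r_if_contacts_halve = baseline_r * 0.5

# What if a vaccine cut the chance of an infected person needing hospital
# care by two-thirds?
ihr_with_vaccine = infection_hospitalisation_ratio * (1 - 2 / 3)

print(f"R if contacts halve: {r_if_contacts_halve:.2f} "
      f"({'shrinking' if r_if_contacts_halve < 1 else 'still growing'} epidemic)")
print(f"Expected admissions per day at current infection levels: "
      f"{baseline_daily_infections * infection_hospitalisation_ratio:,.0f} without, "
      f"{baseline_daily_infections * ihr_with_vaccine:,.0f} with the vaccine effect")
```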

Point 5

The SPI-M-O secretariat played a vital role in bridging the gap between expert modellers and policymakers.

Secretariat staff:

  • had experience in both policy analysis and epidemiology
  • were empowered to shape the modellers’ programme of work (ensuring outputs were the most relevant for policy teams while maintaining a sustainable modeller workload)
  • helped interpret modelling results for policymakers, scientific advisers and the wider public

References

  1. SPI-M-O: Consensus view on the impact of possible interventions to delay the spread of a UK outbreak of 2019-nCov. SAGE 4, 4 February 2020. 

  2. SPI-M-O: Consensus view on public gatherings, SPI-M-O: Consensus view on the impact of mass school closures, SAGE 6, 11 February 2020. Available from: https://www.gov.uk/government/publications/spi-m-o-consensus-statement-on-public-gatherings-11-march-2020 

  3. SPI-M-O: Consensus view on the impact of mass school closures, SAGE 9, 20 February 2020. Available from: https://www.gov.uk/government/publications/spi-m-o-consensus-view-on-the-impact-of-mass-school-closures-19-february-2020 

  4. SPI-M-O: Consensus statement on 2019 novel coronavirus (COVID-19), SAGE 4, 4 February 2020. Available from: https://www.gov.uk/government/publications/spi-m-o-consensus-statement-on-2019-novel-coronavirus-covid-19-3-february-2020 

  5. SPI-M-O: Consensus statement on 2019 novel coronavirus (COVID-19), SAGE 6, 11 February 2020. Available from: https://www.gov.uk/government/publications/spi-m-o-consensus-statement-on-2019-novel-coronavirus-covid-19-10-february-2020 

  6. SPI-M-O: Consensus statement on 2019 novel coronavirus (COVID-19), SAGE 9, 17 February 2020. Available from: https://www.gov.uk/government/publications/spi-m-o-consensus-statement-on-2019-novel-coronavirus-covid-19-17-february-2019 

  7. SPI-M-O: Consensus statement on 2019 novel coronavirus (COVID-19), SAGE 12, 3 March 2020. Available from: https://www.gov.uk/government/publications/spi-m-o-consensus-statement-on-2019-novel-coronavirus-covid-19-2-march-2020 

  8. Estimating the infection and case fatality ratio for COVID-19 using age-adjusted data from the outbreak on the Diamond Princess cruise ship, Russell et al. 23 March 2020. Available from: https://cmmid.github.io/topics/covid19/diamond_cruise_cfr_estimates.html 

  9. NERVTAG/SPI-M: Extraordinary meeting on SARS-CoV-2 variant of concern 202012/01 (variant B.1.1.7), 21 December 2020. Available from: https://www.gov.uk/government/publications/nervtagspi-m-extraordinary-meeting-on-sars-cov-2-variant-of-concern-20201201-variant-b117-21-december-2020

  10. SPI-M-O: Consensus statement on COVID-19, SAGE 74, 22 December 2020. Available from: https://www.gov.uk/government/publications/spi-m-o-consensus-statement-on-covid-19-22-december-2020

  11. SPI-M-O: Consensus statement on COVID-19, SAGE 98, 7 December 2021. Available from: https://www.gov.uk/government/publications/spi-m-o-consensus-statement-on-covid-19-7-december-2021

  12. Estimates of R for the UK, England and NHS England Regions (https://www.gov.uk/guidance/the-r-value-and-growth-rate), Scotland (https://www.gov.scot/collections/coronavirus-covid-19-modelling-the-epidemic/), Wales (https://gov.wales/node/30180/latest-external-org-content), and Northern Ireland (https://www.health-ni.gov.uk/r-number) 

  13. SPI-M-O: COVID-19: Medium-term projections explainer, 31 October 2020. Available from: https://www.gov.uk/government/publications/spi-m-o-covid-19-medium-term-projections-explainer-31-october-2020

  14. SPI-M-O weekly medium-term projections from 17 September 2020 to 23 March 2022. Available from: https://www.gov.uk/government/publications/spi-m-o-consensus-statement-on-covid-19-17-september-2020

  15. SPI-M-O: Summary of further modelling of easing restrictions – Roadmap Step 2, SAGE 85, 31 March 2021. Available from: https://www.gov.uk/government/publications/spi-m-o-summary-of-further-modelling-of-easing-restrictions-roadmap-step-2-31-march-2021

  16. SPI-M-O: Summary of further modelling of easing restrictions – Roadmap Step 3, SAGE 88, 5 May 2021. Available from: https://www.gov.uk/government/publications/spi-m-o-summary-of-further-modelling-of-easing-restrictions-roadmap-step-3-5-may-2021

  17. SPI-M-O: Summary of further modelling of easing restrictions – Roadmap Step 4, SAGE 92, 9 June 2021. Available from: https://www.gov.uk/government/publications/spi-m-o-summary-of-further-modelling-of-easing-restrictions-roadmap-step-4-9-june-2021

  18. SPI-M-O: Summary of further modelling of easing restrictions – Roadmap Step 4 on 19 July 2021, SAGE 96, 7 July 2021. Available from: https://www.gov.uk/government/publications/spi-m-o-summary-of-further-modelling-of-easing-restrictions-roadmap-step-4-on-19-july-2021-7-july-2021

  19. SPI-M-O: Summary of modelling on easing restrictions, SAGE 79, 4 February 2021. Available from: https://www.gov.uk/government/publications/spi-m-o-summary-of-modelling-on-easing-restrictions-3-february-2021

  20. SPI-M-O: Summary of modelling on scenario for easing restrictions, SAGE 80, 11 February 2021. Available from: https://www.gov.uk/government/publications/spi-m-o-summary-of-modelling-on-scenario-for-easing-restrictions-6-february-2021

  21. SPI-M-O: Summary of modelling on roadmap scenarios, SAGE 81, 18 February 2021. Available from: https://www.gov.uk/government/publications/spi-m-o-summary-of-modelling-on-roadmap-scenarios-17-february-2021

  22. Terms of reference: Technical Advisory Cell, GOV.WALES. Available from: https://gov.wales/technical-advisory-cell/terms-reference

  23. Coronavirus (COVID-19): modelling the epidemic, Scottish Government. Available from: https://www.gov.scot/collections/coronavirus-covid-19-modelling-the-epidemic/

  24. R Number papers, Department of Health (Northern Ireland). Available from: https://www.health-ni.gov.uk/R-Number

  25. SPI-M-O: COVID-19: Preparatory analysis long term scenarios, 31 October 2020. Available from: https://www.gov.uk/government/publications/spi-m-o-covid-19-preparatory-analysis-long-term-scenarios-31-october-2020