
By Roger Highfield

Coronavirus: Virtual Pandemics

Lockdowns and other stringent measures have been introduced in the wake of computer predictions about the pandemic. Roger Highfield, Science Director, looks at one of the most advanced COVID-19 models, and at results released today from the most sophisticated analysis of how it works.

Computers are changing the way we do science. So-called ‘digital twins’ – virtual versions of the real thing – are common in engineering. Virtual organs, even virtual humans, are also under development. We can create a ‘virtual population’ too, and study how a virtual virus spreads through it. Our reliance on these models for understanding how to curb the pandemic is the subject of my latest COVID-19 blog.

Computer models are far from perfect but, even so, are critical for managing the pandemic, so much so that the Royal Society, the world’s oldest independent scientific academy, created the Rapid Assistance in Modelling the Pandemic (RAMP) initiative to check their validity.

I talked to Neil Ferguson, Director of the MRC Centre for Global Infectious Disease Analysis at Imperial College London and the UK’s best-known modeller, along with Peter Coveney of UCL and the University of Amsterdam, who heads the team assessing Imperial’s model for the Royal Society and leads the VECMA initiative, which studies when we can trust computers (and when we can’t).

Replica of the ‘Baby’ or SSEM computer, built by the Computer Conservation Society in 1998. The original ‘Baby’ was the world’s first stored-program computer and ran its first program on 21 June 1948

HOW DO COMPUTERS GET THE MEASURE OF THE PANDEMIC?

There are many ways computers are used to weigh up the pandemic: ‘Much of what we do is simple statistical analysis, informed by what we understand about epidemics,’ said Prof Ferguson.

Statistics is the means of extracting information, illumination, and understanding from data, often in the face of uncertainty.

‘For example, in the past few days, we have been looking at whether we can see the effect of the three-tier system of measures introduced by the Government. That means trying to correlate the introduction of a tier with changes in R, the reproduction number.’ (When R is above one, the spread of COVID-19 accelerates; below one, the pandemic decelerates.)

As well as this data-driven approach, Imperial uses models, abstract mathematical representations of the real world. Typically, they use mathematical formulae – differential equations – that can model change, for instance, the virus spreading through the population.

To run in a computer, these equations are turned into algorithms, a word derived from the Latinized name of the influential Persian scholar Muhammad ibn Mūsa al-Khwarizmī (780-850), whose most famous book title also gave us the word ‘algebra’.

The algorithm named in his honour is a sequence of instructions that enables a computer to put a formula, model, or theory through its paces to reproduce how the world works in a simulation.
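To make this concrete, here is a minimal sketch, assuming the classic SIR (susceptible-infected-recovered) equations rather than Imperial’s actual model, of how differential equations describing an epidemic become an algorithm a computer can run; all parameter values are illustrative.

```python
# A minimal sketch: the classic SIR epidemic equations turned into a
# runnable algorithm. Illustrative only -- not Imperial's actual model.
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma):
    """dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I."""
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

beta, gamma = 0.5, 0.25        # transmission and recovery rates (made up)
print("R0 =", beta / gamma)    # reproduction number in a fresh population

# Integrate for 180 days, starting with 0.1% of the population infected.
sol = solve_ivp(sir, (0, 180), [0.999, 0.001, 0.0], args=(beta, gamma),
                dense_output=True)
print(sol.sol(np.linspace(0, 180, 7))[1])  # infected fraction over time
```

The ratio beta/gamma here is the reproduction number R mentioned above: above one the infected fraction grows, below one the outbreak fizzles out.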

The potential of running an algorithm on a machine was first glimpsed long ago, by mathematician Ada Lovelace (1815-1852), who in 1843 talked of ‘a new, a vast, and a powerful language…for the future use of analysis.’

In her day, the Jacquard loom had revolutionised the production of patterned cloth and inspired the development of early computing. The link came through the loom’s reliance on interchangeable cards, upon which small holes were punched, which held instructions for weaving a pattern. Lovelace realised the Analytical Engine, the first modern computer design, which took its instructions from punch cards, ‘weaves algebraic patterns, just as the Jacquard loom weaves flowers and leaves.’

Watercolour portrait of Ada Lovelace, possibly by A E Chalon, around 1840. Part of the Science Museum Group Collection.

HOW DOES IMPERIAL MODEL A PANDEMIC?

Imperial uses eight models of COVID-19, which differ in the amount of detail, the circumstances and the data fed into them; one, for instance, focuses on care homes. Reassuringly, they give similar overall results, said Prof Ferguson.

The most sophisticated, Imperial’s CovidSim simulation, tiles the country with a network of cells. These are primed with all sorts of data – high-resolution population density data, notably age, household structure and size, ‘which is particularly important,’ said Prof Ferguson, along with information on schools, universities and workplaces.

Random number generators are used to capture the vagaries of real life, such as how getting closer than 2m to another person increases the probability of catching the disease. ‘CovidSim is regarded as the most sophisticated model that there is,’ commented Prof Coveney.
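As a sketch of that idea, the few lines below use a seeded random number generator to decide individual infection events, with a made-up distance-dependent probability; nothing here is CovidSim’s actual transmission rule.

```python
# Using random numbers to decide individual infection events. The
# distance-based probabilities are invented for illustration.
import random

def infection_probability(distance_m):
    # Hypothetical rule: risk is higher within 2 m of an infected person.
    return 0.15 if distance_m < 2.0 else 0.03

rng = random.Random(42)            # fixing the seed makes runs reproducible

contact_distances = [0.5, 1.5, 2.5, 4.0]   # metres from an infected person
infected = [rng.random() < infection_probability(d) for d in contact_distances]
print(infected)                    # which of the four contacts got infected
```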

Even more detailed models can be used to simulate the behaviour of individual people but, as Prof Ferguson pointed out, the more granular the detail, the more computer power the models need and the harder they are to run. They tend to be used less frequently, to check the results of simpler models.

Other teams in the UK use similar approaches, though draw on different streams of data about the pandemic: the London School of Hygiene and Tropical Medicine, University of Warwick, University of Manchester/Oxford/Lancaster, University of Cambridge MRC Biostatistics Unit and Public Health England. These groups come together in the UK government’s SPI-M committee, a subgroup of the Government’s Scientific Advisory Group for Emergencies, or SAGE.

WHAT PREDICTIONS LED TO THE LATEST LOCKDOWN?

Here you can see the SPI-M medium-term projections for England’s hospitalisations and deaths, based on the trends before the latest lockdown.

An error crept in when projections were being prepared by civil servants for a Number 10 briefing at the end of last month: a bug in a spreadsheet inflated the upper bound on the range of the projections above the peak of the first wave. However, even when the error was subsequently corrected, the numbers painted a grim picture, with a possible death rate similar to that of the first wave.

Mary Gregory, Deputy Director for Regulation of the Office for Statistics Regulation, remarked that where models inform significant policy decisions, the model outputs, methodologies and key assumptions should be published at the same time. “In the press conference on 31 October, this was not the case. The Prime Minister referred to the reasonable worst-case scenario – a model set up to support operational planning. However, the data and assumptions for this model had not been shared transparently.”

HOW LONG HAVE WE USED COMPUTER MODELS?

Computer models and simulation developed in tandem with the rapid growth of computer power, following the Manhattan Project in World War II, notably in the Los Alamos National Laboratory, New Mexico, where the Polish-American mathematician Stanislaw Ulam (1909-1984) used a computer to simulate a random process, that of a nuclear chain reaction, through what became known as the Monte Carlo method.

This method uses random number generators (akin to the randomness of the roulette wheels in the casinos of Monte Carlo) and results are extracted by averaging multiple runs – so-called ensemble averaging. In the case of Imperial’s CovidSim, it uses four random number ‘seeds.’
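In miniature, the Monte Carlo recipe looks like this: run the same stochastic simulation under several seeds and average the results. The toy branching process below stands in for a real epidemic model; only the use of multiple seeds echoes CovidSim.

```python
# Ensemble averaging: the same stochastic simulation run under several
# random seeds, with the results averaged. The branching process is a toy.
import random

def total_cases(seed, r=1.3, generations=10):
    """Each case infects Binomial(2, r/2) others, i.e. r on average."""
    rng = random.Random(seed)
    cases, total = 1, 1
    for _ in range(generations):
        cases = sum(1 for _ in range(cases) for _ in range(2)
                    if rng.random() < r / 2)
        total += cases
    return total

runs = [total_cases(seed) for seed in (1, 2, 3, 4)]  # four seeds, as in CovidSim
print(runs, "ensemble average:", sum(runs) / len(runs))
```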

The Science Museum Group has many pioneering computers from the dawn of modelling in its collections, from Alan Turing’s Pilot ACE computer and the Manchester Small-Scale Experimental Machine, or ‘Baby’, the first computer to store and run a program, to the Cray 1a supercomputer and IBM Blue Gene/P supercomputer.

Cray 1a Supercomputer by Cray Research Inc., serial number 11, on display in the Science Museum.

HOW GOOD ARE COMPUTER MODELS?

All models rely on assumptions, and even if those assumptions start out correct, people may change their behaviour and circumstances may change, so that the assumptions no longer hold true. As the old joke goes: ‘all models are wrong, but some are useful’.

HOW USEFUL ARE MODELS?

By plugging in the latest information, computer models allow scientists to predict the future course of the pandemic and, importantly, test various measures to contain the spread of the virus, from school closures to face masks. In this way, they can help policymakers investigate ‘what if?’ scenarios.

‘Understanding of how infections spread in populations, the impact of different interventions and immunity, impact on healthcare all benefit from a quantitative approach and modelling,’ said Prof Ferguson.

But there are limits and the importance of models should not be overstated, he added: ‘We don’t have such a detailed model of transmission that we can, for example, model the effects of the ‘rule of six.’ Moreover, he said it is important to back the models with ‘epidemiological intuition,’ that is, the insights that have come from epidemiology, the science that deals with the spread and control of diseases.

WHY DOES MATHEMATICAL MODELLING WORK?

The power of mathematics to represent nature has been apparent since Pythagoras (570-495 BC) declared that “all is number”. Theoretical physicist Eugene Wigner famously talked of the ‘unreasonable effectiveness’ of mathematics. However, we don’t really know why it works.

DID THE MODELS SEE THE SECOND WAVE COMING?

‘The second wave was entirely predictable,’ said Prof Ferguson.

A saw-toothed profile of outbreaks and lockdowns can be seen in a computer model presented to Government in late February/early March by the London School of Hygiene and Tropical Medicine. The same goes for a mid-March computer model (see figure 4) by Prof Ferguson and his colleagues. At that time, before controls were introduced, the epidemic was doubling every three or four days, whereas today it is more like every 10 or 20 days.

In July, the Academy of Medical Sciences predicted a worst-case scenario in which the estimated total number of hospital deaths (excluding care homes) between September 2020 and June 2021 would be 119,900, over double the number occurring during the first wave in spring 2020.

‘However,’ commented Prof Ferguson, ‘all European countries have struggled: having thought the epidemic could be contained locally, with local measures, nobody has yet found the rules that allow us to control transmission of the virus while being as liberal as we can be.’

HOW INFLUENTIAL ARE COMPUTER MODELS?

Not as influential as they should be. On 11 March UK epidemiologists were agreed that a lockdown was the only option to ensure the NHS could cope. However, the country only locked down on 24 March.

In response to a surge in cases, the Scientific Advisory Group for Emergencies (SAGE) on 21 September recommended various measures, including a ‘circuit breaker’, a short lockdown. However, only from 5 November did the Government introduce a less draconian lockdown.

‘This time around, however, there are deeper insights into the economic and social damage’, said Prof Ferguson. As a result, it is harder to reach consensus among policymakers on the need for a lockdown without evidence of higher death rates.

If, for example, a short lockdown had been successfully used in September, critics would have complained that the resulting low death rate was evidence this stringent measure had been unnecessary. ‘Proving a counterfactual, what would have happened if you had not acted, is extremely difficult,’ said Prof Ferguson. ‘The delay this time has cost lives, though not as many as in March.’

‘As for beyond this lockdown, expect the tiers to return, perhaps with “tier three-plus”,’ he said.

HOW DO WE ENSURE COMPUTER MODELS WORK?

Success depends on understanding the basic processes by which the virus spreads; having mathematics – in the form of algorithms that can run on a computer – to capture this behaviour; and having good data, in other words data that is timely, complete and accurate.

In the real world, however, different studies on different populations produce different insights and can even produce conflicting results. Different countries and regions also collect data in different ways. Basic factors, such as the fatality rate, depend on the percentage of elderly people, the incidence of diabetes and the degree of obesity, all of which vary from place to place. There are also a lot of people walking around with COVID-19 who don’t know it.

HOW DO WE TEST THESE MODELS?

Imperial’s CovidSim model has now been studied by the €4M EU-funded VECMA (Verified Exascale Computing for Multiscale Applications) research project, whose team is drawn from University College London, the Centrum Wiskunde & Informatica in the Netherlands, Brunel University London, the Poznań Supercomputing and Networking Center in Poland and the University of Amsterdam.

VECMA is developing validation methods for software and models for the next generation of supercomputers, known as exascale computers, which are capable of around a million million million (a billion billion) floating-point operations per second – around 1,000 times more powerful than current machines. ‘Modern life increasingly depends on highly complex computer models and it is critical that these are validated, from the code to the data fed into them,’ said Prof Coveney.

To study CovidSim, the team investigated how uncertainties in its input parameters affected its predictions, running the model thousands of times on the Eagle supercomputer at the Poznań Supercomputing and Networking Center.

‘We were keen to do what we call uncertainty quantification, where you change them all by small amounts and see what happens,’ said Prof Coveney. ‘You need to assume this cloud of uncertainty, partly because of the randomness of infections and also because the model has all these uncertainties, which we found are amplified by 300 per cent in the Imperial model.’
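A toy version of uncertainty quantification, assuming a stand-in exponential-growth model rather than CovidSim, shows the effect he describes: small perturbations of an input produce a much larger spread in the output.

```python
# Toy uncertainty quantification: jitter an input by ~5% and measure how
# much larger the spread of the output is. The model is a stand-in.
import random

def model(growth_rate, days=60):
    """Cases after `days` of exponential growth from 100 cases."""
    return 100 * (1 + growth_rate) ** days

def relative_spread(values):
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return var ** 0.5 / mean

rng = random.Random(0)
inputs = [0.10 * (1 + rng.gauss(0, 0.05)) for _ in range(5000)]  # 5% noise
outputs = [model(g) for g in inputs]

print("input spread:  %.1f%%" % (100 * relative_spread(inputs)))
print("output spread: %.1f%%" % (100 * relative_spread(outputs)))  # amplified
```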

The distribution of deaths predicted by the model is not what scientists call a normal distribution (a Gaussian, or bell-shaped, curve) but lopsided, which contributed to the underestimate discussed below. However, Prof Ferguson said that a lopsided distribution is expected when you use a measure like a lockdown to interrupt exponential growth. ‘That was not a surprise to us.’

Prof Coveney commented on what this wide, lopsided distribution meant for the model’s predictions: ‘When you take into account the likely uncertainties in the figures used for the calculations, the predictions made in their influential March report, which led to lockdown, underestimated very substantially the number of deaths that actually occurred in the U.K. – around 10,000 after 800 days, compared with our estimate of around 20,000’, he said. ‘If you look at the distribution of possible results around this mean, the mortality could have been up to six times higher.’

‘Although Prof Ferguson has been demonised as “Professor Lockdown”, he was probably underplaying the seriousness of the situation,’ he added. ‘However, had the prediction been more accurate, showing the true extent of the threat, perhaps it would have prompted faster action.’

Prof Ferguson responded: ‘It is, of course, right that in the week after we released our March report it became apparent (with the starting of systematic hospital surveillance) that the UK epidemic was more advanced than previously thought. The simulation does a fair job at reproducing the trajectory if given suitable parameters for the timing of interventions relative to the epidemic stage.’

Disposable face mask, paper, British, 1940-1960.

HOW COMPLICATED IS THE IMPERIAL MODEL?

The Imperial model has 940 parameters. By the standards of physical scientists, that is a colossal number. Physicists love simplicity, as expressed by their old joke: “With four parameters I can fit an elephant, and with five I can make him wiggle his trunk”.

Prof Ferguson said that the vast majority of the parameters fed into the Imperial model – up to 800 – describe the human population and demography, household size distributions, hospital demand and so on. ‘At the core of the model, with no interventions, are just six or seven parameters.’

In other words, although the Imperial model appears to be one of baroque complexity, at its heart it is relatively simple.

DID THEY TEST EVERY PERMUTATION OF THE MODEL?

No. With 940 parameters the VECMA team faced what is called the ‘curse of dimensionality’ – there are so many permutations that even a supercomputer can’t explore them all.
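The arithmetic makes the point: even a coarse grid of just three values per parameter yields a number of combinations far beyond any conceivable computer.

```python
# Even three values per parameter, across 940 parameters, is intractable.
combinations = 3 ** 940
print(len(str(combinations)), "digits")   # 449 digits, i.e. roughly 10^448
```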

‘With the kind help of Prof Ferguson, my colleague Wouter Edeling of the Centrum Wiskunde & Informatica in the Netherlands and the team focused on the 60 parameters that looked of most interest,’ said Prof Coveney. ‘However, when we put the model through its paces and figured out how sensitive it was, only 19 dominated the predictions.’

A systematic ‘sensitivity analysis’ showed that, of these 19, just three parameters account for 60% of the overall variance in the model’s predictions: two relate to the latent period (an initial part of the incubation time when you are infected but not yet able to infect others), and the third to the assumed effectiveness of social distancing.
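A crude version of such a sensitivity analysis can be written in a few lines: estimate, for each input, the fraction of the output variance it explains (a first-order, Sobol-style index). The three-parameter toy model and its ranges below are invented for illustration.

```python
# A first-order, Sobol-style sensitivity index for each input of a toy
# three-parameter model: Var(E[Y|X]) / Var(Y), estimated by binning X.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
latent = rng.uniform(3, 7, n)          # latent period in days (invented range)
distancing = rng.uniform(0.2, 0.8, n)  # social-distancing effectiveness
nuisance = rng.uniform(0.9, 1.1, n)    # a weakly influential extra parameter

# Toy output: an 'attack rate' shaped mostly by the first two inputs.
y = (1 - distancing) * np.exp(-latent / 10) * nuisance

def first_order_index(x, y, bins=50):
    """Fraction of Var(Y) explained by X alone, via conditional means."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1])
    ids = np.digitize(x, edges)
    means = np.array([y[ids == b].mean() for b in range(bins)])
    weights = np.array([(ids == b).mean() for b in range(bins)])
    grand = (weights * means).sum()
    return (weights * (means - grand) ** 2).sum() / y.var()

for name, x in (("latent", latent), ("distancing", distancing),
                ("nuisance", nuisance)):
    print(name, round(first_order_index(x, y), 3))
```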

HOW COULD THE IMPERIAL MODEL BE IMPROVED?

Even though there are more than 900 parameters, the Imperial model still does not cover everything, such as the spread of viruses in hospitals and care homes (which have since been introduced to the model). Some of its many parameters are, for good reason, not used – those linked with a vaccine – and others can be introduced indirectly.

For example, masks can be represented in the current code as a form of social distancing, said Prof Ferguson.

‘Even so, with that number of parameters, you can almost – with the benefit of hindsight – make it fit any curve you like,’ said Prof Coveney. ‘What we suggest is we adopt an approach pioneered in the UK by Tim Palmer of the University of Oxford for weather forecasting, where modellers use stochastic parameterization (in other words, they add realistic randomness to the numbers they feed into the models) and ensembles, that is they run the model many times – with different parameters – and then talk about the probability of certain scenarios occurring. You accept you have an imperfect picture of the pandemic and couch your predictions in these terms.’

‘Because it lacks many realistic factors, the predictive ability of the Imperial model is open to question and, I think, better expressed as a spread of possibilities, not just one answer. Even if a model is accurate, it is always better to express its predictions as a spread or distribution than as a single outcome.’
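A sketch of what such an ensemble forecast involves, assuming an invented hospital-demand model and invented numbers: perturb the inputs, add stochastic noise, run many times and report the probability of a scenario rather than a single answer.

```python
# Ensemble forecasting with stochastic parameterization: many perturbed
# runs yield scenario probabilities, not one number. Values are invented.
import random

rng = random.Random(1)

def peak_demand(r):
    """Toy model of peak hospital demand, with stochastic forcing added."""
    return 1000 * r ** 5 * (1 + rng.gauss(0, 0.1))

ensemble = [peak_demand(rng.gauss(1.3, 0.1)) for _ in range(10_000)]

capacity = 5000
p_exceed = sum(d > capacity for d in ensemble) / len(ensemble)
print(f"chance that demand exceeds capacity: {p_exceed:.0%}")
```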

Prof Palmer told Prof Coveney that the unusual fat-tailed distribution seen in CovidSim is reminiscent of that seen in climate sensitivity, when working out the possible warming caused by a doubling of carbon dioxide levels from pre-industrial times. ‘Climate policy is strongly predicated on the existence of this long tail, and it makes sense that COVID policy should be too,’ said Prof Palmer.

Prof Ferguson said that the models the Imperial team is using now do exactly this – within what is termed a “Bayesian” approach – representing key parameters not as single values, but as probability distributions which represent uncertainty, then statistically fitting those models to data to reduce that parameter uncertainty.
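A minimal sketch of that Bayesian idea, assuming a toy growth model and invented case counts rather than Imperial’s actual machinery: the growth rate starts as a flat probability distribution and each day’s data narrows it.

```python
# Bayesian fitting in miniature: a parameter is a probability
# distribution that data progressively narrows. Toy model and data.
import numpy as np
from scipy.stats import poisson

rates = np.linspace(0.01, 0.30, 300)      # candidate daily growth rates
posterior = np.full(rates.shape, 1.0 / len(rates))   # flat prior

observed = [120, 133, 151, 164, 185]      # invented daily case counts
for day, cases in enumerate(observed, start=1):
    expected = 100 * (1 + rates) ** day   # toy model: growth from 100 cases
    posterior *= poisson.pmf(cases, expected)   # Bayes' rule, unnormalised
posterior /= posterior.sum()

print("posterior mean growth rate: %.3f" % (rates * posterior).sum())
```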

The UK SPI-M committee then combines the outputs of five to ten different models from many different teams in a systematic manner, producing an ensemble distribution of predictions that is more accurate than any single model.

However, Prof Coveney said this still falls short of a true ensemble approach. ‘The problem is these models, though simple in one way, have lots of moving parts, and have not been seriously validated before. And on top of that, the practitioners do not know how to quantify their model’s uncertainty.

Combining different models seems reassuring but, ideally, you should only do this after you have established reliable – reproducible – measures of the output of each one rather than mix one-off predictions, which will be of limited value.’

HOW ELSE CAN WE IMPROVE COMPUTER FORECASTS?

One way suggested by Prof Coveney is to create more nimble models that focus on the most influential parameters and then finesse them with a technique called Bayesian Optimisation, a method named after the 18th-century Presbyterian minister Thomas Bayes, who devised a systematic way of calculating, from an assumption about the way the world works, how the likelihood of something happening changes as new facts emerge.

This offers a way to infer the validity of hypotheses based on prior knowledge, so one can hone the parameters fed into the model. That approach is now being tried by the Imperial team using simpler models, said Prof Ferguson.
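As a sketch of what Bayesian Optimisation involves (this example assumes the scikit-optimize library and an invented model and data), the optimiser builds a probabilistic picture of the error surface and places each new trial where improvement looks most likely.

```python
# Bayesian Optimisation sketch using scikit-optimize (an assumption of
# this example): tune a toy model's growth parameter to fit invented data.
from skopt import gp_minimize

observed_deaths = [10, 25, 60, 130, 260]        # invented daily figures

def loss(params):
    """Squared error between a toy growth model and the observations."""
    growth = params[0]
    predicted = [5 * (1 + growth) ** (2 * day) for day in range(1, 6)]
    return sum((p - o) ** 2 for p, o in zip(predicted, observed_deaths))

# Each call updates a surrogate of the loss surface (a Gaussian process)
# and picks the next trial by balancing exploration and exploitation.
result = gp_minimize(loss, dimensions=[(0.1, 2.0)], n_calls=30, random_state=0)
print("best-fitting growth parameter:", round(result.x[0], 3))
```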

HAS THE NEW CRITIQUE OF IMPERIAL’S MODEL BEEN PEER REVIEWED?

Not yet. ‘Because these models are being used to justify a second lockdown, I think it is important that they are openly discussed now; this can’t wait for the traditional peer review process used by scientific journals, where a paper is refined or even rejected based on anonymous comments,’ said Prof Coveney. ‘Some scientific disciplines, notably in physics, often publish work as a preprint before peer review to open up discussion.

‘In this case, you can find our study of the Imperial model on a preprint server called Research Square, a move strongly encouraged, for the same reason (to promote public discussion), by my colleagues leading the RAMP initiative at the Royal Society.

‘At the end of the day, I want to contribute to a timely debate about the role of modelling in public policy.’

WHAT IS THE VERDICT?

The Imperial model has also been checked for reproducibility and tested against mortality data by Graeme Ackland and colleagues at the University of Edinburgh.

The team calibrated the model with the known state of the epidemic in March and its predictions corresponded reasonably well with reality, though this falls short of the statistical analysis done by VECMA. They also concluded that “a similar second wave will occur later this year if interventions are fully lifted.”

Given some of the controversy over modelling, Prof Ferguson said that it is important other teams study his model, and he welcomed independent validation.

HOW CAN I FIND OUT MORE?

The latest picture of how far the pandemic has spread can be seen on the Johns Hopkins Coronavirus Resource Center or Robert Koch-Institute website.

You can check the number of UK COVID-19 lab-confirmed cases and deaths along with figures from the Office for National Statistics.

There is much more information in our Coronavirus blog series (including some in German by focusTerra, ETH Zürich, with additional information on Switzerland), from UKRI, the EU, the US Centers for Disease Control and the WHO, on this COVID-19 portal and at Our World in Data.

The Science Museum Group is also collecting objects and ephemera to document this health emergency for future generations.