It turns out that there is a complex backstory to the science involved in “follow the science”.
It involves personality clashes (scientists are not immune) and, perhaps more importantly, a connection to perceived failures to control the foot-and-mouth epidemic in 2001 and swine flu in 2009. Both episodes involved members of the current team of scientists, and both led to official inquiries.
Jonathan Leake, Science Editor, www.thetimes.co.uk
The Royal Society is to create a network of disease modelling groups amid academic concern about the nation’s reliance on a single group of epidemiologists at Imperial College London whose predictions have dominated government policy, including the current lockdown.
It is to bring in modelling experts from fields as diverse as banking, astrophysics and the Met Office to build new mathematical representations of how the coronavirus epidemic is likely to spread across the UK — and how the lockdown can be ended.
The first public signs of academic tensions over Imperial’s domination of the debate came when Sunetra Gupta, professor of theoretical epidemiology at Oxford University, published a paper suggesting that some of Imperial’s key assumptions could be wrong.
Her decision to publish highlighted academic rivalries between the epidemiology groups at Oxford and Imperial. These date back two decades to when Gupta, then a junior researcher at Oxford, lodged a complaint against her head of department, Professor Sir Roy Anderson, which led to his leaving the university. He is now professor of infectious disease epidemiology at Imperial. Other researchers have since raised different concerns, saying Imperial’s modelling, while high quality, needs to be checked and replicated by others.
Mike Cates, who holds the Lucasian professorship of mathematics at Cambridge once held by Stephen Hawking and is leading the Royal Society project, said his concerns were partly that the Imperial team, led by Professor Neil Ferguson, was overloaded with work, but also that its model was originally designed to tackle entirely different illnesses such as flu.
“The Imperial team are very good but these models were optimised for a different purpose which is influenza . . . everyone’s conscious of the fact that it has been rapidly converted from a different purpose and wasn’t originally designed for this type of virus and this type of transmission,” Cates said.
He added: “We need some alternative models because very big decisions are being made based on the [Imperial] models. And that doesn’t mean there’s anything wrong with the Imperial model. It’s just that you can’t have one model, which has in it every possible different set of assumptions.
“With only the one model you don’t know which bits of it you really can trust, and which bits of it are less reliable — because the assumptions in it may have been made years before, in the context of a different disease.”
Such concerns echo those previously raised by Gupta. She said in an interview: “I decided to publish and speak out because the response to this pandemic is having a huge effect on the lives of vulnerable people with a profound cost and it seems irresponsible that we should proceed without considering alternative models. Imperial has a long history of involvement with government and its epidemiological models can have huge importance and translational impact but it’s tricky to use them to forecast what’s going to happen. We need to also consider alternatives.”
Her comments may hint at personal tensions among academic disease modellers, numbering just a few hundred people who know each other and have often worked together — or competed for jobs and grants.
In some cases there is a lot of history. In 1999 Gupta was coming to the end of a five-year fellowship in Oxford’s zoology department and applied for a permanent post, winning the approval of six of the eight-strong selection panel.
One of those who opposed her application was Anderson, her boss at the time. He alleged to other panel members that Gupta, who had worked alongside him for many years, had only got the job because she was having a relationship with another member of the panel. This was untrue and Gupta lodged an official complaint. Anderson sent her a formal letter of retraction and apology. He quit Oxford — moving to Imperial with a team that included Ferguson.
Later, the Wellcome Trust’s Centre for the Epidemiology of Infectious Disease, one of Oxford’s most prestigious institutes, was quietly merged into the medical department.
Until those events Oxford had led the way in epidemiology. It was Anderson’s Oxford group, for example, which modelled the global spread of HIV in the 1980s — warning that it could claim millions of lives. “It was ridiculed by the public health community,” said Mark Woolhouse, professor of infectious disease epidemiology at Edinburgh, who was once a member of Anderson’s Oxford group. “But the Oxford model was right. It showed how mathematical models of diseases can offer insights that public health experts cannot.”
Policy-makers took note. Woolhouse was also working with Anderson when mad cow disease spread from cattle into humans in the 1980s and 1990s and the government asked Oxford to help calculate the scale of the infection. This led to the cull of 4.4 million cattle, which suppressed the disease.
By the time foot-and-mouth disease (FMD) struck in 2001, however, Anderson’s clash with Gupta had seen him move to Imperial. Ferguson, who had once worked closely with Gupta at Oxford, co-authoring papers with her, also moved to Imperial. Oxford was in effect sidelined and it was from Imperial that Ferguson and Anderson dominated the government response to foot-and-mouth.
That response, involving the slaughter of more than 11 million sheep and cattle at a cost of more than £8bn, was based entirely on modelling and remains hugely controversial — with many believing the modellers got it wrong. They were modelling a fast-moving epidemic with little accurate data. A subsequent government inquiry was damning of the general approach and its conclusions may be relevant to the current crisis. It said: “The FMD epidemic in UK in 2001 was the first situation in which models were developed in the ‘heat’ of an epidemic and used to guide control policy . . . analyses of the field data suggest that the culling policy may not have been necessary to control the epidemic, as was suggested by the models produced within the first month of the epidemic. If so it must be concluded that the models supporting this decision were inherently invalid.”
The Imperial modellers’ next big public challenge came eight years later when swine flu swept the world — fortunately killing few Britons because older people tended to be immune and younger ones were strong enough to fight it off. Britain was, however, left with 34 million doses of unused and expensive vaccines. Again there was an inquiry — which concluded that ministers had once again treated modellers as “astrologers”, asking them to provide detailed forecasts when they had too little data.
“Modelling did not provide early answers,” it concluded. “The major difficulty with producing accurate models was the lack of a relatively accurate idea of the total number of cases . . . This is not to reject the use of models, but to understand their limitations: modellers are not ‘court astrologers’.”
The same failure to gather data in this pandemic is highly likely to be part of the public inquiry that must surely follow. The most vital data still missing is the proportion of people who have already been infected — a number that would instantly make the modelling far more reliable, including telling us when the lockdown might end.
“If we had been testing I still think we would have ended up in some form of lockdown,” Ferguson said, “but it might have been a shorter period of time and maybe slightly less intense.”
In the absence of government testing data the modellers can only make predictions hedged with a high degree of uncertainty.
A paper published last week from a group at the London School of Hygiene and Tropical Medicine, led by Nick Davies, warned that lifting the lockdown after 12 weeks would be followed by a surge of cases with between 220,000 and 370,000 extra deaths.
On the other hand, it suggested, imposing repeated lockdowns off and on for the rest of the year could reduce the number of deaths to 130,000 and perhaps as low as 54,000. All the numbers are bad — but they are also incredibly wide-ranging.
Other modellers have drawn similarly dire conclusions. One of them is Osnat Zaretsky of DataClue, a company that has helped Israel, which has seen only 40 deaths so far, draw up a response. He believes Britain’s modellers have grossly underestimated the pandemic and predicts that Britain will see 95,000 deaths by May 1, rising to 288,000 by late June.
“The numbers are extremely alarming — they are doubling every couple of days and this is what our projections are based on,” said Zaretsky, a UK-born Israeli whose research suggests that the UK is not even counting deaths accurately. “There seems to be a vacuum of reliable information in the UK. It’s apparent that many sick people or even ones that passed away showed Covid-19 symptoms but have never been tested. This creates a false sense that the curve and the spread is far lower than they really are. As soon as the UK ramps up testing we’ll see a sharp increase in diagnosed cases.”
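The kind of doubling-based projection Zaretsky describes can be sketched in a few lines. This is an illustration only: the article does not describe DataClue’s actual method, and the function name and figures below are assumptions, not the company’s numbers. A constant doubling time implies pure exponential growth, which in practice holds only in the early, unmitigated phase of an epidemic.

```python
# Illustrative sketch of a constant-doubling-time projection.
# The function and example inputs are hypothetical, not DataClue's model.
def project(initial_count: float, doubling_time_days: float, days_ahead: float) -> float:
    """Project a count forward assuming it doubles every `doubling_time_days`."""
    return initial_count * 2 ** (days_ahead / doubling_time_days)

# A count doubling every 2 days grows 32-fold (2**5) over 10 days:
print(project(1000, 2, 10))  # 32000.0
```

The weakness the article's inquiries highlight is visible even in this toy: the output is only as good as `initial_count`, which is exactly the figure the modellers lacked.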