Government scientists’ 50,000 Covid infections graph based on few hundred cases

[The problem of trying to get on top of events using the best evidence available – very pragmatic “science” – Owl]

The government’s estimate that infections were doubling every seven days was based largely on smaller-scale studies involving only a few hundred cases, rather than on test-and-trace data, amid fears that failings in national community testing meant the spread was being critically underestimated.

Tom Whipple, Science Editor 

Sir Patrick Vallance and Chris Whitty, the chief scientific adviser and the chief medical adviser, said yesterday that at present rates of growth Britain could be looking at 50,000 cases a day by the middle of October.

The projection was based on an assumption that the number of infected people would double each week — a figure that appeared to contradict testing data.

Official figures show that it has taken a fortnight for the epidemic to grow from around 2,000 confirmed cases a day to 4,000.
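The arithmetic behind the competing figures can be sketched briefly. The starting value of 4,000 cases a day and the 25-day horizon below are illustrative, taken from the numbers quoted in the article, not from the government's actual model:

```python
import math

def doubling_time(n0, n1, days):
    """Doubling time implied by growth from n0 to n1 cases a day over `days` days."""
    return days * math.log(2) / math.log(n1 / n0)

def project(n0, days_ahead, td):
    """Cases a day after `days_ahead` days, assuming doubling every `td` days."""
    return n0 * 2 ** (days_ahead / td)

# Testing data: roughly 2,000 to 4,000 confirmed cases a day over a fortnight
print(doubling_time(2000, 4000, 14))   # a 14-day doubling time, not 7

# The briefing's assumption: doubling every 7 days from ~4,000 a day
# reaches the vicinity of 50,000 a day in about three and a half weeks
print(round(project(4000, 25, 7)))
```

On these numbers the testing data imply a 14-day doubling time, half the pace of the seven-day assumption behind the 50,000 projection, which is exactly the apparent contradiction the article describes.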

Graham Medley, from the London School of Hygiene and Tropical Medicine, sits on Spi-M, the modelling committee of the government’s Scientific Advisory Group on Emergencies. He said that the group had realised its best estimates of doubling times were out of date and had become worried that the epidemic was gathering pace.

“The estimates from Spi-M are ten to 20 days’ doubling time, but these are largely based on data from two to three weeks ago,” he said. “The concern was that the more recent doubling time is shorter. There was also concern that the problems with testing meant that the data were not particularly reliable.”

A spokeswoman for Sir Patrick said that the seven-day estimate had instead been based heavily on the findings of the weekly survey of the Office for National Statistics, and a similar less-frequent survey called React-1, run by Imperial College London.

These studies both test a random sample of more than 100,000 people to track the progress of the virus. Because the virus is still at low levels, however, it involves making projections on the basis of small numbers of positive cases. In its latest study the React-1 team sampled 153,000 people and found 136 cases, the last on September 7.

On the basis of the change in the proportion of positives over the period they were sampling, they estimated a seven-day doubling time.
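The same relationship used for case counts applies to prevalence in a random sample: if the proportion testing positive doubles over the sampling window, the implied doubling time equals the window length. The prevalence figures below are hypothetical, chosen only to illustrate the calculation, not taken from React-1:

```python
import math

def doubling_from_prevalence(p0, p1, days):
    """Doubling time implied by prevalence moving from p0 to p1 over `days` days."""
    return days * math.log(2) / math.log(p1 / p0)

# Hypothetical: swab positivity rising from 0.05% to 0.1% across a
# seven-day sampling window implies a seven-day doubling time
print(doubling_from_prevalence(0.0005, 0.0010, 7))
```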

Steven Riley, from Imperial College, said that having several different sources of data was crucial, particularly if one was suffering from problems.

“The very well reported issues in the test and trace system mean that the proportion of infections that are picked up over time might not be constant,” he said.

“Studies like ONS and React are providing timely data that is an alternative source to the test and trace data. There is a lot of value in having these parallel sources.” However, he acknowledged that there was an inherent uncertainty. “In the end it’s 136 positives. It’s the positives that give you the information. So it’s not perfect, and when you’re estimating from 136 observations you have to make sure you give an accurate sense of the uncertainty.”
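A rough sense of that uncertainty: treating the 136 positives as a Poisson count, the relative standard error on the raw count alone is about 1/√136, before any modelling of the trend is layered on top:

```python
import math

positives = 136  # positives found by React-1 in a 153,000-person sample
rel_se = 1 / math.sqrt(positives)
print(f"relative uncertainty on the count: about {rel_se:.1%}")
```

That is roughly a nine per cent margin on the count itself, and the uncertainty on a doubling time estimated from the change in that count over time is correspondingly wider.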

Ewan Birney, deputy director general of the European Molecular Biology Laboratory, said that the nature of a pandemic made it inevitable that decisions were made on the basis of imperfect data.

“In this epidemic there is a lag until we start to see hospitalisation data and death data from infections. That is not a fault of measurement; it’s biology. There’s no way of improving it.

“There are a variety of sources that the government will use to show that now is the time for action,” he said.

“Imperial’s React study and the ONS study are both really good.”

He said that, despite their low numbers of positive cases, the studies would be crucial sources of information even if community testing were working perfectly, because they sample the community at random rather than relying on people to volunteer.

“It is critical to have as unbiased data as possible. Big numbers won’t solve your bias problem — that’s why we have these studies.”