Sanjay Gupta and CNN have recently highlighted Columbia University's influenza prediction system. Gupta, in his interview, also notes Google Flu Trends. But neither of these capabilities is validating very well. Here is a screenshot of the historical forecasts for Atlanta, the home of CNN:
Now, before we begin to probe this figure, we want to acknowledge up-front that Dr. Shaman and Columbia University are posting this openly for review by the public. They should be commended for their complete transparency.
But if CNN uses this information in its broadcasts, it needs to be sure its viewers note the following:
- None of the forecasts issued were accurate (note the lighter-shaded epi curves issued at the different time points).
- Estimated accuracy (see figure) at the level of the epidemiological week is currently 2/3, meaning there is currently a 2/3 chance that epidemic activity in Atlanta will peak during Week 53.
- Estimated accuracy at the level of a daily forecast is just above that of a coin toss, meaning the forecast is not worthwhile from an operations perspective.
But so what?
As a mechanism of stimulating public engagement in health education, one might argue that such technologies are extremely helpful. Questions of accuracy and precision are not necessarily meaningful so long as the public is stimulated to consider influenza vaccination. This would be a good outcome and helpful to the country's public health.
But forecasts also carry a responsibility to guide action in a manner that educates operational users about the caveats of reliability. Take, for example, CDC's September forecast of Ebola cases reaching between 550,000 and 1.4 million in Sierra Leone and Liberia by January, which did not validate. Fortunately, these forecasts were updated on 10 December in Time magazine.
But we are talking about more than a 30-fold difference between what was forecasted and what was actually observed, according to the World Health Organization. We arrive at this number by dividing the low end of CDC's original forecast of 550,000 cases for Sierra Leone and Liberia by the sum of cases reported for both countries, which is 17,464 as of 31 December. It is notable that, according to Time magazine, the 10 December CDC forecast indicates we will see 53,000 cases in Sierra Leone and Liberia by 20 January. It will be interesting to see whether this projection validates, as it is essentially a forecasted 3-fold increase in cases over the next 20 days from where we are as of 31 December.
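The arithmetic behind both figures can be checked directly. A minimal sketch, using only the case counts already cited above (the CDC forecasts and the WHO count as of 31 December):

```python
# Fold difference between CDC's original low-end forecast and observed cases
original_low_forecast = 550_000   # CDC September forecast, low end (Sierra Leone + Liberia)
observed_cases = 17_464           # WHO-reported total for both countries, 31 December

fold_difference = original_low_forecast / observed_cases
print(f"Forecast vs. observed: {fold_difference:.1f}-fold")  # -> 31.5-fold

# Implied growth under the revised 10 December forecast
revised_forecast = 53_000         # cases projected by 20 January (per Time magazine)
implied_increase = revised_forecast / observed_cases
print(f"Implied increase by 20 January: {implied_increase:.1f}-fold")  # -> 3.0-fold
```

The low end of the original forecast thus overshoots the observed count by a factor of about 31, and the revised projection implies roughly a tripling of cases in 20 days.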
While the grim forecast was always presented as a worst-case scenario, looking at predictions by country can provide a metric of the impact of intervention. In both Liberia and Sierra Leone, the latest reports from the World Health Organization reveal different outcomes than expected.
We propose that a 30-fold difference between forecast and observed cases is less likely the result of the international response effort and more likely an issue with the initial model's inputs and assumptions.
Here is a forecast of this season's influenza activity, which was shared with 300,000 of America's practicing physicians. Additional information was shared internally with this social network in September: a warning of the potential for a vaccine mismatch with influenza type A/H3N2. This critical piece of epidemic intelligence led to a robust conversation in the ensuing months, in which anticipatory information was shared, such as comparisons of the current season's activity to the non-mismatch season of 2012-13. These outputs of anticipatory information have subsequently validated. Most importantly, it was the epidemic intelligence that stimulated intellectual engagement among the physicians.
Here we are highlighting the differences between qualitative forecasts issued by an experienced analyst-physician and quantitative forecasts. The key to moving forward productively is integrating the two in a validated, meaningful way that truly influences patient behavior for the betterment of public health.