Webster defines "forecast" as:
a : to calculate or predict (some future event or condition) usually as a result of study and analysis of available pertinent data; especially : to predict (weather conditions) on the basis of correlated meteorological observations
We are simple people who like simple definitions. And we simply provide information about disease activity before it happens, followed by advisories that validate what we expressed beforehand.
Over the last couple of months we updated our forecast validation process, in which we examined forecasts issued for 77 infectious diseases reported by authorities in the UK, Germany, Hong Kong, the PRC, Singapore, Taiwan, and the USA. We had issued forecasts with 30- and 60-day pre-event windows. The subsequent validation statistics, across 275 observation points, were an overall Critical Success Index (CSI) of 93% and a false alarm rate of 7%.
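For readers who want the arithmetic behind those two scores, here is a minimal sketch in Python. The hit/miss/false-alarm split below is hypothetical (chosen only so the totals match the figures above), and we read "false alarm rate" as the standard false alarm ratio:

```python
# A minimal sketch of the forecast-verification scores cited above, computed
# from hit/miss/false-alarm counts. The counts are hypothetical placeholders
# that reproduce the published figures; they are not our actual validation data.

def critical_success_index(hits: int, misses: int, false_alarms: int) -> float:
    """CSI (threat score) = hits / (hits + misses + false alarms)."""
    return hits / (hits + misses + false_alarms)

def false_alarm_ratio(hits: int, false_alarms: int) -> float:
    """FAR = false alarms / (hits + false alarms).

    We interpret the post's "false alarm rate" as this ratio, which is the
    complement of CSI whenever there are no misses.
    """
    return false_alarms / (hits + false_alarms)

# Hypothetical split of the 275 observation points:
hits, misses, false_alarms = 256, 0, 19

print(f"CSI: {critical_success_index(hits, misses, false_alarms):.0%}")  # 93%
print(f"FAR: {false_alarm_ratio(hits, false_alarms):.0%}")               # 7%
```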
... And inside the United States we have been forecasting approximately 200 infectious diseases at the local level with clinical validation.
Academia, however, appears to disagree. Apparently, the National Infectious Disease Forecast Center does not exist:
Disease Prediction Models and Operational Readiness
PLoS ONE, published March 19, 2014. DOI: 10.1371/journal.pone.0091989
By Courtney D. Corley, Laura L. Pullum, David M. Hartley, Corey Benedum, Christine Noonan, Peter M. Rabinowitz, Mary J. Lancaster.
The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. We define a disease event to be a biological event with focus on the One Health paradigm. These events are characterized by evidence of infection and/or disease condition. We reviewed models that attempted to predict a disease event, not merely its transmission dynamics, and we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). We searched commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. After removal of duplications and extraneous material, a core collection of 6,524 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. As a result, we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models, classified as one or more of the following: event prediction (4), spatial (26), ecological niche (28), diagnostic or clinical (6), spread or response (9), and reviews (3). The model parameters (e.g., etiology, climatic, spatial, cultural) and data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) were recorded and reviewed. A component of this review is the identification of verification and validation (V&V) methods applied to each model, if any V&V method was reported. All models were classified as either having undergone Some Verification or Validation method, or No Verification or Validation. We close by outlining an initial set of operational readiness level guidelines for disease prediction models based upon established Technology Readiness Level definitions.
Here the authors assert the following:
- There also is a true lack of implementation of such models in routine surveillance and control activities; as a result there is not an active effort to build and improve capacity for such model implementation in the future.
- Multiple searches were conducted in bibliographic databases covering the broad areas of medicine, physical and life sciences, the physical environment, government and security. There were no restrictions placed on publication date or language of publication. Abstracts and citations of journal articles, books, books in a series, book sections or chapters, edited books, theses and dissertations, conference proceedings and abstracts, and technical reports containing the keywords and phrases were reviewed. The publication date of search results returned are bound by the dates of coverage of each database and the date in which the search was performed, however all searching was completed by December 31, 2010. The databases queried resulted in 12,152 citations being collected. Irrelevant citations on the topic of sexually transmitted diseases, cancer and diabetes were retrieved. We de-duplicated and removed extraneous studies resulting in a collection of 6,503 publications. We also collected 13,767 web documents based on Google queries, often referred to as Google harvesting. We down selected the web documents for theses and dissertations, reducing this number to 21.
- In the majority of the models we examined, few aspects of V&V were reported. Although many models underwent some level of V&V, few if any demonstrated validation, and thus readiness, in a general sense that would find credibility with operational users.
We find this paper so very interesting given the following:
- The existence of a fully operational forecast center inside the United States, which has now been running for several years, was not acknowledged.
- The existence of a Haiti-based forecast station that played a key role in saving Haitian lives during the cholera disaster, and a direct role in determining attribution, was not acknowledged.
- Several rounds of successful influenza forecasts of significance to this country: not acknowledged.
- Several hundred disease-location-time-period triads, now openly validated for the public's review: not acknowledged.
We are fortunate that we are the ones who determine operational relevance in our environment. Good thing 200,000 practicing physicians here in the United States depend on our forecasts.
We raise the uncomfortable question of whether this study reflects an accurate picture for those who funded it. While it is understandable that the academic research community would wish to perpetuate the false impression that operational forecasting does not exist... and therefore requires millions more in funding to develop... here we might point out the rather large elephant in the room: an operational capability in this space that indeed:
- Is being validated (and never will stop)
- Is at TRL 9 in the commercial environment, with paying clients who depend on our information
- And continues to grow at a steady pace
But it would seem the US government has also bought into this to some degree, advising us today:
"[Your capability] duplicates inherent functionally currently provided by the [redacted US government capability]."
If this is true, we might suggest the US government make this capability visible to every physician in America, as we have in our routine communications with our country's largest physician networks. Because certainly, we would want such capabilities (paid for by our tax dollars) to protect us and our healthcare providers.
Indeed, why then waste our time here at the National Infectious Disease Forecast Center?