Dialogue Between a Meteorologist [M] and an Operational Biosurveillance Analyst [OBA]:
[M] A forecast is a prediction of an event before it begins. Once the event is underway at a given location (e.g., a hurricane making landfall), updated predictions are called "nowcasts." For example, the initial wind-speed forecast for Hurricane Sandy at Battery Park might have been 60 mph. If the storm has already started and the prediction is raised to 70 mph, that is a nowcast.
[OBA] Many in public health claim forecasting cannot be done. This is simply not true for a wide variety of infectious disease conditions. Indeed, we are discovering a multitude of health outcomes only peripherally related to infectious disease that can be forecast as well. Practicing physicians have long captured this in the classic discourse between physician and patient: for example, "don't smoke or you may develop lung cancer." The meteorological notion of a "nowcast" has been practiced routinely for several years now at the National Infectious Disease Forecast Center, as well as in some of our engagements overseas, such as Haiti. Public health agencies also perform "nowcasts," though at nowhere near the depth of coverage we can now provide via direct linkages to clinical medical data (itself a major source of public health advisories). Public health advisories are but the barest subset of what is observed daily in clinical medicine that is relevant to the Public Interest.
[M] Climatology represents the long-term averages (in meteorology, usually 30 years or more). Climatology tells us that, in NYC, January, on the whole, will be colder than July. Persistence is the tendency of tomorrow's weather to be like today's. A forecast is considered "skillful" only if it beats both climatology and persistence. If it is sunny and 75° today and exactly the same thing occurs tomorrow, even a perfect forecast is not "skillful," because a simple persistence forecast would have been just as accurate.
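To make the skill criterion concrete, here is a minimal sketch in Python of a mean-squared-error skill score measured against both reference forecasts. All numbers are illustrative; the dialogue supplies no data.

```python
# Minimal sketch: MSE-based skill score against two reference forecasts.
import numpy as np

def mse(forecast, observed):
    return float(np.mean((np.asarray(forecast) - np.asarray(observed)) ** 2))

def skill_score(forecast, reference, observed):
    """1.0 = perfect, 0.0 = no better than the reference, <0 = worse."""
    return 1.0 - mse(forecast, observed) / mse(reference, observed)

observed    = [62, 65, 71, 70, 68]   # daily highs, deg F (illustrative)
climatology = [68, 68, 68, 68, 68]   # long-term average for these dates
persistence = [60, 62, 65, 71, 70]   # yesterday's value carried forward
forecast    = [63, 66, 70, 71, 67]   # the forecast being verified

print(skill_score(forecast, climatology, observed))  # skill vs climatology
print(skill_score(forecast, persistence, observed))  # skill vs persistence
# A forecast counts as "skillful" only if both scores exceed zero.
# Edge case from the dialogue: if today's weather repeats exactly,
# mse(persistence, observed) is zero and no forecast can show skill.
```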
[OBA] Excellent point. However, what is considered state of the art in meteorology is not so in operational biosurveillance. Indeed, the vast majority of healthcare providers operate in reactive mode, addressing disease one patient at a time and deciding whether or not to treat with antibiotics. With the exception of influenza, most physicians have no idea what their local baselines are for rhinovirus, metapneumovirus, Salmonella, and so on; in other words, they lack a comprehensive understanding of their own local baseline. This unfortunately extends to local antibiotic resistance patterns, where we are clearly losing several important battles in our communities.
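A local baseline need not be sophisticated. Here is a minimal sketch of what one might look like; the counts are hypothetical and the threshold rule (mean plus two standard deviations) is just one common convention, not anything prescribed in the dialogue.

```python
# Sketch of a local pathogen baseline: a historical expectation for
# weekly case counts, with a simple statistical alert threshold.
from statistics import mean, stdev

def baseline(history, z=2.0):
    """Return (expected, alert_threshold) from prior seasons' weekly counts."""
    mu, sigma = mean(history), stdev(history)
    return mu, mu + z * sigma

# Hypothetical rhinovirus counts for the same calendar week in prior years:
prior_years = [14, 11, 17, 13, 15]
expected, threshold = baseline(prior_years)

this_week = 26
print(f"expected ~{expected:.0f}, alert above {threshold:.0f}")
if this_week > threshold:
    print("above local baseline: worth a closer look")
```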
So, the current state of the art in operational biosurveillance is a few exceptions representative of "skilled" forecasts (e.g., influenza vaccine drift) and a greater majority of "unskilled" ones. However, what is "unskilled" by the strict meteorological definition is considered utterly "skilled" by the average healthcare provider and the public, because they are unaware of their baselines in the first place.

[M] We have previously discussed adopting the Watch / Warning terminology used by meteorology when a forecastable major adverse health event presents itself. If I recall correctly, the National Weather Service issues watches at longer lead times (roughly two days out) with a 30% threshold; in other words, if the forecaster believes there is a one-in-three chance of occurrence, a Watch is issued. As the start of the event draws closer, a Warning is issued when confidence reaches 70%, at roughly 24 hours out.
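That rule translates almost directly into code. In this sketch, only the thresholds (30%/70%) and lead times (~48h/~24h) come from the discussion above; the function name and structure are invented for illustration.

```python
# Sketch of the Watch/Warning decision rule described above.
def advisory(probability: float, hours_until_onset: float) -> str:
    """Map a forecast probability and lead time to an advisory level."""
    if probability >= 0.70 and hours_until_onset <= 24:
        return "WARNING"   # high confidence, onset imminent
    if probability >= 0.30 and hours_until_onset <= 48:
        return "WATCH"     # one-in-three chance, ~two days out
    return "NONE"

print(advisory(0.35, 40))  # WATCH
print(advisory(0.80, 12))  # WARNING
print(advisory(0.20, 36))  # NONE
```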
An issue we will have to consider as our forecasts are widely adopted is: to what extent did our forecast cause a different outcome than would have occurred naturally? Triple-digit tornado death tolls were relatively common before the tornado warning program was created. They are exceedingly rare now (even with a much larger population) because of the warning system. The forecasts have changed the outcomes: fewer deaths than would occur without the warning system.
Assume we issue an "Influenza Watch." Our watch then prompts a proactive and effective vaccination program, which may have the effect of suppressing the number of cases needed to "verify" (another word would be "validate") our forecast. In other words, if we forecast that 10% of the population will contract influenza, a vaccination program follows, and the actual number turns out to be 6%, was it because we made an inaccurate forecast, or because of the vaccination program?
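A toy back-of-the-envelope shows how the intervention confounds verification. The 10% and 6% figures come from the example above; the coverage and effectiveness numbers are entirely hypothetical, and a real analysis would need proper causal methods.

```python
# Toy illustration: the forecast may have triggered an intervention
# that changed the very outcome being verified.
forecast_attack_rate = 0.10   # forecast: 10% of the population infected
observed_attack_rate = 0.06   # observed after the vaccination campaign

extra_coverage = 0.60         # hypothetical added vaccine uptake from the Watch
vaccine_effectiveness = 0.60  # hypothetical VE against infection

# Crude counterfactual: infections averted among the newly vaccinated.
averted = forecast_attack_rate * extra_coverage * vaccine_effectiveness
counterfactual = observed_attack_rate + averted

print(f"averted ~ {averted:.1%}, counterfactual rate ~ {counterfactual:.1%}")
# If the counterfactual rate is close to the forecast, the apparent "miss"
# may be the forecast working as intended, not a forecast error.
```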
[OBA] This is a crucial point. Anecdotally, we have observed patients flocking to our pediatric practice requesting influenza vaccination because 1) we issued advisories to them months in advance of the forecasted drift season; 2) the bulk of our patient families support influenza vaccination; and 3) the community is small enough that "everyone knows each other," so families share stories of children who were unvaccinated and later hospitalized with influenza. So the pre-event forecast (i.e., issued well ahead of the first confirmed influenza case in the county) began the community orientation process, followed by discussion after discussion, family by family, in the clinic as they asked about the forecast, and then by that single case of a hospitalized unvaccinated child providing both "proof of threat" and "proof of preventive action." We then issued "nowcasts" updating the public on what we were seeing in the hospital: the hospitalized were almost universally unvaccinated. We observed several "vaccine refusers" reconsider their position in the context of this powerful local community influence.
All of the above said, operational validation does not equate to academic validation, and we readily acknowledge that validation is a tremendous challenge when prognosticating disease. Conversely, we have often pointed out that academic studies and peer review do not often translate into the reality of daily clinical medicine. There is tremendous challenge here.
Only by building years of operational experience will we have the data necessary to address it.