Wednesday, January 11, 2012

Are Spatiotemporal Methods Useful for Early Detection?


To answer the question in the title, let us start with a quotation from Fricker [1]: “Returning to the original question of whether statistical methods are useful for early event detection, I suggest that we really don’t know yet. That is, whether the systems and their associated detection algorithms can be modified so that they appropriately minimize false positive signals while maintaining sufficient sensitivity to actual outbreaks is still an open question”. This quotation was also cited in our post “Biosurveillance in Crisis” of December 2011. It refers to the purely temporal approach implemented in most current syndromic surveillance systems by means of statistical process control (SPC) methods. The temporal, univariate methodology is well developed, widely used, and technically much simpler than any of the existing spatiotemporal approaches. Spatiotemporal surveillance is, in fact, a generalization of purely temporal surveillance, and as such it inherits all the challenges of the latter. In addition, spatiotemporal methods have theoretical and practical challenges of their own. Therefore, whether statistical methods are useful for early event detection within spatiotemporal biosurveillance remains an open question to an even greater extent than for temporal surveillance. Thus, as far as early detection is concerned, spatiotemporal methods are unlikely to provide any advantages over temporal ones.

We have come to the above conclusion merely by comparison. The more important argument is as follows. In November 2011, the CDC overhauled its nationwide biosurveillance program, BioSense (see [2]). One of the most important components of this overhaul is giving more power and initiative to local jurisdictions: they now have ownership of the data and perform the earliest checking of it. “Local and state health departments have the best relationship with providers. They understand the context in which an event has happened, and they understand their population more than anybody else. So if we can make sure they have ownership of that data and the initial vetting of it is there, that would be the basis to truly start stitching a regional and national picture,” said Taha Kass-Hout, the CDC’s deputy director for information science and program manager for BioSense. Kass-Hout also said: “BioSense will help the community ‘open for business’. That is, any health department in the country could ask their providers to share healthcare information with them in a meaningful ready to use environment. That will remove a lot of the barriers from the providers as well as the health departments.”

With this new data-sharing approach, in which local health departments – not the CDC – maintain ownership of their data in the form of daily counts of visits to local providers (emergency departments of city hospitals, affiliated clinics, and doctor offices), and with a new understanding of who is responsible for the early analysis of these data and, eventually, for early decision making and response, it becomes clear that this new BioSense environment is ideal for purely temporal approaches applied at a very local, city level.

Now let us briefly describe how SaTScan, a typical and one of the most commonly used spatiotemporal methods, works:

1.      A global region designated for surveillance (it could be the whole country or just a large geographical region, etc.) is subdivided into sub-regions.

2.      For each sub-region and a specified syndrome, data are collected, typically in the form of visit counts over a baseline period comprising the most recent days.

3.      SaTScan then searches for statistically significant clusters by comparing the counts in a given geographical area with those in its neighboring areas. The algorithm computes a likelihood ratio-based scan statistic and uses randomization to obtain p values (see, for example, [3]); a minimal computational sketch of this step is given after the list.

4.      More details about SaTScan and its practical problems, such as performance evaluation and computational time, can be found in [3]–[5].
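
To make step 3 concrete, the sketch below implements a bare-bones Kulldorff-type Poisson scan statistic with Monte Carlo randomization for the p value. It is our own illustration, not SaTScan code: the function names, the way candidate zones are passed in as index sets, and the toy example counts are all assumptions, and the real software additionally handles window construction over space and time, covariate adjustment, and much more (see [3]–[5]).

```python
# A bare-bones Kulldorff-type Poisson scan statistic (our own illustration, not SaTScan
# code). Candidate zones are supplied as tuples of sub-region indices; SaTScan itself
# builds them from moving circular/cylindrical windows over space and time.
import numpy as np

def poisson_llr(c, e, C):
    """Log-likelihood ratio of a zone with observed count c and expected count e,
    given the total count C over the whole region (Poisson model)."""
    if c <= e:                          # only an excess over expectation counts as a cluster
        return 0.0
    inside = c * np.log(c / e)
    outside = (C - c) * np.log((C - c) / (C - e)) if c < C else 0.0
    return inside + outside

def scan(counts, baseline, zones, n_sim=999, seed=0):
    """Return the highest-scoring zone, its LLR, and a Monte Carlo p value."""
    counts = np.asarray(counts, float)
    expected = np.asarray(baseline, float)
    C = counts.sum()
    expected = expected / expected.sum() * C        # scale expectations to sum to the total count

    llrs = [poisson_llr(counts[list(z)].sum(), expected[list(z)].sum(), C) for z in zones]
    best = int(np.argmax(llrs))

    # Randomization: redistribute the C cases over sub-regions in proportion to the
    # expected counts and record the maximum LLR of each replicate.
    rng = np.random.default_rng(seed)
    exceed = 0
    for _ in range(n_sim):
        sim = rng.multinomial(int(C), expected / C)
        sim_max = max(poisson_llr(sim[list(z)].sum(), expected[list(z)].sum(), C) for z in zones)
        exceed += sim_max >= llrs[best]
    return zones[best], llrs[best], (exceed + 1) / (n_sim + 1)

# Toy example: six sub-regions; the zone (2, 3) has an excess of visits over its baseline.
zones = [(0,), (1,), (2,), (3,), (4,), (5,), (0, 1), (2, 3), (4, 5)]
print(scan(counts=[5, 4, 18, 15, 6, 5], baseline=[6, 6, 8, 8, 6, 6], zones=zones))
```

Even this toy version makes the global character of the method visible: every candidate zone is evaluated against the counts of the entire surveillance region, and the randomization redistributes cases over all sub-regions at once.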

Without going into technicalities, it is easy to see that SaTScan is a global method by design and by implementation, and as such it falls outside the redesigned BioSense program, with its emphasis on local data ownership, local early analysis, and local decision making and response. As Burkom says in [6]: “I entered a recent project anticipating an application of scan statistics, but in the course of requirements and data analysis and give-and-take among the lead epidemiologist, implementers, and developers, we adopted a solution based on Bonferroni-limited multiple adaptive control charts”.

Thus, it has to be acknowledged that SaTScan, as a typical spatiotemporal biosurveillance method, can hardly be useful for early outbreak detection. As for situational awareness, it has to be based on some ability to predict the future development of the outbreak, which in turn should be based on a theoretical, epidemiological model. Since there is no epidemiological component in the SaTScan methodology, it is unlikely that this approach could be helpful for situational awareness either. Most probably, SaTScan can be more successful in static situations or slowly developing processes, such as the geographical distribution of cancer, diabetes, or liver disease, and also in non-health-related applications: history, astronomy, and demography, among others (for more details, see [7]).

References

[1] Fricker, R. D. (2011a). Some methodological issues in biosurveillance. Statistics in Medicine, [full text]
[2] Goth, G. (2011). A new age of biosurveillance is upon us. http://www.govhealthit.com/news/new-age-biosurveillance-upon-us?page=0,1
[3] Shmueli, G. and Burkom, H. S. (2010). Statistical challenges facing early outbreak detection in biosurveillance. Technometrics, 52(1), pp. 39-51.                                                                                                            
[4] Fricker, R. D. (2010). Biosurveillance: detecting, tracking, and mitigating the effects of natural disease and bioterrorism.
[5] Fraker, S. E. (2007). Evaluation of Scan Methods Used in the Monitoring of Public Health Surveillance Data (Dissertation). http://scholar.lib.vt.edu/theses/available/etd-11092007-111843/unrestricted/SEF-EDT.pdf
[6] Burkom, H. S. (2011). Comments on ‘Some methodological issues in biosurveillance’. Statistics in Medicine, 30, pp. 426–429.
[7] SaTScan Bibliography (2011). http://www.satscan.org/references.html

Sunday, January 1, 2012

Epidemiological Significance vs. Statistical Significance



In Fricker (2011a), the author asks whether statistical methods are useful for early event detection, and his answer is that he really does not know yet. Why so? First of all, because of the sequential nature of early detection, such fundamental concepts as significance level, power, specificity, and sensitivity cannot be used directly, without nontrivial modification; they are useful only for a fixed sample (Fricker, 2011b). Secondly, biosurveillance data are usually autocorrelated, and even if such autocorrelation can be removed via modeling, the signaling statistics of early detection methods that use historical data in a moving baseline are still strongly autocorrelated. As a result, specificity and sensitivity are again difficult to interpret.

Our approach to early detection is fundamentally different from the conventional ones. The mainstream approaches are based on removing autocorrelation from the time series of daily counts by means of ad hoc regression methods and then applying statistical process control (SPC) charts to the regression residuals; note that SPC charts were originally designed to work with uncorrelated data. The mainstream biosurveillance community, in effect, treats autocorrelation as a nuisance. On the contrary, in our approach autocorrelation is a major player: our only key parameter is the first-order autocorrelation coefficient, which is related in a very simple way to the major epidemiological parameters, such as the infection and recovery rates and the basic reproduction ratio R0 (see [3] and also our previous post “Epidemiological Surveillance: How It Works”).
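
For contrast, here is a schematic sketch of the mainstream pipeline just described: regression to strip systematic structure from the daily counts, followed by an SPC chart on the residuals. It is our own simplification, not any particular system's code; the function names, the ordinary least-squares model with trend and day-of-week terms, the one-sided CUSUM with k = 0.5 and h = 4, and the simulated example are all assumptions.

```python
# A schematic sketch of the mainstream pipeline (our own simplification, with assumed
# function names): remove systematic structure from the daily counts by an ordinary
# least-squares regression on trend and day-of-week terms, then apply an SPC chart
# (here a one-sided CUSUM) to the residuals.
import numpy as np

def regression_residuals(counts):
    """OLS fit of daily counts on a linear trend and day-of-week indicators; returns residuals."""
    y = np.asarray(counts, float)
    n = len(y)
    day = np.arange(n)
    dow = np.eye(7)[day % 7]                                # day-of-week dummy variables
    X = np.column_stack([np.ones(n), day, dow[:, 1:]])      # drop one dummy to avoid collinearity
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def cusum_alarms(residuals, k=0.5, h=4.0):
    """One-sided CUSUM of standardized residuals; returns the indices of alarm days."""
    z = (residuals - residuals.mean()) / residuals.std(ddof=1)
    s, alarms = 0.0, []
    for i, zi in enumerate(z):
        s = max(0.0, s + zi - k)        # accumulate upward deviations beyond the allowance k
        if s > h:
            alarms.append(i)
            s = 0.0                     # restart the chart after an alarm
    return alarms

# Example on simulated counts: a flat Poisson baseline with a step increase injected at day 50.
rng = np.random.default_rng(1)
counts = rng.poisson(20, size=60)
counts[50:] += 15
print(cusum_alarms(regression_residuals(counts)))
```

Note that the residuals of such a fit remain autocorrelated whenever the daily counts are, which is exactly the interpretability problem raised above.
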
Since statistical inference methods for AR(1) processes, including parameter estimation, confidence interval construction, and hypothesis testing, are well developed and easily available, it would seem that they could be successfully applied to the problem of early detection and early situational awareness, but this is not the case.
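
To see why, here is a back-of-the-envelope sketch (our own illustration; the function name and the use of the textbook large-sample variance (1 - phi^2)/n are assumptions, and that approximation is itself rough for baselines this short): the usual asymptotic confidence interval for the AR(1) coefficient is very wide when only 7-14 daily counts are available.

```python
# A back-of-the-envelope sketch (our own illustration): the usual large-sample 95%
# confidence interval for the AR(1) coefficient, phi_hat +/- 1.96*sqrt((1 - phi_hat^2)/n),
# is very wide for baselines of 7-14 days, which is why off-the-shelf AR(1) inference
# is of little direct use for early detection.
from math import sqrt

def ar1_asymptotic_ci(phi_hat, n, z=1.96):
    """Approximate 95% CI for the AR(1) coefficient, based on its asymptotic variance (1 - phi^2)/n."""
    half = z * sqrt(max(0.0, 1.0 - phi_hat ** 2) / n)
    return phi_hat - half, phi_hat + half

for n in (7, 14, 100):
    lo, hi = ar1_asymptotic_ci(0.5, n)
    print(f"n = {n:>3}: 95% CI for phi ~ ({lo:+.2f}, {hi:+.2f})")
```

With so few observations the interval covers much of the admissible range, in line with the point made in [3] and discussed below.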

Note also that in the mainstream approaches, early detection and situational awareness are to some extent disconnected from each other; they are treated as completely separate problems. Even if we have detected an outbreak, for situational awareness we have to start from scratch, since we have no information about the further development of the outbreak. In our approach, we estimate only one parameter, the first-order autoregression coefficient in the AR(1) approximation of the SIR model, and we are able not only to decide whether the outbreak has already started, but also to get preliminary estimates of what we need for effective response and consequence management.

To the criticism expressed in [1], [2], and [4] regarding the usefulness of such fundamental statistical concepts as statistical significance, p-values, sensitivity, specificity, etc., for early detection, we can add some skepticism of our own. It is shown in [3] that both confidence intervals and hypothesis testing at the 0.05 or 0.10 significance level are impractical for early detection purposes if we work with a typical sample size (baseline) of 7-14 days. For example, a hypothetical influenza epidemic as strong as the Spanish flu cannot be detected within 7-14 days at the 0.05 or 0.10 significance level. This is not a surprise, because statistical significance depends mostly on the sample size: in very large samples, even very small effects will be significant, whereas in very small samples even very large effects cannot be considered significant. See, for instance, the data below, borrowed from Table 13 of the classical book of statistical tables [5], with some linear interpolation:
     
Critical Values of the Correlation Coefficient r
for Rejecting the Null Hypothesis (r = 0)
at the .05 Level, Given Sample Size n

              n              r
      ------------------------------------------
              5          0.878
              7          0.755 (interpolated)
             10          0.632
             15          0.538 (interpolated)
             20          0.444
             50          0.276
            ...          ...
         10,000          0.0196

According to a common rule of thumb (see [6]), r = 0.5 is considered a large effect, yet it cannot be distinguished from the null hypothesis r = 0 with a sample size of n = 15 at the 0.05 significance level, since the critical value is 0.538. At the same time, a negligible correlation of r = 0.02 is statistically significant with n = 10,000.
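
For readers who prefer to recompute rather than look up tables, the non-interpolated entries above follow from the standard t-transform of the sample correlation coefficient: r_crit = t / sqrt(t^2 + n - 2), where t is the upper 2.5% point of Student's t distribution with n - 2 degrees of freedom. The short sketch below is our own illustration (the function name and the use of SciPy are assumptions, not anything taken from [5]); minor discrepancies in the last digit reflect rounding and interpolation in the table.

```python
# Our own illustration (not from [5]): the two-sided 5% critical value of the sample
# correlation coefficient r under H0: r = 0 follows from t = r*sqrt(n-2)/sqrt(1-r^2),
# which gives r_crit = t_crit / sqrt(t_crit^2 + n - 2).
from math import sqrt
from scipy.stats import t

def critical_r(n, alpha=0.05):
    """Two-sided critical value of Pearson's r for testing H0: r = 0 with sample size n."""
    df = n - 2
    t_crit = t.ppf(1 - alpha / 2, df)        # upper alpha/2 quantile of Student's t
    return t_crit / sqrt(t_crit ** 2 + df)

for n in (5, 10, 20, 50, 10_000):
    print(f"n = {n:>6,}: r_crit = {critical_r(n):.4f}")
```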

Thus, the early detection goal cannot be achieved with a sample size as small as 7-14 days at any acceptable significance level. Instead, we propose to use the concept of practical, epidemiological significance. What really matters is estimating the magnitude of effects, not testing whether they are zero. In our case, the effect is assessed by the parameter R0, the basic reproduction ratio of the SIR model, and by the first-order autoregression coefficient in the AR(1) approximation of the SIR model, which is related to R0. In [3], the following combined early detection and early situational awareness strategy has been proposed (a minimal computational sketch follows the list):

(1)   Every day we estimate the first-order autoregression coefficient from a moving baseline (7-day to 14-day);
(2)   Using the very simple relationship between the autoregression coefficient and R0, we actually estimate R0 (below we use the same notation for the parameter R0 and its estimate);
(3)   We then compare this estimate with the known critical values for seasonal influenza (1.5 ≤ R0 ≤ 3.0) and for the Spanish flu pandemic (3.0 ≤ R0 ≤ 4.0);
(4)   Even R0 ≈ 1 is worth some field investigation; if R0 ≥ 1.5, it is epidemiologically reasonable to report our findings as a significant risk of an epidemic; if R0 ≥ 3.0, it is epidemiologically reasonable to report a severe risk;
(5)   Knowledge of R0 provides us with preliminary estimates of the number of infected at the epidemic peak and of the total number of infected over the course of the outbreak.
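
The sketch below implements steps (1)-(4) in a minimal way. The lag-one least-squares estimate of the autoregression coefficient is standard; the mapping from that coefficient to R0, however, is only an illustrative stand-in based on the textbook SIR early-growth relation r = gamma*(R0 - 1), i.e. R0 ≈ 1 + ln(phi)/gamma with gamma the daily recovery rate, and it is not necessarily the exact relationship derived in [3]. The function names, the default gamma, the risk labels, and the example counts are all our own assumptions; only the thresholds 1.0, 1.5, and 3.0 come from the list above.

```python
# A minimal sketch of steps (1)-(4). The AR(1) estimate is a plain lag-one least-squares
# fit; the phi -> R0 mapping is an illustrative stand-in based on the SIR early-growth
# relation r = gamma*(R0 - 1), i.e. R0 ~ 1 + ln(phi)/gamma, and is NOT necessarily the
# exact relationship derived in [3]. The thresholds 1.0, 1.5, and 3.0 are those listed above.
import numpy as np

def ar1_coefficient(counts):
    """Lag-one least-squares estimate (through the origin) of the autoregression coefficient."""
    y = np.asarray(counts, float)
    y0, y1 = y[:-1], y[1:]
    return float(np.dot(y0, y1) / np.dot(y0, y0))

def r0_from_phi(phi, gamma=0.25):
    """Illustrative SIR-based mapping; gamma = 0.25 assumes a roughly 4-day infectious period."""
    return 1.0 + float(np.log(phi)) / gamma if phi > 0 else float("nan")

def assess(counts, gamma=0.25):
    """Return (R0 estimate, risk label) for a moving baseline of 7-14 daily counts."""
    r0 = r0_from_phi(ar1_coefficient(counts), gamma)
    if r0 >= 3.0:
        label = "severe risk"
    elif r0 >= 1.5:
        label = "significant risk of an epidemic"
    elif r0 >= 1.0:
        label = "worth field investigation"
    else:
        label = "no epidemiological signal"
    return r0, label

# Example: a 10-day moving baseline of daily counts growing by roughly 30% per day.
print(assess([12, 15, 20, 27, 34, 45, 58, 77, 99, 130]))
```

Step (5), the peak and total number of infected, follows from R0 (together with the population size) via standard SIR relations and is left out of this sketch.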

Our critical levels (thresholds) have a very clear epidemiological meaning, as opposed to the rather arbitrary thresholds used in mainstream biosurveillance.

References  

[1] Fricker, R. D. (2011a). Some methodological issues in biosurveillance. Statistics in Medicine, [full text]
[2] Fricker, R. D. (2011b). Rejoinder: Some methodological issues in biosurveillance. Statistics in Medicine, [full text]  
[3] Shtatland, E. and Shtatland, T. (2011). Statistical approach to biosurveillance in crisis: what is next. NESUG Proceedings, [full text]                                                                   
[4] Shmueli, G. and Burkom, H. S. (2010). Statistical challenges facing early outbreak detection in biosurveillance. Technometrics, 52(1), pp. 39-51.
[5] Pearson, E. S. and Hartley, H. O. (Eds.). (1962). Biometrika tables for statisticians (2nd ed.). Cambridge: Cambridge University Press.
[6] Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.