Accuracy of models for the 2001 foot-and-mouth epidemic.

Tildesley MJ, Deardon R, Savill NJ, Bessell PR, Brooks SP, Woolhouse ME, Grenfell BT, Keeling MJ - Proc. Biol. Sci. (2008)

Bottom Line: These claims are generally based on a comparison between model results and epidemic data at fairly coarse spatio-temporal resolution. By contrast, while the accuracy of predicting culls is higher (20-30%), this is lower than expected from the comparison between model epidemics. These results generally support the contention that the type of the model used in 2001 was a reliable representation of the epidemic process, but highlight the difficulties of predicting the complex human response, in terms of control strategies to the perceived epidemic risk.

View Article: PubMed Central - PubMed

Affiliation: Department of Biological Sciences and Mathematics Institute, University of Warwick, Gibbet Hill Road, Coventry CV4 7AL, UK. m.j.tildesley@warwick.ac.uk

ABSTRACT
Since 2001 models of the spread of foot-and-mouth disease, supported by the data from the UK epidemic, have been expounded as some of the best examples of problem-driven epidemic models. These claims are generally based on a comparison between model results and epidemic data at fairly coarse spatio-temporal resolution. Here, we focus on a comparison between model and data at the individual farm level, assessing the potential of the model to predict the infectious status of farms in both the short and long terms. Although the accuracy with which the model predicts farms reporting infection is between 5 and 15%, these low levels are attributable to the expected level of variation between epidemics, and are comparable to the agreement between two independent model simulations. By contrast, while the accuracy of predicting culls is higher (20-30%), this is lower than expected from the comparison between model epidemics. These results generally support the contention that the type of the model used in 2001 was a reliable representation of the epidemic process, but highlight the difficulties of predicting the complex human response, in terms of control strategies to the perceived epidemic risk.


Graph showing the log likelihood of correctly predicting the status of all farms in a one-week interval for varying start dates. Likelihoods are calculated independently for each farm, from the results of multiple stochastic simulations. Farms are defined as being in the correct class if they are infected or culled (or simply remain susceptible) in both the model and the 2001 data in a given one-week prediction interval. The inset shows the log likelihood against the total number of reported and culled farms for each starting point of the simulations—we note that the log likelihood increases linearly with the number of reported and culled farms.
© Copyright Policy - open-access


Mentions: Figure 1 shows the log likelihood of correctly predicting the status of all the UK farms in short (one week) simulations, for varying start dates. The start date varies in weekly increments and simulations run forward for a period of one week after which time the model and 2001 data are compared. We note that the log likelihood scales linearly with the number of reported and culled farms (see the inset), suggesting a consistent probability of correctly identifying the status of each farm throughout the epidemic. In general, however, these log likelihood values are strongly influenced by the few cases or culls in each week, which occur with extremely low probability. These farms are often small or at some distance from the prevailing epidemic. While such likelihood methods are undoubtedly very powerful tools for giving a comprehensive measure of the global accuracy of a model, we now examine a range of more simplistic measures. In particular, we focus on the average proportion of cases and culls which can be correctly identified by simulations of various lengths for a range of initial starting dates.
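The likelihood calculation described above can be sketched in code. This is a minimal illustration, not the authors' implementation: it assumes farm statuses are coded as simple labels ('S' susceptible, 'I' infected/reported, 'C' culled), that each stochastic simulation yields one status per farm for the prediction week, and that per-farm probabilities are estimated empirically from the ensemble, with a small floor probability standing in for the "extremely low probability" events noted in the text (the exact treatment in the paper may differ).

```python
import math

def weekly_log_likelihood(observed, simulations, floor=1e-6):
    """Log likelihood of the observed farm statuses in one
    prediction week, estimated from an ensemble of stochastic
    simulations.

    observed    -- dict: farm id -> status in the 2001 data
                   ('S' susceptible, 'I' infected/reported, 'C' culled)
    simulations -- list of dicts with the same farm ids, one per run
    floor       -- small probability assigned to statuses that no
                   run produced, to avoid log(0)
    """
    n_runs = len(simulations)
    total = 0.0
    for farm, status in observed.items():
        # Empirical probability that the model places this farm in
        # the observed class during the prediction interval.
        hits = sum(1 for sim in simulations if sim[farm] == status)
        p = max(hits / n_runs, floor)
        total += math.log(p)  # likelihoods are independent per farm
    return total

# Toy example: two farms, two simulation runs.
obs = {'farm_a': 'I', 'farm_b': 'S'}
sims = [{'farm_a': 'I', 'farm_b': 'S'},
        {'farm_a': 'S', 'farm_b': 'S'}]
ll = weekly_log_likelihood(obs, sims)  # log(0.5) + log(1.0)
```

Because the total is a sum over farms, each reported or culled farm contributes its own (typically large, negative) term, which is consistent with the roughly linear scaling of the log likelihood with the number of reported and culled farms seen in the inset of figure 1.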

