The promises and pitfalls of applying computational models to neurological and psychiatric disorders
Computational models are applied increasingly to the study of brain function and dysfunction. Teufel & Fletcher highlight the promise of this approach, but also some of the problems that can arise from the misapplication of such models. Using a simple analogy, they identify key principles necessary for avoiding common pitfalls.
Fig. 1: The importance of specifying both the purpose of a model and the mapping between the model's components and aspects of the 'real world'. Here, we illustrate the importance of two principles of modelling using the running analogy from Box 1. When we have identified a reliable measure, one that seems to capture the essence of what we are trying to characterize and that has predictive value, it is tempting to make it generally applicable. We may forget or ignore both the elements of the world that we are not including in our model and, furthermore, the ways in which the components that we are modelling map to the real world. The figure illustrates how we might use 'running economy' (the volume of oxygen consumed at a steady-state running pace), as described in Box 1, as a predictor of race performance. Having figures for oxygen consumption in two athletes allows a direct comparison of how they are likely to fare in competition. Indeed, even if the measurements were obtained at a range of speeds and on different terrain, the measures may be comparable. Running economy serves as a good model of running ability. But, as indicated in the figure, it fulfils this purpose only within predefined constraints. For example, it is standard to measure running economy at a pace below the lactate threshold (the point at which oxygen consumption cannot keep up with demand), and the model is therefore inadequate if we wish to judge running ability for shorter sprints, in which lactate levels are highly relevant (Billat, 1996). Here the mapping between the model and reality has gone wrong because the model is applied in a setting where its assumptions are no longer relevant or valid. Unless we are explicit about the mapping between model components and reality, and about the working assumptions that justify it, our model will ultimately lead us astray. Such error lies not in the model but in how it is used.
It should also be noted that running economy, even when applied under restricted and appropriate circumstances, emerges from a number of complex, interacting factors that are individually ignored, though each contributes to the overall measure (indicated by straight black arrows). This is not a problem when running economy is used to predict running performance, but it may become very important if we use the model for a different purpose, for example to decide on the best training regimen to improve performance. At this point, the value of the model depends much more on the factors that were previously only an implicit part of it. Two athletes may have limitations in their running economy for very different reasons: one due to biomechanical inefficiency arising from flaws in her posture or stride rate, another due to cardiovascular inefficiency. Each would be rectified by a distinct training regimen (indicated by blue, dotted arrows), which could not be chosen merely on the basis of running economy. It is the purpose of the model that specifies which components of reality need to be modelled, and in how much detail.
Below, we discuss three consequences of using computational models that we believe are most relevant for clinical neuroscience, and illustrate these with a simple analogy in Box 1 and Fig. 1.
Box 1