Hierarchical models in the brain.

Friston K - PLoS Comput. Biol. (2008)

Affiliation: The Wellcome Trust Centre for Neuroimaging, University College London, London, United Kingdom. k.friston@fil.ion.ucl.ac.uk

ABSTRACT
This paper describes a general model that subsumes many parametric models for continuous data. The model comprises hidden layers of state-space or dynamic causal models, arranged so that the output of one provides input to another. The ensuing hierarchy furnishes a model for many types of data, of arbitrary complexity. Special cases range from the general linear model for static data to generalised convolution models, with system noise, for nonlinear time-series analysis. Crucially, all of these models can be inverted using exactly the same scheme, namely, dynamic expectation maximization. This means that a single model and optimisation scheme can be used to invert a wide range of models. We present the model and a brief review of its inversion to disclose the relationships among, apparently, diverse generative models of empirical data. We then show that this inversion can be formulated as a simple neural network and may provide a useful metaphor for inference and learning in the brain.
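The hierarchy the abstract describes can be made concrete with a small simulation. The following Python sketch generates data from a two-level linear convolution model of the kind inverted in the figure below: a Gaussian-bump cause v drives hidden states x through a state equation, and the states produce the response y through an observer equation. The matrices A, B, C, the noise levels, and the shape of the cause are illustrative placeholders, not the values used in the paper.

```python
import numpy as np

# Simulate data from a two-level linear convolution model: a cause v
# drives hidden states x through the state equation, and the states
# generate the response y through the observer equation. All matrices
# and noise levels are illustrative placeholders.

rng = np.random.default_rng(0)

T, dt = 64, 1.0                       # time bins and step size
A = np.array([[-0.25,  1.00],         # state equation: dx/dt = Ax + Bv + w
              [-0.50, -0.25]])
B = np.array([[1.0],
              [0.0]])
C = np.array([[0.125, 0.1625]])       # observer equation: y = Cx + z

t = np.arange(T)
v = np.exp(-0.25 * (t - 12.0) ** 2)   # Gaussian-bump cause

x = np.zeros((2, T))                  # hidden states
for k in range(T - 1):
    w = rng.normal(0.0, np.exp(-8.0), size=2)          # system noise
    dx = A @ x[:, k] + B[:, 0] * v[k] + w
    x[:, k + 1] = x[:, k] + dt * dx                    # Euler step

z = rng.normal(0.0, np.exp(-4.0), size=(1, T))         # observation noise
y = C @ x + z                                          # observed response
```

Inverting such a model means recovering v, x, the parameters in A, B, C, and the noise precisions from y alone, which is exactly the triple-estimation problem illustrated in the figure.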

pcbi-1000211-g006: The predictions and conditional densities on the states and parameters of the linear convolution model of the previous figure. Each row corresponds to a level, with causes on the left and hidden states on the right. In this case, the model has just two levels. The first (upper left) panel shows the predicted response and the error on this response (their sum corresponds to the observed data). For the hidden states (upper right) and causes (lower left), the conditional mode is depicted by a coloured line and the 90% conditional confidence intervals by the grey area. These are sometimes referred to as “tubes”. Finally, the grey lines depict the true values used to generate the response. Here, we estimated the hyperparameters, parameters and states. This is an example of triple estimation, where we are trying to infer the states of the system as well as the parameters governing its causal architecture. The hyperparameters correspond to the precision of random fluctuations in the response and the hidden states. The free parameters correspond to a single parameter from the state equation and one from the observer equation that govern the dynamics of the hidden states and response, respectively. It can be seen that the true value of the causal state lies within the 90% confidence interval and that we could infer with substantial confidence that the cause was non-zero when it occurs. Similarly, the true parameter values lie within fairly tight confidence intervals (red bars in the lower right).
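For reference, the 90% “tubes” in such plots are pointwise Gaussian credible intervals derived from the conditional mode and variance at each time point. A minimal sketch follows; the mode and variance arrays are placeholders for the conditional estimates a scheme like DEM would return.

```python
import numpy as np
from scipy.stats import norm

# Pointwise 90% confidence "tube" around a conditional mode, assuming
# the conditional density at each time point is Gaussian. The mode and
# variance arrays are placeholders for the conditional estimates that
# a scheme like DEM would return.

def confidence_tube(mode, cond_var, level=0.90):
    """Lower/upper bounds of a pointwise Gaussian confidence tube."""
    zq = norm.ppf(0.5 + level / 2.0)   # ~1.645 for a 90% interval
    sd = np.sqrt(cond_var)
    return mode - zq * sd, mode + zq * sd

# Usage: check how often the true trajectory falls inside the tube
# lo, hi = confidence_tube(mu, var)
# coverage = np.mean((x_true >= lo) & (x_true <= hi))
```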

Mentions: Figure 6 summarises the results after convergence of DEM (about sixteen iterations, using an embedding order of n = 6 and a roughness hyperparameter γ = 4). Each row corresponds to a level in the model, with causes on the left and hidden states on the right. The first (upper left) panel shows the predicted response and the error on this response. For the hidden states (upper right) and causes (lower left), the conditional mode is depicted by a coloured line and the 90% conditional confidence intervals by the grey area. It can be seen that there is a pleasing correspondence between the conditional mean and veridical states (grey lines). Furthermore, the true values lie largely within the 90% confidence intervals; similarly for the parameters. This example illustrates the recovery of states, parameters and hyperparameters from an observed time-series, given just the form of a model.
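The embedding order n = 6 means each trajectory is represented in generalised coordinates of motion: its value plus its first six temporal derivatives. One generic way to form such coordinates from discrete samples is a local Taylor-series fit, sketched below. The function and window convention are illustrative assumptions, not the SPM/DEM routine itself, and the window must lie within the sampled series.

```python
import numpy as np
from math import factorial

# Estimate generalised coordinates of motion [y, y', ..., y^(n)] at
# sample k by fitting a local Taylor series to a window of n+1 points:
# y[k+o] ~ sum_j (o*dt)^j / j! * y^(j)[k]. A generic construction
# showing what an embedding order of n = 6 entails; not the SPM code.

def embed(y, k, n=6, dt=1.0):
    offs = np.arange(n + 1) - n // 2           # window offsets around k
    Tm = np.array([[(o * dt) ** j / factorial(j)
                    for j in range(n + 1)]     # Taylor matrix (rows: o)
                   for o in offs])
    return np.linalg.solve(Tm, y[k + offs])    # derivatives at sample k
```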

