Hierarchical models in the brain.

Friston K - PLoS Comput. Biol. (2008)

Bottom Line: This means that a single model and optimisation scheme can be used to invert a wide range of models. We present the model and a brief review of its inversion to disclose the relationships among, apparently, diverse generative models of empirical data. We then show that this inversion can be formulated as a simple neural network and may provide a useful metaphor for inference and learning in the brain.


Affiliation: The Wellcome Trust Centre for Neuroimaging, University College London, London, United Kingdom. k.friston@fil.ion.ucl.ac.uk

ABSTRACT
This paper describes a general model that subsumes many parametric models for continuous data. The model comprises hidden layers of state-space or dynamic causal models, arranged so that the output of one provides input to another. The ensuing hierarchy furnishes a model for many types of data, of arbitrary complexity. Special cases range from the general linear model for static data to generalised convolution models, with system noise, for nonlinear time-series analysis. Crucially, all of these models can be inverted using exactly the same scheme, namely, dynamic expectation maximization. This means that a single model and optimisation scheme can be used to invert a wide range of models. We present the model and a brief review of its inversion to disclose the relationships among, apparently, diverse generative models of empirical data. We then show that this inversion can be formulated as a simple neural network and may provide a useful metaphor for inference and learning in the brain.
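The abstract's core architectural idea, levels of state-space models stacked so that the output of one level supplies the causes (inputs) of the level below, can be sketched generatively. The following is a minimal illustrative sketch of a two-level linear hierarchical dynamic model, not the paper's DEM inversion scheme; the matrices A, B, C, the noise amplitudes, and the dimensions are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 2, 100  # states per level, number of time steps (illustrative choices)

def simulate_level(u, A, B, C, state_noise=0.05):
    """Simulate one linear state-space level driven by causes u.

    x[t+1] = A x[t] + B u[t] + w[t]   (equation of motion, with state noise w)
    y[t]   = C x[t]                    (output, passed down as causes for the level below)
    """
    x = np.zeros(n)
    y = np.empty((T, n))
    for t in range(T):
        y[t] = C @ x
        x = A @ x + B @ u[t] + state_noise * rng.standard_normal(n)
    return y

A = np.array([[0.9, -0.2], [0.2, 0.9]])  # stable linear dynamics (assumed)
B = C = np.eye(n)

v2 = 0.1 * rng.standard_normal((T, n))   # causes entering the top level
v1 = simulate_level(v2, A, B, C)         # top-level output = causes for level 1
y = simulate_level(v1, A, B, C) + 0.01 * rng.standard_normal((T, n))  # observed data
```

Setting the state noise to zero and making the dynamics trivial recovers static special cases such as the general linear model, which is the sense in which the hierarchy subsumes simpler models.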


pcbi-1000211-g007: Ontology of models starting with a simple general linear model with two levels (the PCA model). This ontology is one of many that could be constructed and is based on the fact that hierarchical dynamic models have several attributes that can be combined to create an infinite number of models, some of which are shown in the figure. These attributes include: (i) the number of levels or depth; (ii) for each level, linear or nonlinear output functions; (iii) with or without random fluctuations; (iv) static or dynamic; (v) for dynamic levels, linear or nonlinear equations of motion; (vi) with or without state noise; and, finally, (vii) with or without generalised coordinates.

Mentions: This section has tried to show that the HDM encompasses many standard static and dynamic observation models. It is further evident that many of these models could be extended easily within the hierarchical framework. Figure 7 illustrates this by providing an ontology of models that rests on the various constraints under which HDMs are specified. This partial list suggests that only a proportion of potential models have been covered in this section.
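The combinatorial character of the ontology in Figure 7 can be made concrete by enumerating the per-level attributes the caption lists. The attribute names below come from the caption; the enumeration itself is only an illustrative sketch of why the family of models is so large.

```python
from itertools import product

# Per-level attributes from the Figure 7 caption. Each level of an HDM
# makes an independent choice for each attribute, so even this finite
# list yields a combinatorial family of models as depth grows.
attributes = {
    "output function": ["linear", "nonlinear"],
    "random fluctuations": ["without", "with"],
    "dynamics": ["static", "dynamic"],
    "equation of motion": ["linear", "nonlinear"],  # only meaningful for dynamic levels
    "state noise": ["without", "with"],
    "generalised coordinates": ["without", "with"],
}

# All attribute combinations for a single level (some combinations are
# redundant for static levels, where the dynamic attributes do not apply).
level_specs = list(product(*attributes.values()))
print(len(level_specs))  # 2**6 = 64 combinations per level
```

With d levels there are up to 64**d specifications before pruning redundant combinations, which is the sense in which the caption speaks of an infinite number of models once depth is unbounded.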

