Hierarchical models in the brain.

Friston K - PLoS Comput. Biol. (2008)


Affiliation: The Wellcome Trust Centre of Neuroimaging, University College London, London, United Kingdom. k.friston@fil.ion.ucl.ac.uk

ABSTRACT
This paper describes a general model that subsumes many parametric models for continuous data. The model comprises hidden layers of state-space or dynamic causal models, arranged so that the output of one provides input to another. The ensuing hierarchy furnishes a model for many types of data, of arbitrary complexity. Special cases range from the general linear model for static data to generalised convolution models, with system noise, for nonlinear time-series analysis. Crucially, all of these models can be inverted using exactly the same scheme, namely, dynamic expectation maximization. This means that a single model and optimisation scheme can be used to invert a wide range of models. We present the model and a brief review of its inversion to disclose the relationships among, apparently, diverse generative models of empirical data. We then show that this inversion can be formulated as a simple neural network and may provide a useful metaphor for inference and learning in the brain.
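
To fix ideas, here is a minimal sketch of the kind of hierarchy the abstract describes, written in the generic notation for hierarchical dynamic models. The symbols g^(i) and f^(i) for level-specific output and flow functions, and z^(i) and w^(i) for observation and state noise, are assumptions based on the standard form of such models rather than a reproduction of the paper's own equations. Each level's causes v^(i) are generated by the level above, and the hidden states x^(i) endow each level with dynamics.

\begin{aligned}
y &= g^{(1)}\big(x^{(1)},v^{(1)}\big) + z^{(1)}\\
\dot{x}^{(1)} &= f^{(1)}\big(x^{(1)},v^{(1)}\big) + w^{(1)}\\
&\ \ \vdots\\
v^{(i-1)} &= g^{(i)}\big(x^{(i)},v^{(i)}\big) + z^{(i)}\\
\dot{x}^{(i)} &= f^{(i)}\big(x^{(i)},v^{(i)}\big) + w^{(i)}
\end{aligned}

Removing the hidden states and their dynamics leaves a static model (the general linear model when the g^(i) are linear), while keeping them gives generalised convolution models with system noise, which is roughly how the special cases mentioned in the abstract arise.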

pcbi-1000211-g008: Schematic detailing the neuronal architectures that encode an ensemble density on the states and parameters of one level in a hierarchical model. This schematic shows the speculative cells of origin of forward driving connections that convey prediction error from a lower area to a higher area, and the backward connections that are used to construct predictions. These predictions try to explain away input from lower areas by suppressing prediction error. In this scheme, the sources of forward connections are the superficial pyramidal cell population and the sources of backward connections are the deep pyramidal cell population. The differential equations relate to the optimisation scheme detailed in the main text, and their constituent terms are placed alongside the corresponding connections. The state-units and their efferents are in black and the error-units in red, with causes on the left and hidden states on the right. For simplicity, we have assumed the output of each level is a function of, and only of, the hidden states. This induces a hierarchy over levels and, within each level, a hierarchical relationship between states, where hidden states predict causes.
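
As a rough key to the terms placed alongside the connections in this figure, the precision-weighted prediction errors of such models take the form below. The level indexing and symbols follow the generic notation sketched above and are assumptions, since the figure and its equations are not reproduced on this page. On this reading, forward connections carry the errors ξ, while backward connections carry the predictions g^(i) and f^(i) that are subtracted to form them.

\begin{aligned}
\xi_v^{(i)} &= \Pi_v^{(i)}\big(v^{(i-1)} - g^{(i)}(x^{(i)},v^{(i)})\big) \quad &&\text{prediction error on causes (passed forward, up the hierarchy)}\\
\xi_x^{(i)} &= \Pi_x^{(i)}\big(\dot{x}^{(i)} - f^{(i)}(x^{(i)},v^{(i)})\big) \quad &&\text{prediction error on the motion of hidden states (lateral, within a level)}
\end{aligned}

Here the \Pi are precisions (inverse covariances) of the random fluctuations z and w.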

Mentions: If we unpack these equations we can see the hierarchical nature of this message passing (see Figure 8 and Equation 52). This shows that error-units receive messages from the states in the same level and the level above, whereas states are driven by error-units in the same level and the level below. Critically, inference requires only the prediction error from the lower level, ξ(i), and the level in question, ξ(i+1). These constitute bottom-up and lateral messages that drive conditional means towards a better prediction, to explain away the prediction error in the level below. These top-down and lateral predictions correspond to g̃(i) and f̃(i). This is the essence of recurrent message passing between hierarchical levels to optimise free-energy or suppress prediction error; i.e., recognition dynamics.
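
To convey the flavour of this recurrent message passing without the machinery of generalised coordinates, the toy below is a deliberately stripped-down, static and linear two-level analogue in Python: error-units compute precision-weighted prediction errors, and state-units descend the resulting free-energy gradient. Everything in it (the weight matrices W0 and W1, the precisions, the learning rate) is made up for illustration; it is not the paper's DEM scheme.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear generative mappings (top-down predictions)
W0 = rng.normal(size=(8, 4)) / 2.0   # level 1 causes -> data
W1 = rng.normal(size=(4, 2)) / 2.0   # level 2 causes -> level 1 causes

# Generate data from the model so that inversion has something to recover
v2_true = rng.normal(size=2)
v1_true = W1 @ v2_true + 0.05 * rng.normal(size=4)
y = W0 @ v1_true + 0.05 * rng.normal(size=8)

# Conditional means (state-units) and isotropic precisions (assumed)
mu1 = np.zeros(4)
mu2 = np.zeros(2)
pi0, pi1, pi2 = 1.0, 1.0, 1.0   # precisions of errors at the data, level 1, level 2
lr = 0.05

for _ in range(1000):
    # Error-units: precision-weighted prediction errors at each level
    xi0 = pi0 * (y - W0 @ mu1)      # data vs. level-1 prediction (bottom-up message)
    xi1 = pi1 * (mu1 - W1 @ mu2)    # level-1 causes vs. level-2 prediction
    xi2 = pi2 * (mu2 - 0.0)         # level-2 causes vs. a flat prior at the top

    # State-units: driven by the error from the level below, suppressed by their own error
    mu1 += lr * (W0.T @ xi0 - xi1)
    mu2 += lr * (W1.T @ xi1 - xi2)

print("conditional estimate of level-1 causes:", np.round(mu1, 2))
print("true level-1 causes (for reference):   ", np.round(v1_true, 2))

The two update lines are the point of the exercise: each conditional mean is driven by the prediction error from the level below and inhibited by the error at its own level, mirroring the bottom-up and lateral messages described in the passage above.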

