Hierarchical models in the brain.

Friston K - PLoS Comput. Biol. (2008)

Bottom Line: This means that a single model and optimisation scheme can be used to invert a wide range of models. We present the model and a brief review of its inversion to disclose the relationships among, apparently, diverse generative models of empirical data. We then show that this inversion can be formulated as a simple neural network and may provide a useful metaphor for inference and learning in the brain.


Affiliation: The Wellcome Trust Centre for Neuroimaging, University College London, London, United Kingdom. k.friston@fil.ion.ucl.ac.uk

ABSTRACT
This paper describes a general model that subsumes many parametric models for continuous data. The model comprises hidden layers of state-space or dynamic causal models, arranged so that the output of one provides input to another. The ensuing hierarchy furnishes a model for many types of data, of arbitrary complexity. Special cases range from the general linear model for static data to generalised convolution models, with system noise, for nonlinear time-series analysis. Crucially, all of these models can be inverted using exactly the same scheme, namely, dynamic expectation maximization. This means that a single model and optimisation scheme can be used to invert a wide range of models. We present the model and a brief review of its inversion to disclose the relationships among, apparently, diverse generative models of empirical data. We then show that this inversion can be formulated as a simple neural network and may provide a useful metaphor for inference and learning in the brain.
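To make the hierarchical construction concrete, the sketch below simulates a two-level dynamic model in which the output of the higher level supplies the cause (input) of the level below, whose output in turn generates the data. The particular flows, observation maps, noise levels, and the function name simulate_two_level are illustrative assumptions, not taken from the paper; this is a minimal sketch of the model class, not the paper's inversion scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_two_level(T=500, dt=0.01):
    """Euler-Maruyama simulation of a two-level hierarchical dynamic model.

    The output of level 2 serves as the cause v1 (the input) of level 1,
    whose output generates the data y -- the stacking described in the
    abstract.  The equations of motion, observation maps, and noise
    levels below are illustrative choices, not the paper's.
    """
    x1, x2 = 0.0, 1.0                      # hidden states at each level
    ys = []
    for _ in range(T):
        # level 2: autonomous flow f2 plus system noise w2
        x2 += dt * (-0.5 * x2) + np.sqrt(dt) * 0.1 * rng.standard_normal()
        v1 = np.tanh(x2)                   # output g2 of level 2 -> cause of level 1
        # level 1: flow f1 driven by the cause from above, plus noise w1
        x1 += dt * (-x1 + v1) + np.sqrt(dt) * 0.1 * rng.standard_normal()
        # data: output g1 of level 1 plus observation noise z1
        ys.append(x1 + 0.05 * rng.standard_normal())
    return np.array(ys)

y = simulate_two_level()
print(y.shape, y[:3])
```

Removing the dynamics from such a hierarchy recovers static special cases like the general linear model, while retaining them yields the generalised convolution models mentioned in the abstract; the paper's point is that one scheme (DEM) inverts them all.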

pcbi-1000211-g009: Schematic detailing the neuronal architectures that encode an ensemble density on the states and parameters of hierarchical models. This schematic shows how the neuronal populations of the previous figure may be deployed hierarchically within three cortical areas (or macro-columns). Within each area, the cells are shown in relation to the laminar structure of the cortex, which includes supra-granular (SG), granular (L4), and infra-granular (IG) layers.

Mentions: We can identify error-units with superficial pyramidal cells, because the only messages that pass up the hierarchy are prediction errors, and superficial pyramidal cells originate forward connections in the brain. This is useful because these cells are primarily responsible for the electroencephalographic (EEG) signals that can be measured non-invasively. Similarly, the only messages passed down the hierarchy are the predictions from state-units that are necessary to form prediction errors in lower levels. The sources of extrinsic backward connections are largely the deep pyramidal cells, and one might deduce that these encode the expected causes of sensory states (see [49] and Figure 9). Critically, the motion of each state-unit is a linear mixture of bottom-up prediction error; see Equation 52. This is exactly what is observed physiologically, in that bottom-up driving inputs elicit obligatory responses that do not depend on other bottom-up inputs. The prediction error itself is formed by predictions conveyed by backward and lateral connections. These influences embody the nonlinearities implicit in g̃(i) and f̃(i). Again, this is entirely consistent with the nonlinear or modulatory characteristics of backward connections.
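The message passing described above can be caricatured in a few lines: error units compute the difference between their input and a top-down prediction, and state units are updated by a linear mixture of the bottom-up error, in the spirit of Equation 52. The sketch below is a minimal, static, linear version of this idea; the function name predictive_coding_step, the matrix W_back standing in for backward connections, the learning rate, and the relaxation loop are all illustrative assumptions. The paper's actual scheme (dynamic expectation maximisation) operates on generalised coordinates of motion and is considerably richer.

```python
import numpy as np

def predictive_coding_step(mu, y, W_back, lr=0.1):
    """One relaxation step of a two-layer linear predictive-coding network.

    Error units compare the input with the top-down prediction, and the
    state units move along a linear mixture of the bottom-up error
    (cf. Equation 52).  All names here are hypothetical placeholders.
    """
    pred = W_back @ mu               # top-down prediction (backward connections)
    eps = y - pred                   # prediction error (superficial pyramidal cells)
    mu = mu + lr * (W_back.T @ eps)  # state update: linear mixture of bottom-up error
    return mu, eps

rng = np.random.default_rng(1)
W = 0.5 * rng.standard_normal((4, 2))  # assumed backward connection weights
y = rng.standard_normal(4)             # sensory input
mu = np.zeros(2)                       # state-unit expectations (deep pyramidal cells)
for _ in range(50):
    mu, eps = predictive_coding_step(mu, y, W)
print("residual prediction-error norm:", np.linalg.norm(eps))
```

With the backward connections held fixed, this relaxation is gradient descent on the squared prediction error; learning would additionally adjust W_back itself, giving the separation of inference (states) and learning (parameters) that the paper maps onto neuronal populations.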

