Hierarchical models in the brain.

Friston K - PLoS Comput. Biol. (2008)

Bottom Line: This means that a single model and optimisation scheme can be used to invert a wide range of models. We present the model and a brief review of its inversion to disclose the relationships among, apparently, diverse generative models of empirical data. We then show that this inversion can be formulated as a simple neural network and may provide a useful metaphor for inference and learning in the brain.

View Article: PubMed Central - PubMed

Affiliation: The Wellcome Trust Centre of Neuroimaging, University College London, London, United Kingdom. k.friston@fil.ion.ucl.ac.uk

ABSTRACT
This paper describes a general model that subsumes many parametric models for continuous data. The model comprises hidden layers of state-space or dynamic causal models, arranged so that the output of one provides input to another. The ensuing hierarchy furnishes a model for many types of data, of arbitrary complexity. Special cases range from the general linear model for static data to generalised convolution models, with system noise, for nonlinear time-series analysis. Crucially, all of these models can be inverted using exactly the same scheme, namely, dynamic expectation maximization. This means that a single model and optimisation scheme can be used to invert a wide range of models. We present the model and a brief review of its inversion to disclose the relationships among, apparently, diverse generative models of empirical data. We then show that this inversion can be formulated as a simple neural network and may provide a useful metaphor for inference and learning in the brain.
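The abstract's core idea — hidden layers arranged so that the output of one provides input to another — can be sketched as a minimal two-level linear generative model. This is an illustrative toy, not the paper's HDM; all names and dimensions are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Level 2: causes drawn from a unit normal density
v2 = rng.standard_normal(3)

# Level 1: a linear mapping of the level-2 causes plus fluctuations;
# its output serves as the input (causes) for the level below
theta1 = rng.standard_normal((5, 3))
v1 = theta1 @ v2 + 0.1 * rng.standard_normal(5)

# Observation level: another linear mapping plus observation error
theta0 = rng.standard_normal((8, 5))
y = theta0 @ v1 + 0.1 * rng.standard_normal(8)

print(y.shape)  # the observed response generated by the hierarchy
```

With the noise terms removed this collapses to the general linear model, consistent with the abstract's claim that static models arise as special cases of the hierarchy.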

pcbi-1000211-g004: Example of Factor Analysis using a hierarchical model, in which the causes have deterministic and stochastic components. Parameters and causes were sampled from the unit normal density to generate a response, which was then used for their estimation. The aim was to recover the causes without knowing the parameters, which is effected with reasonable accuracy (upper). The conditional estimates of the causes and parameters are shown in lower panels, along with the increase in free-energy or log-evidence with the number of DEM iterations (lower left). Note that there is an arbitrary affine mapping between the conditional means of the causes and their true values, which we estimated post hoc to show the correspondence in the upper panel.

Mentions: The model for factor analysis is exactly the same as for PCA but allows for observation error (Equation 47). When the covariance of the observation error is spherical, e.g., Σ(1)z = λ(1)z I, this is also known as a probabilistic PCA model [35]. The critical distinction, from the point of view of the HDM, is that the M-Step is now required to estimate the error variance. See Figure 4 for a simple example of factor analysis using DEM. Nonlinear variants of factor analysis obtain by analogy with Equation 46.
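The factor-analysis setup described in the figure caption can be sketched numerically: sample parameters and causes from a unit normal, generate a response, then recover the causes without knowing the parameters. This sketch uses an SVD as a simple stand-in for DEM (which is not reproduced here), and, as in the caption, fits the arbitrary affine mapping post hoc by least squares to show the correspondence:

```python
import numpy as np

rng = np.random.default_rng(1)

# Generate data from a linear factor model y = theta @ v + z, with
# parameters and causes sampled from the unit normal density
n_obs, n_causes, n_samples = 8, 2, 200
theta = rng.standard_normal((n_obs, n_causes))       # unknown parameters
v_true = rng.standard_normal((n_causes, n_samples))  # causes to recover
z = 0.1 * rng.standard_normal((n_obs, n_samples))    # observation error
Y = theta @ v_true + z

# Recover the cause subspace via SVD (a stand-in for DEM inversion):
# the leading right singular vectors span the same space as the causes
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
v_est = Vt[:n_causes]  # estimated causes, defined only up to an affine map

# Fit the arbitrary affine mapping post hoc, as the caption describes,
# to align the estimates with the true causes
A, *_ = np.linalg.lstsq(v_est.T, v_true.T, rcond=None)
v_aligned = (v_est.T @ A).T

corr = np.corrcoef(v_aligned.ravel(), v_true.ravel())[0, 1]
print(round(corr, 2))  # correlation between aligned estimates and truth
```

The affine indeterminacy is intrinsic to factor analysis: any invertible mixing of the causes, absorbed into the parameters, yields the same data density, which is why the alignment step is needed before comparing estimates with ground truth.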

