The equivalence of information-theoretic and likelihood-based methods for neural dimensionality reduction.

Williamson RS, Sahani M, Pillow JW - PLoS Comput. Biol. (2015)

Bottom Line: Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron's probability of spiking. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms.


Affiliation: Gatsby Computational Neuroscience Unit, University College London, London, UK; Centre for Mathematics and Physics in the Life Sciences and Experimental Biology, University College London, London, UK.

ABSTRACT
Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron's probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as "single-spike information" to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex.

No MeSH data available.
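The abstract's central equivalence, that the empirical single-spike information corresponds to the normalized Poisson log-likelihood, can be sketched numerically. In the illustrative example below (all rates and counts are simulated, not from the paper), when the homogeneous baseline rate is taken as the mean of the per-bin rates, the log-likelihood gain per spike over a homogeneous Poisson model equals the empirical single-spike information exactly:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-bin rates lambda_t and Poisson spike counts r_t in T bins.
T = 2000
rate = np.exp(0.5 * rng.standard_normal(T))
spikes = rng.poisson(rate)
n_sp = spikes.sum()

mean_rate = rate.mean()  # homogeneous (constant-rate) baseline

# Poisson log-likelihoods, dropping the shared log(r_t!) term.
ll_model = np.sum(spikes * np.log(rate)) - rate.sum()
ll_null = np.sum(spikes * np.log(mean_rate)) - T * mean_rate

# Normalized log-likelihood difference, in nats per spike ...
ll_per_spike = (ll_model - ll_null) / n_sp

# ... equals the empirical single-spike information.
ss_info = np.sum(spikes * np.log(rate / mean_rate)) / n_sp

assert np.isclose(ll_per_spike, ss_info)
```

The `-rate.sum()` and `-T * mean_rate` terms cancel by construction here, which is why the identity holds; MID's maximization of single-spike information and maximum-likelihood fitting of the LNP model then climb the same objective.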


pcbi.1004141.g001: The linear-nonlinear-Poisson (LNP) encoding model formalizes the neural encoding process as a cascade of three stages. First, the high-dimensional stimulus s projects onto a bank of filters contained in the columns of a matrix K, yielding a point in a low-dimensional neural feature space K⊤s. Second, an instantaneous nonlinear function f maps the filtered stimulus to an instantaneous spike rate λ. Third, spikes r are generated according to an inhomogeneous Poisson process.

Mentions: Linear-nonlinear cascade models provide a useful framework for describing neural responses to high-dimensional stimuli. These models define the response in terms of a cascade of linear, nonlinear, and probabilistic spiking stages (see Fig. 1). The linear stage reduces the dimensionality by projecting the high-dimensional stimulus onto a set of linear filters, and a nonlinear function then converts the output of these filters to a non-negative spike rate.
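The three-stage cascade described above can be sketched in a few lines of NumPy. The stimulus dimension, number of filters, filter matrix, and exponential nonlinearity below are all illustrative choices, not those used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 40-dimensional stimulus, 2 filters, 5000 time bins.
D, M, T = 40, 2, 5000

K = rng.standard_normal((D, M))  # columns of K are the linear filters
S = rng.standard_normal((T, D))  # one stimulus vector per time bin

# Linear stage: project each stimulus into the feature space K^T s.
features = S @ K                 # shape (T, M)

# Nonlinear stage: an assumed exponential nonlinearity f mapping
# filtered stimuli to a non-negative instantaneous rate lambda.
rate = np.exp(0.5 * features.sum(axis=1) - 1.0)

# Spiking stage: inhomogeneous Poisson counts, one draw per time bin.
spikes = rng.poisson(rate)
```

Dimensionality reduction in this framework amounts to estimating K (and f) from observed stimulus-response pairs; the paper's result is that MID performs exactly this estimation by maximum likelihood under the Poisson spiking stage.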

