A maximum entropy test for evaluating higher-order correlations in spike counts.

Onken A, Dragoi V, Obermayer K - PLoS Comput. Biol. (2012)

Bottom Line: Applying our test to artificial data shows that the effects of higher-order correlations on these divergence measures can be detected even when the number of samples is small. These results demonstrate that higher-order correlations can matter when estimating information theoretic quantities in V1. They also show that our test is able to detect their presence in typical in-vivo data sets, where the number of samples is too small to estimate higher-order correlations directly.


Affiliation: Technische Universität Berlin, Berlin, Germany. arno.onken@unige.ch

ABSTRACT
Evaluating the importance of higher-order correlations of neural spike counts has been notoriously hard. A large number of samples is typically required to estimate higher-order correlations and the resulting information theoretic quantities. In typical electrophysiology data sets with many experimental conditions, however, the number of samples in each condition is rather small. Here we describe a method that allows us to quantify evidence for higher-order correlations in exactly these cases. We construct a family of reference distributions: maximum entropy distributions, which are constrained only by the marginals and by the linear correlations as quantified by the Pearson correlation coefficient. We devise a Monte Carlo goodness-of-fit test which determines, for a given divergence measure of interest, whether the experimental data lead to rejection of the hypothesis that they were generated by one of the reference distributions. Applying our test to artificial data shows that the effects of higher-order correlations on these divergence measures can be detected even when the number of samples is small. Subsequently, we apply our method to spike count data recorded with multielectrode arrays from the primary visual cortex of an anesthetized cat during an adaptation experiment. Using mutual information as a divergence measure, we find that there are spike count bin sizes at which the maximum entropy hypothesis can be rejected for a substantial number of neuronal pairs. These results demonstrate that higher-order correlations can matter when estimating information theoretic quantities in V1. They also show that our test is able to detect their presence in typical in-vivo data sets, where the number of samples is too small to estimate higher-order correlations directly.
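The Monte Carlo goodness-of-fit procedure outlined in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the reference model below is a simplified stand-in (independent Poisson marginals fitted to the data means), whereas the paper's reference is a maximum entropy distribution that additionally matches the empirical Pearson correlation. The names `entropy_difference` and `monte_carlo_test`, and all parameter values, are hypothetical choices for this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def entropy_difference(counts):
    """Plug-in estimate of the mutual information of a sample of spike-count
    pairs: sum of the marginal entropies minus the joint entropy (in bits)."""
    def plug_in_entropy(values):
        _, freq = np.unique(values, return_counts=True, axis=0)
        p = freq / freq.sum()
        return -np.sum(p * np.log2(p))
    marginals = sum(plug_in_entropy(counts[:, j]) for j in range(counts.shape[1]))
    return marginals - plug_in_entropy(counts)

def monte_carlo_test(data, sample_reference, statistic, n_surrogates=1000):
    """Monte Carlo goodness-of-fit test: compare the statistic of the data
    against its distribution under surrogate data sets drawn from the
    reference model. Returns a one-sided p-value with the usual +1 correction."""
    observed = statistic(data)
    surrogates = np.array([statistic(sample_reference(len(data)))
                           for _ in range(n_surrogates)])
    return (1 + np.sum(surrogates >= observed)) / (1 + n_surrogates)

# Stand-in reference model: independent Poisson marginals fitted to the data.
data = rng.poisson(5.0, size=(50, 2))          # 50 simultaneous count pairs
means = data.mean(axis=0)
sample_reference = lambda n: rng.poisson(means, size=(n, 2))

p_value = monte_carlo_test(data, sample_reference, entropy_difference,
                           n_surrogates=200)
```

When the data are themselves drawn from the reference model, as here, the p-value is approximately uniform, so the test rejects at roughly the nominal rate; data with genuine higher-order structure shift the observed statistic into the tail of the surrogate distribution.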


pcbi-1002539-g004 (Figure 4): Effect of autocorrelations on the Monte Carlo maximum entropy test results (blue) and on the discrete Kolmogorov-Smirnov test results (red). Interspike intervals of two concurrent spike trains were sampled from a gamma distribution with constant rate and fixed gamma shape parameter. Spike counts were calculated over consecutive 100 ms bins. The entropy difference was used as a divergence measure, at a fixed significance level. Rates were estimated over 100 trials. (A) 50 spike count pairs were sampled for each test trial. (B) 100 spike count pairs were sampled for each test trial.


Mentions: In each trial we simulated two concurrent spike trains. The goodness-of-fit of a Poisson process was assessed with a Kolmogorov-Smirnov test at a fixed significance level (cf. Section “Poisson Goodness-of-fit Tests” in Text S1). We then binned the spike trains into 100 ms intervals, calculated the simultaneous spike count pairs, and applied our proposed maximum entropy test to them. Figure 4 shows the rejection rates for (A) 50 samples (corresponding to spike trains of length 5 s) and (B) 100 samples (corresponding to spike trains of length 10 s) over 100 trials. The rejection rates of both tests increase with the gamma shape parameter, reflecting the growing deviation from a Poisson process. Furthermore, both tests have greater rejection rates when applied to 100 samples (B) than when applied to 50 samples (A).
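The simulation described in this paragraph can be reproduced in outline as follows. A hedged sketch only: the actual rate, gamma shape, and significance level were lost in this extraction, so the numerical values below (20 Hz, shape 2, 5 s) are placeholders, and `gamma_spike_train` and `bin_counts` are names invented for this example. The Poisson check here is a standard continuous KS test on the interspike intervals; the paper uses a discrete Kolmogorov-Smirnov test as detailed in Text S1.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def gamma_spike_train(rate, shape, duration):
    """Spike train with gamma-distributed interspike intervals.
    The mean ISI is 1/rate, so the gamma scale is 1/(rate * shape);
    shape == 1 recovers a Poisson process."""
    n = int(3 * rate * duration) + 10          # generous upper bound on spikes
    isis = rng.gamma(shape, 1.0 / (rate * shape), size=n)
    times = np.cumsum(isis)
    return times[times < duration]

def bin_counts(spike_times, duration, bin_size=0.1):
    """Spike counts over consecutive 100 ms bins."""
    edges = np.linspace(0.0, duration, int(round(duration / bin_size)) + 1)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts

# Placeholder parameters (the originals were lost in extraction):
rate, shape, duration = 20.0, 2.0, 5.0         # 5 s -> 50 bins of 100 ms

train_a = gamma_spike_train(rate, shape, duration)
train_b = gamma_spike_train(rate, shape, duration)
count_pairs = np.column_stack([bin_counts(train_a, duration),
                               bin_counts(train_b, duration)])

# Under a Poisson process the ISIs are exponential with mean 1/rate, so a
# KS test against that distribution tends to reject as shape departs from 1.
ks_stat, p_value = stats.kstest(np.diff(train_a), "expon",
                                args=(0.0, 1.0 / rate))
```

The resulting `count_pairs` array (50 simultaneous count pairs for a 5 s train, 100 for a 10 s train) is exactly the input the maximum entropy test above would receive.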

