A maximum entropy test for evaluating higher-order correlations in spike counts.

Onken A, Dragoi V, Obermayer K - PLoS Comput. Biol. (2012)

Bottom Line: Applying our test to artificial data shows that the effects of higher-order correlations on these divergence measures can be detected even when the number of samples is small. These results demonstrate that higher-order correlations can matter when estimating information theoretic quantities in V1. They also show that our test is able to detect their presence in typical in-vivo data sets, where the number of samples is too small to estimate higher-order correlations directly.


Affiliation: Technische Universität Berlin, Berlin, Germany. arno.onken@unige.ch

ABSTRACT
Evaluating the importance of higher-order correlations of neural spike counts has been notoriously hard. A large number of samples are typically required in order to estimate higher-order correlations and the resulting information theoretic quantities. In typical electrophysiology data sets with many experimental conditions, however, the number of samples in each condition is rather small. Here we describe a method that allows us to quantify evidence for higher-order correlations in exactly these cases. We construct a family of reference distributions: maximum entropy distributions, which are constrained only by marginals and by linear correlations as quantified by the Pearson correlation coefficient. We devise a Monte Carlo goodness-of-fit test, which tests, for a given divergence measure of interest, whether the experimental data lead to rejection of the hypothesis that they were generated by one of the reference distributions. Applying our test to artificial data shows that the effects of higher-order correlations on these divergence measures can be detected even when the number of samples is small. Subsequently, we apply our method to spike count data recorded with multielectrode arrays from the primary visual cortex of anesthetized cat during an adaptation experiment. Using mutual information as a divergence measure, we find that there are spike count bin sizes at which the maximum entropy hypothesis can be rejected for a substantial number of neuronal pairs. These results demonstrate that higher-order correlations can matter when estimating information theoretic quantities in V1. They also show that our test is able to detect their presence in typical in-vivo data sets, where the number of samples is too small to estimate higher-order correlations directly.
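The core of the procedure described above is a standard Monte Carlo p-value computation: a divergence statistic is evaluated on the observed data, surrogate data sets are drawn from a fitted reference distribution, and the observed statistic is ranked among the surrogate statistics. The Python sketch below shows only this generic logic; `statistic` and `sample_reference` are hypothetical placeholders for the divergence measure and the fitted maximum entropy sampler described in the paper, and the paper's additional step of maximizing the p-value over the whole reference family (via simulated annealing, cf. Text S1) is omitted.

```python
import numpy as np

def monte_carlo_p_value(data, statistic, sample_reference,
                        n_samples=1000, rng=None):
    """Generic Monte Carlo goodness-of-fit p-value.

    data             : observed spike counts, shape (n_trials, n_neurons)
    statistic        : callable mapping a data set to a scalar divergence
    sample_reference : callable(n_trials, rng) -> surrogate data set drawn
                       from the fitted reference (maximum entropy) model;
                       a placeholder, not the paper's fitting procedure
    """
    rng = np.random.default_rng(rng)
    observed = statistic(data)
    n_trials = data.shape[0]
    # Count surrogates whose divergence is at least as large as the data's.
    exceed = sum(
        statistic(sample_reference(n_trials, rng)) >= observed
        for _ in range(n_samples)
    )
    # The +1 terms keep the Monte Carlo p-value valid (never exactly zero).
    return (exceed + 1) / (n_samples + 1)
```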



pcbi-1002539-g006: Results of the maximum entropy test for data recorded from area V1 of anesthetized cat. The evaluation was performed separately for the control and adaptation conditions. (A) Fraction of neuron pairs rejected by the Monte Carlo maximum entropy test with the entropy difference as the divergence measure, for different bin sizes. (B) Same as in A but using the mutual information difference. Rejection rates were averaged over all neuron pairs and all time bins. Simulated annealing [45] was applied to maximize the p-value (cf. Text S1). The number of Monte Carlo samples was 1000. The false discovery rate of the rejections was controlled using the Benjamini-Hochberg procedure [35].
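For orientation, the entropy difference used in panel A can be read as a plug-in entropy estimated from the empirical joint spike-count histogram minus the entropy of the fitted reference model. The Python sketch below illustrates that reading only; `entropy_difference` and `model_probs` are illustrative names, and the paper's exact estimator (its Equation 17) may differ, for instance in bias correction.

```python
import numpy as np

def plugin_entropy(weights):
    """Plug-in entropy in bits; accepts raw counts or probabilities."""
    p = np.asarray(weights, dtype=float).ravel()
    p = p / p.sum()
    p = p[p > 0]                      # treat 0 * log 0 as 0
    return -np.sum(p * np.log2(p))

def entropy_difference(pair_counts, model_probs):
    """Entropy difference for one neuron pair: empirical joint-histogram
    entropy minus reference-model entropy. model_probs is assumed to be
    a joint probability table over the same spike-count bins."""
    n_bins = int(pair_counts.max()) + 1
    hist = np.zeros((n_bins, n_bins))
    for a, b in pair_counts:          # pair_counts: (n_trials, 2) integers
        hist[a, b] += 1
    return plugin_entropy(hist) - plugin_entropy(model_probs)
```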

Mentions: We applied separate maximum entropy tests to all 55 neuronal pairs and to all time bins (non-overlapping time intervals locked to the start of a grating presentation at varying latencies, cf. Section “Data Analysis”). Figure 6A shows the results for the entropy difference (Equation 17) as the divergence measure. The rejections were corrected for multiple comparisons and averaged over neuron pairs, stimuli and time bins for each bin size (cf. Section “Data Analysis”). The fraction of rejected pairs increased with bin size until it reached a maximum at 200 ms. Thus, as the bin size increases, more and more neuron pairs show significant differences between the entropy estimated directly from the data and the entropy estimated from models that neglect higher-order correlations.
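The multiple-comparisons correction mentioned above is the Benjamini-Hochberg step-up procedure [35]. The following is a minimal sketch of that standard procedure (not code from the paper): it sorts the p-values, finds the largest rank k with p(k) <= (k/m) * alpha, and rejects the k smallest.

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: boolean rejection mask that
    controls the false discovery rate at level alpha."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    # Largest k with p_(k) <= (k / m) * alpha; reject hypotheses 1..k.
    below = ranked <= (np.arange(1, m + 1) / m) * alpha
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()
        reject[order[:k + 1]] = True
    return reject
```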

