Degree Correlations Optimize Neuronal Network Sensitivity to Sub-Threshold Stimuli.
Bottom Line:
Information processing in the brain crucially depends on the topology of the neuronal connections. We investigate how the topology influences the response of a population of leaky integrate-and-fire neurons to a stimulus. We moreover find that there exists an optimum in assortativity at an intermediate level leading to a maximum in input/output mutual information.
View Article: PubMed Central - PubMed
Affiliation: Institut für Physik, Humboldt-Universität zu Berlin, Germany.
ABSTRACT
Information processing in the brain crucially depends on the topology of the neuronal connections. We investigate how the topology influences the response of a population of leaky integrate-and-fire neurons to a stimulus. We devise a method to calculate firing rates from a self-consistent system of equations taking into account the degree distribution and degree correlations in the network. We show that assortative degree correlations strongly improve the sensitivity for weak stimuli and propose that such networks possess an advantage in signal processing. We moreover find that there exists an optimum in assortativity at an intermediate level leading to a maximum in input/output mutual information.
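The leaky integrate-and-fire dynamics underlying the abstract can be illustrated with a minimal single-neuron simulation (a generic sketch in dimensionless units, not the paper's network model; all parameter values are illustrative assumptions):

```python
def simulate_lif(i_ext, t_max=1.0, dt=1e-4, tau=0.02,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Firing rate of a single leaky integrate-and-fire neuron
    driven by a constant input i_ext (illustrative, dimensionless units)."""
    v = v_rest
    spikes = 0
    for _ in range(int(t_max / dt)):
        # Leaky integration: dv/dt = (-(v - v_rest) + i_ext) / tau
        v += dt * (-(v - v_rest) + i_ext) / tau
        if v >= v_thresh:      # threshold crossing -> emit spike
            spikes += 1
            v = v_reset        # reset membrane potential
    return spikes / t_max      # firing rate

# A sub-threshold drive (i_ext < v_thresh) never fires in isolation;
# a supra-threshold drive yields a finite rate.
print(simulate_lif(0.8), simulate_lif(1.5))
```

In isolation such a neuron is silent for sub-threshold input; the paper's point is that recurrent input in an assortative network can lift weakly driven neurons above threshold, which this single-neuron sketch deliberately omits.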
Mentions: Exemplary plots of P(r∣s) from simulations are shown in Fig 6 together with the approximation of Eq (30). Since the distribution of single-neuron firing rates is broadened by assortativity (Fig 3B), the corresponding response r is noisier in the assortative network, suggesting that the assortative network is less reliable in encoding the signal. However, assortative networks respond to very weak stimuli (s < 0.8), where the uncorrelated network cannot fire, which results in increased sensitivity. Therefore, a quantitative approach is needed to draw conclusions about the signal-transmission capabilities of the networks. We calculate I using a small-noise approximation [51] by expanding Eq (29) as a power series in the noise:

I = −∫dr̂ P(r̂) log₂[P(r̂)] − (1/2)∫dr̂ P(r̂) log₂[2πeσ²(r̂)/n] + ⋯,   (31)

where the first term (which we denote H) approximates the response variability, or entropy, and the second term corresponds to the noise entropy Hnoise, so that I = H − Hnoise. Higher-order terms vanish as the noise decreases. P(r̂) is the probability distribution of mean rates in the absence of sampling noise when the network is exposed to a distribution P(s) of stimuli:

P(r̂) = ∫ds P(s) δ[r̂ − r̂(s)] = (dr̂(s)/ds)⁻¹ P[s = s(r̂)].   (32)
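The small-noise decomposition I = H − Hnoise from Eq (31) can be evaluated numerically. The sketch below estimates H from a histogram of mean responses and Hnoise as the average Gaussian entropy with variance σ²(r̂)/n; the linear tuning curve and unit single-trial variance are illustrative assumptions, not the paper's model:

```python
import numpy as np

def mutual_info_small_noise(r_of_s, sigma2_of_r, n, s_samples, bins=50):
    """Small-noise estimate of I = H - H_noise, cf. Eq (31)."""
    r_hat = r_of_s(s_samples)                       # mean responses r̂(s)
    # Response entropy H from a histogram estimate of P(r̂)
    p, edges = np.histogram(r_hat, bins=bins, density=True)
    width = edges[1] - edges[0]
    nz = p > 0
    H = -np.sum(p[nz] * np.log2(p[nz]) * width)     # differential entropy
    # Noise entropy: mean Gaussian entropy with variance σ²(r̂)/n
    H_noise = np.mean(0.5 * np.log2(2 * np.pi * np.e
                                    * sigma2_of_r(r_hat) / n))
    return H - H_noise

# Toy example: linear tuning r̂(s) = 10 s over uniform stimuli,
# constant single-trial variance σ²(r̂) = 1, n = 100 neurons sampled.
rng = np.random.default_rng(0)
s = rng.uniform(0.0, 1.0, 100_000)
I = mutual_info_small_noise(lambda s: 10.0 * s,
                            lambda r: np.ones_like(r),
                            n=100, s_samples=s)
print(f"I = {I:.2f} bits")
```

For this toy case H ≈ log₂(10) bits, and averaging over n trials shrinks the noise variance, so I grows with n even though the single-trial reliability is unchanged, mirroring the trade-off between sensitivity and reliability discussed above.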