A formalism for evaluating analytically the cross-correlation structure of a firing-rate network model.

Fasoli D, Faugeras O, Panzeri S - J Math Neurosci (2015)



Affiliation: NeuroMathComp Laboratory, INRIA Sophia Antipolis Méditerranée, 2004 Route des Lucioles, BP 93, 06902 Valbonne, France ; Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems @Unitn, Istituto Italiano di Tecnologia, 38068 Rovereto, Italy.

ABSTRACT
We introduce a new formalism for evaluating analytically the cross-correlation structure of a finite-size firing-rate network with recurrent connections. The analysis performs a first-order perturbative expansion of the neural activity equations, which include three different sources of randomness: the background noise of the membrane potentials, their initial conditions, and the distribution of the recurrent synaptic weights. This allows the analytical quantification of the relationship between anatomical and functional connectivity, i.e. of how the synaptic connections determine the statistical dependencies, at any order, among different neurons. The technique we develop is general, but for simplicity and clarity we demonstrate its efficacy by applying it to the case of synaptic connections described by regular graphs. The analytical equations so obtained reveal previously unknown behaviors of recurrent firing-rate networks, especially concerning how correlations are modified by the external input, by the finite size of the network, by the density of the anatomical connections, and by correlations in the sources of randomness. In particular, we show that a strong input can make the neurons almost independent, suggesting that functional connectivity does not depend only on the static anatomical connectivity, but also on the external inputs. Moreover, we prove that in general it is not possible to find a mean-field description à la Sznitman of the network if the anatomical connections are too sparse or if our three sources of variability are correlated. To conclude, we show a very counterintuitive phenomenon, which we call stochastic synchronization, through which neurons become almost perfectly correlated even if the sources of randomness are independent. Due to its ability to quantify how the activity of individual neurons and the correlations among them depend upon external inputs, the formalism introduced here can serve as a basis for exploring analytically the computational capability of population codes expressed by recurrent neural networks.
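
To illustrate the kind of expansion involved, the following is a minimal schematic in our own notation (the paper's system (2.1) and Taylor expansion (3.2) give the precise form); the symbols V_i, σ and the superscripts are assumptions made for this sketch only:

$$
V_i(t) \;=\; V_i^{(0)}(t) \;+\; \sigma\, V_i^{(1)}(t) \;+\; \mathcal{O}\bigl(\sigma^{2}\bigr),
\qquad
\operatorname{Cov}\bigl(V_i(t),V_j(t)\bigr) \;\approx\; \sigma^{2}\,\operatorname{Cov}\bigl(V_i^{(1)}(t),V_j^{(1)}(t)\bigr),
$$

where V^{(0)} is the deterministic solution of the noiseless rate equations and the first-order terms V^{(1)} collect the linear contributions of the three sources of randomness (background noise, initial conditions, synaptic-weight variability), so that pairwise correlations follow from the covariances of these linear terms.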


Fig. 6: Percentage relative error of the correlation, computed between the first-order perturbative expansion and the numerical simulation of the neural network (left), and the probability defined by (4.13) (right), for . The error is small () even for relatively large values of the perturbative parameter (), which confirms the accuracy of the perturbative approach. ε% increases considerably for , but this result is not shown, since such values correspond to biologically unrealistic levels of randomness for a neural network. On the other hand, the figure shows that , which further supports the legitimacy of the Taylor expansion (3.2) and therefore the validity of our results. The probability clearly decreases with σ, because a larger variance brings the membrane potential closer to the boundary defined by the radius of convergence.

Mentions: In this section we show that our first-order perturbative expansion is in good agreement with the actual behavior of the neural network obtained from the simulation of the system (2.1). These stochastic differential equations have been solved numerically  times with the Euler–Maruyama scheme, and this collection of trials has been used to calculate the correlation by a Monte Carlo method (the code, running under Python 2.6, is available in the Supplementary Material). This result is then compared to the perturbative formula of the correlation obtained in the previous sections. The topologies chosen for this comparison are , , and  (see Figs. 2 and 3), while the values of the parameters used in the numerical simulations are shown in Table 1. Moreover, the variable part of the synaptic weights and the external input currents have been chosen as follows:

$$
J_{ij}^{v}(t)=
\begin{cases}
\dfrac{1}{1+t^{2}}, & i,j=0,\ldots,\frac{N}{2}-1,\\[4pt]
\dfrac{1}{2}\bigl[1+\operatorname{erf}(2t)\bigr], & i=0,\ldots,\frac{N}{2}-1,\ j=\frac{N}{2},\ldots,N-1,\\[4pt]
\dfrac{1}{2}\bigl[1+e^{-t}\cos(3t)\bigr], & i=\frac{N}{2},\ldots,N-1,\ j=0,\ldots,\frac{N}{2}-1,\\[4pt]
1, & i,j=\frac{N}{2},\ldots,N-1,
\end{cases}
\tag{7.1}
$$

$$
I_{i}^{v}(t)=
\begin{cases}
\sin(4t), & i=0,\ldots,\frac{N}{2}-1,\\
1-e^{-2t}, & i=\frac{N}{2},\ldots,N-1.
\end{cases}
$$

We plot this comparison as a function of time (Fig. 5), together with the percentage relative error

$$
\varepsilon\% \;=\; 100\times\biggl\vert\frac{\mathrm{numerical\ Corr}-\mathrm{first\text{-}order\ perturbative\ Corr}}{\mathrm{numerical\ Corr}}\biggr\vert
$$

as a function of the perturbative parameters (left-hand side of Fig. 6). In order to avoid high-dimensional plots, we assume that .
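
The following is a minimal Python sketch (not the authors' supplementary code) of the numerical procedure described above: Euler–Maruyama integration of a stochastic rate equation over many independent trials, a Monte Carlo estimate of a pairwise correlation, and the percentage relative error ε% against a reference (e.g. perturbative) value. The drift term, the sigmoidal activation S, and all parameter values are assumptions standing in for the paper's system (2.1) and Table 1; only the external input follows I^v(t) of Eq. (7.1).

import numpy as np

def simulate_trials(J, I_func, sigma, V0, T, dt, n_trials, rng):
    # Euler-Maruyama integration of the assumed rate equation
    #   dV_i = (-V_i + sum_j J_ij S(V_j) + I_i(t)) dt + sigma dW_i.
    # Returns an array of shape (n_trials, n_steps + 1, N).
    N = len(V0)
    n_steps = int(round(T / dt))
    S = lambda v: 1.0 / (1.0 + np.exp(-v))   # sigmoidal activation (assumed)
    V = np.empty((n_trials, n_steps + 1, N))
    V[:, 0, :] = V0
    for k in range(n_steps):
        t = k * dt
        drift = -V[:, k, :] + S(V[:, k, :]) @ J.T + I_func(t)
        noise = sigma * np.sqrt(dt) * rng.standard_normal((n_trials, N))
        V[:, k + 1, :] = V[:, k, :] + dt * drift + noise
    return V

def correlation(V, step, i, j):
    # Monte Carlo estimate of Corr(V_i(t), V_j(t)) across trials at one time step.
    return np.corrcoef(V[:, step, i], V[:, step, j])[0, 1]

def percent_error(corr_numerical, corr_perturbative):
    # eps% = 100 * |(numerical Corr - first-order perturbative Corr) / numerical Corr|
    return 100.0 * abs((corr_numerical - corr_perturbative) / corr_numerical)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N, T, dt = 10, 2.0, 1e-2
    J = np.ones((N, N)) / N                                   # placeholder weight matrix
    first_half = np.arange(N) < N // 2
    I_func = lambda t: np.where(first_half, np.sin(4 * t), 1.0 - np.exp(-2.0 * t))  # Eq. (7.1)
    V = simulate_trials(J, I_func, sigma=0.1, V0=np.zeros(N),
                        T=T, dt=dt, n_trials=2000, rng=rng)
    print("numerical Corr(V_0, V_{N-1}) at t = T:", correlation(V, step=-1, i=0, j=N - 1))

The same Monte Carlo estimate, evaluated on a grid of perturbative parameters and fed to percent_error together with the corresponding first-order prediction, reproduces the kind of comparison plotted in Fig. 6.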

