Reconstruction of recurrent synaptic connectivity of thousands of neurons from simulated spiking activity.

Zaytsev YV, Morrison A, Deger M - J Comput Neurosci (2015)

Bottom Line: Previous methods, including those of the same class, did not allow recurrent networks of that order of magnitude to be reconstructed due to prohibitive computational cost and numerical instabilities. Finally, we successfully reconstruct the connectivity of a hidden synfire chain that is embedded in a random network, which requires clustering of the network connectivity to reveal the synfire groups. Our results demonstrate how synaptic connectivity could potentially be inferred from large-scale parallel spike train recordings.

View Article: PubMed Central - PubMed

Affiliation: Simulation Laboratory Neuroscience - Bernstein Facility for Simulation and Database Technology, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich Research Center, Jülich, Germany, yury@zaytsev.net.

ABSTRACT
Dynamics and function of neuronal networks are determined by their synaptic connectivity. Current experimental methods to analyze synaptic network structure on the cellular level, however, cover only small fractions of functional neuronal circuits, typically without a simultaneous record of neuronal spiking activity. Here we present a method for the reconstruction of large recurrent neuronal networks from thousands of parallel spike train recordings. We employ maximum likelihood estimation of a generalized linear model of the spiking activity in continuous time. For this model the point process likelihood is concave, such that a global optimum of the parameters can be obtained by gradient ascent. Previous methods, including those of the same class, did not allow recurrent networks of that order of magnitude to be reconstructed due to prohibitive computational cost and numerical instabilities. We describe a minimal model that is optimized for large networks and an efficient scheme for its parallelized numerical optimization on generic computing clusters. For a simulated balanced random network of 1000 neurons, synaptic connectivity is recovered with a misclassification error rate of less than 1 % under ideal conditions. We show that the error rate remains low in a series of example cases under progressively less ideal conditions. Finally, we successfully reconstruct the connectivity of a hidden synfire chain that is embedded in a random network, which requires clustering of the network connectivity to reveal the synfire groups. Our results demonstrate how synaptic connectivity could potentially be inferred from large-scale parallel spike train recordings.
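The estimation procedure summarized in the abstract can be illustrated with a toy, discretized sketch. The following is not the authors' continuous-time implementation: it fits the base rate and incoming weights of a single neuron by gradient ascent on a discretized point-process log-likelihood with an exponential link (which is concave in the parameters, so plain gradient ascent suffices). All data sizes, rates, and step sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (all values illustrative): binned spike trains of N presynaptic
# neurons (X) and one postsynaptic neuron (y); dt is the bin width in seconds.
N, T, dt = 20, 20000, 0.001
true_w = rng.normal(0.0, 0.5, N)
true_b = np.log(5.0)                              # base rate of ~5 spikes/s
X = rng.binomial(1, 0.05, (T, N)).astype(float)
lam_true = np.exp(true_b + X @ true_w)
y = rng.binomial(1, np.clip(lam_true * dt, 0.0, 1.0)).astype(float)

def neg_log_likelihood(b, w, X, y, dt):
    # Discretized point-process log-likelihood:
    #   sum_t [ y_t * log(lambda_t) - lambda_t * dt ]
    log_lam = b + X @ w
    return -(y @ log_lam - dt * np.exp(log_lam).sum())

def fit_glm(X, y, dt, lr=50.0, steps=2000):
    # Gradient ascent on the concave log-likelihood; the gradient with
    # respect to the linear predictor at bin t is (y_t - lambda_t * dt).
    b, w = 0.0, np.zeros(X.shape[1])
    for _ in range(steps):
        resid = y - np.exp(b + X @ w) * dt
        b += lr * resid.sum() / len(y)
        w += lr * (X.T @ resid) / len(y)
    return b, w

b_hat, w_hat = fit_glm(X, y, dt)
```

Because the likelihood is concave, any step size within the stability region reaches the same global optimum; the paper's contribution lies in making this optimization tractable for thousands of neurons in parallel, which this single-neuron sketch does not attempt.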

No MeSH data available.


Fig. 3: Reconstruction of a random balanced network of GLM neurons. The reconstruction was performed for τ=20 ms, d=1.5 ms and ds=0.1 ms. a Gaussian Mixture Model fit for the probability density function of the elements of the reconstructed synaptic weight matrix J (black solid curve) and individual components contributed by excitatory (red solid curve, 〈Je〉=1.004 mV), inhibitory (blue solid curve, 〈Ji〉=−5.023 mV) and absent (green solid curve, 〈JØ〉=−0.002 mV) connections. For comparison, we plot as histograms of n=200 bins the distributions of the reconstructed synaptic weights, partitioned into three classes and colored according to whether the corresponding entry in the ground truth connectivity matrix was JØ=0 mV (unconnected; green), Je=1 mV (excitatory; red) or Ji=−5 mV (inhibitory; blue). A perfect reconstruction would result in delta peaks at the three synaptic strength values of the original connectivity matrix, marked with red, blue and green dashed vertical lines. The scale of the vertical axis is logarithmic, except for the first decade, which is in linear scale. b, c Distributions of the identified base rates of the neurons and weights of the self-connections, approximated with histograms and Gaussian KDEs. Black dashed vertical lines mark the ground truth values which should have been recovered (c=5 s−1 and Js=−25 mV, respectively)
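The Gaussian Mixture Model fit described in the caption can be sketched with a small, self-contained routine. This is a generic expectation-maximization loop for a one-dimensional three-component mixture run on synthetic weights mimicking the three classes; sample sizes, noise levels, and initial values are illustrative assumptions, not the paper's data or code.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic weight sample mimicking the three mixture components in the
# figure (proportions and spreads are illustrative assumptions).
x = np.concatenate([
    rng.normal(0.0, 0.10, 8000),   # absent connections
    rng.normal(1.0, 0.15, 1000),   # excitatory connections
    rng.normal(-5.0, 0.30, 1000),  # inhibitory connections
])

def em_gmm_1d(x, mu, sigma, pi, iters=200):
    """Plain EM for a 1-D Gaussian mixture; k is fixed by the initial values."""
    mu, sigma, pi = (np.asarray(v, float) for v in (mu, sigma, pi))
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        dens = (pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2)
                / (sigma * np.sqrt(2.0 * np.pi)))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted means, standard deviations and mixing proportions
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        pi = nk / len(x)
    return mu, sigma, pi

mu, sigma, pi = em_gmm_1d(x, mu=[-5.0, 0.0, 1.0],
                          sigma=[1.0, 1.0, 1.0], pi=[1/3, 1/3, 1/3])
```

The fitted component means play the role of the colored solid curves in panel a: each reconstructed weight can then be assigned to the component with the highest posterior responsibility.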

Mentions: To assess the degree to which undersampling deteriorates the quality of the network reconstructions, we performed several experiments with different datasets, each being a subsample of the original one presented in Section 3.1. In each experiment we randomly selected a fraction of neurons (maintaining the ratio of excitatory and inhibitory neurons) that were fed into the optimizer. The results are shown in Fig. 11. In contrast to Fig. 3, here the connections were classified using k-means as described in Section 3.3, which is more robust in the undersampled cases. Therefore, for the case of N=1000 neurons the MER is slightly higher than when classified using GMM, as reported in Table 2. As expected, the MER of the partial network increased as we decreased the number of neurons that were visible to the GLM (undersampling). This was largely due to the broadening of the distribution of the synaptic weights for absent connections (data not shown, cf. Fig. 3). Yet, in all cases, synapse classification based on the reconstruction method was substantially better than random classification of the synapses; see Appendix C.1 for the derivation of the chance-level MER.
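The k-means classification and misclassification error rate (MER) mentioned above can be sketched as follows. This is a generic one-dimensional Lloyd's algorithm applied to synthetic weights, not the authors' pipeline; the class proportions, noise level, and initialization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical reconstructed weights: the three ground-truth values of the
# example network (0, +1 and -5 mV) plus Gaussian noise; proportions and
# noise level are illustrative assumptions.
centers_true = np.array([0.0, 1.0, -5.0])      # absent, excitatory, inhibitory
labels_true = rng.choice(3, size=5000, p=[0.8, 0.1, 0.1])
weights = centers_true[labels_true] + rng.normal(0.0, 0.2, 5000)

def kmeans_1d(x, centers, iters=50):
    """Lloyd's algorithm on scalar data; k is fixed by the initial centers."""
    c = np.asarray(centers, float).copy()
    for _ in range(iters):
        # Assignment step: nearest center for every weight
        assign = np.argmin(np.abs(x[:, None] - c[None, :]), axis=1)
        # Update step: recompute each center as the mean of its members
        for k in range(len(c)):
            if np.any(assign == k):
                c[k] = x[assign == k].mean()
    return assign, c

assign, centers = kmeans_1d(weights, [weights.min(), 0.0, weights.max()])

# Map each cluster to the nearest ground-truth class and compute the MER
# as the fraction of misclassified connections.
mapping = np.argmin(np.abs(centers[:, None] - centers_true[None, :]), axis=1)
mer = np.mean(mapping[assign] != labels_true)
```

Unlike the GMM fit, k-means needs no parametric density assumption, which is one plausible reason it is more robust when undersampling broadens the weight distributions; the chance-level MER baseline it is compared against is derived in Appendix C.1 of the paper.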

