Reconstruction of recurrent synaptic connectivity of thousands of neurons from simulated spiking activity.

Zaytsev YV, Morrison A, Deger M - J Comput Neurosci (2015)

Bottom Line: We describe a minimal model that is optimized for large networks and an efficient scheme for its parallelized numerical optimization on generic computing clusters. Finally, we successfully reconstruct the connectivity of a hidden synfire chain that is embedded in a random network, which requires clustering of the network connectivity to reveal the synfire groups. Our results demonstrate how synaptic connectivity could potentially be inferred from large-scale parallel spike train recordings.


Affiliation: Simulation Laboratory Neuroscience - Bernstein Facility for Simulation and Database Technology, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich Research Center, Jülich, Germany, yury@zaytsev.net.

ABSTRACT
Dynamics and function of neuronal networks are determined by their synaptic connectivity. Current experimental methods to analyze synaptic network structure on the cellular level, however, cover only small fractions of functional neuronal circuits, typically without a simultaneous record of neuronal spiking activity. Here we present a method for the reconstruction of large recurrent neuronal networks from thousands of parallel spike train recordings. We employ maximum likelihood estimation of a generalized linear model of the spiking activity in continuous time. For this model the point process likelihood is concave, such that a global optimum of the parameters can be obtained by gradient ascent. Previous methods, including those of the same class, did not allow recurrent networks of that order of magnitude to be reconstructed due to prohibitive computational cost and numerical instabilities. We describe a minimal model that is optimized for large networks and an efficient scheme for its parallelized numerical optimization on generic computing clusters. For a simulated balanced random network of 1000 neurons, synaptic connectivity is recovered with a misclassification error rate of less than 1 % under ideal conditions. We show that the error rate remains low in a series of example cases under progressively less ideal conditions. Finally, we successfully reconstruct the connectivity of a hidden synfire chain that is embedded in a random network, which requires clustering of the network connectivity to reveal the synfire groups. Our results demonstrate how synaptic connectivity could potentially be inferred from large-scale parallel spike train recordings.
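To make the estimation principle in the abstract concrete, the following sketch fits the incoming weights of a single target neuron by gradient ascent on a discrete-time approximation of the point-process log-likelihood. It assumes an exponential link function and a pre-built design matrix of delayed, filtered presynaptic activity; the names (fit_incoming_weights, X, y) are illustrative and not taken from the paper, whose actual model and parallelized optimization scheme differ in detail.

    import numpy as np

    def fit_incoming_weights(X, y, dt=1e-3, lr=0.1, n_iter=500):
        """Gradient ascent on a discrete-time point-process log-likelihood for one
        target neuron, assuming an exponential link lambda(t) = exp(X @ w).

        X : (T, N) design matrix of delayed, filtered presynaptic activity per bin
        y : (T,)   0/1 spike indicator of the target neuron per bin of width dt
        Returns the estimated incoming weight vector w (one entry per presynaptic neuron).
        """
        T, N = X.shape
        w = np.zeros(N)
        for _ in range(n_iter):
            lam = np.exp(X @ w)          # conditional intensity in each bin
            # log-likelihood ~ sum_t [ y_t * (X @ w)_t - lam_t * dt ]; concave in w
            grad = X.T @ (y - lam * dt)  # gradient of the log-likelihood w.r.t. w
            w += lr * grad / T           # averaged ascent step
        return w

In a setting like the paper's, one such optimization would be run per neuron, and the fitted weights then thresholded or clustered to decide which connections are present.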


Fig. 1: Schematic of the point process generalized linear model (PP GLM) of a recurrent spiking neuronal network. In this model, the spike trains Sj from the neurons in the network, after incurring transmission delays dij, pass through a linear filtering stage Ki. The resulting (pseudo) membrane potential Ui(t) is fed into a nonlinear link function fi, which transforms it into the conditional intensity function λi(t). The latter drives the probabilistic spiking mechanism that generates an output spike train Si for the i-th neuron. Note that this spike train is then also fed back as an input to the neuron itself via a “self-connection” in order to model its refractory, post-spike properties.

Mentions: The activity of the individual nerve cells can be characterized by a stochastic GLM that postulates that two consecutive operations are performed by the neuron on its input. First, the dimensionality of the observable signal is reduced by means of a linear transformation Ki. This transformation models synaptic and dendritic filtering, input summation and leaky integration in the soma. The result is a one-dimensional quantity that is analogous to the membrane potential of a point neuron model. Second, this transformed one-dimensional signal is fed into a nonlinear probabilistic spiking mechanism, which works by sampling from an inhomogeneous Poisson process with an instantaneous rate (conditional intensity function) given by λi(t) = fi(Ui(t)). Here, fi(⋅) is a function that captures the nonlinear properties of the neuron. Both the linear filter Ki and the nonlinearity fi are specified by 𝜃i, a set of parameters that describes the characteristics of the i-th neuron. The schematic of this model is shown in Fig. 1.
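The two-stage structure described above (linear filtering into a pseudo membrane potential, followed by a nonlinear link and Poisson-like spike generation) can be summarized in a short discrete-time sketch. It assumes an exponential link function and a per-bin Bernoulli approximation of the inhomogeneous Poisson process, and it omits the post-spike self-connection for brevity; the names (simulate_glm_neuron, kernel, bias) are illustrative rather than taken from the paper.

    import numpy as np

    def simulate_glm_neuron(input_spikes, w, delay_bins, kernel, bias, dt=1e-3, rng=None):
        """Discrete-time sketch of the two-stage PP GLM for a single target neuron.

        input_spikes : (N, T) 0/1 spike trains of the N presynaptic neurons
        w            : (N,)   synaptic weights onto the target neuron
        delay_bins   : (N,)   transmission delays d_ij, expressed in time bins
        kernel       : (K,)   causal linear filter K_i (synaptic/dendritic filtering)
        bias         : float  baseline drive setting the spontaneous firing rate
        Returns a (T,) 0/1 output spike train for the target neuron.
        """
        rng = np.random.default_rng() if rng is None else rng
        N, T = input_spikes.shape
        # Stage 1: delay, filter and sum the inputs -> pseudo membrane potential U(t)
        U = np.full(T, bias, dtype=float)
        for j in range(N):
            delayed = np.roll(input_spikes[j].astype(float), delay_bins[j])
            delayed[:delay_bins[j]] = 0.0                 # no wrap-around from the end
            U += w[j] * np.convolve(delayed, kernel)[:T]
        # Stage 2: nonlinear link and probabilistic spike generation
        lam = np.exp(U)                                   # conditional intensity lambda(t)
        p_spike = 1.0 - np.exp(-lam * dt)                 # per-bin Bernoulli approximation
        return (rng.random(T) < p_spike).astype(int)

Reconstruction then amounts to inverting this generative model: given the recorded spike trains, the weights (and hence the connectivity) are recovered by maximizing the corresponding likelihood, as sketched after the abstract above.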

