Reconstruction of recurrent synaptic connectivity of thousands of neurons from simulated spiking activity.

Zaytsev YV, Morrison A, Deger M - J Comput Neurosci (2015)

Bottom Line: Previous methods, including those of the same class, did not allow recurrent networks of that order of magnitude to be reconstructed due to prohibitive computational cost and numerical instabilities. Finally, we successfully reconstruct the connectivity of a hidden synfire chain that is embedded in a random network, which requires clustering of the network connectivity to reveal the synfire groups. Our results demonstrate how synaptic connectivity could potentially be inferred from large-scale parallel spike train recordings.

View Article: PubMed Central - PubMed

Affiliation: Simulation Laboratory Neuroscience - Bernstein Facility for Simulation and Database Technology, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich Research Center, Jülich, Germany, yury@zaytsev.net.

ABSTRACT
Dynamics and function of neuronal networks are determined by their synaptic connectivity. Current experimental methods to analyze synaptic network structure on the cellular level, however, cover only small fractions of functional neuronal circuits, typically without a simultaneous record of neuronal spiking activity. Here we present a method for the reconstruction of large recurrent neuronal networks from thousands of parallel spike train recordings. We employ maximum likelihood estimation of a generalized linear model of the spiking activity in continuous time. For this model the point process likelihood is concave, such that a global optimum of the parameters can be obtained by gradient ascent. Previous methods, including those of the same class, did not allow recurrent networks of that order of magnitude to be reconstructed due to prohibitive computational cost and numerical instabilities. We describe a minimal model that is optimized for large networks and an efficient scheme for its parallelized numerical optimization on generic computing clusters. For a simulated balanced random network of 1000 neurons, synaptic connectivity is recovered with a misclassification error rate of less than 1 % under ideal conditions. We show that the error rate remains low in a series of example cases under progressively less ideal conditions. Finally, we successfully reconstruct the connectivity of a hidden synfire chain that is embedded in a random network, which requires clustering of the network connectivity to reveal the synfire groups. Our results demonstrate how synaptic connectivity could potentially be inferred from large-scale parallel spike train recordings.
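The concave point-process likelihood and gradient ascent mentioned in the abstract can be illustrated with a minimal discrete-time toy. Everything below (the feature matrix, the exponential link, the true parameters, and the learning rate) is a synthetic assumption for illustration, not the paper's actual model of recurrent coupling filters:

```python
import numpy as np

# Sketch of maximum likelihood estimation for a point-process GLM.
# With an exponential link, lambda(t) = exp(theta . x(t)), the discretized
# log-likelihood  sum_t n_t * (theta . x_t) - dt * sum_t exp(theta . x_t)
# is concave in theta, so plain gradient ascent reaches the global optimum.
rng = np.random.default_rng(0)
dt = 0.001                       # bin width of the time discretization (s)
T = 20000                        # number of time bins
X = np.column_stack([np.ones(T), rng.standard_normal(T)])  # [bias, covariate]
theta_true = np.array([2.0, 0.5])
lam = np.exp(X @ theta_true)     # conditional intensity per bin (spikes/s)
spikes = rng.poisson(lam * dt)   # synthetic spike counts per bin

def log_likelihood(theta):
    return spikes @ (X @ theta) - dt * np.exp(X @ theta).sum()

def gradient(theta):
    # Derivative of the log-likelihood above with respect to theta
    return X.T @ (spikes - dt * np.exp(X @ theta))

theta = np.zeros(2)
for _ in range(3000):            # gradient ascent on a concave objective
    theta += 1e-3 * gradient(theta)
```

In a network setting, each neuron's parameter vector would contain a bias and one coupling weight per potential presynaptic partner, and the per-neuron likelihoods are independent of one another, which is what makes the optimization parallelizable across a computing cluster.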

No MeSH data available.


Related in: MedlinePlus

Cross-validation for the membrane time constant τ and transmission delay d. Log-likelihood computed on the validation dataset, using parameters estimated from the training dataset for different values of the parameters τ and d. For each trial, the log-likelihood has been rescaled. The red star marks the average horizontal location of the peaks of all curves in the plot. Each panel shows Ns=75 parameter scans for Ne=60 excitatory and Ni=15 inhibitory neurons, randomly selected from the complete recording of N=1000 neurons. a, b Cross-validation for τ using fixed values of d=1.5 ms (standard deviation σ=1.14 ms) and d=1.7 ms (σ=1.02 ms). c, d Cross-validation for d using fixed values of τ=20 ms (σ=0.04 ms) and τ=10 ms (σ=0.01 ms)

Mentions: In order to recover the GLM parameters τ and d for this experiment, we applied a cross-validation procedure. It is important to note that we do not expect to obtain exactly τ=τm=20 ms and d=1.5 ms, due to the mismatch between the LIF model with α-shaped PSCs and the GLM with exponential PSPs. Instead, we want to recover the optimal parameters τ and d for which the GLM produces dynamics most similar to the spike trains recorded from the LIF model. We split the available data into a training and a validation dataset, and performed reconstructions for a subset of Ns=75 neurons on the training dataset, varying one parameter while keeping the other fixed. The resulting parameter estimates θi were then used to calculate the log-likelihood function on the validation dataset. Two different datasets (training and validation) were used to ensure that the chosen parameter values generalize and are not specific to the training sample. The validation curves are shown in Fig. 4a, c (the curves for the training dataset look identical); note that they all have an easily identifiable maximum. Subsequently, we averaged the locations of the maxima over all trials and performed another cross-validation run (Fig. 4b, d) with the updated parameter values. Repeating this procedure of alternately fixing one parameter and cross-validating the other would lead to a local extremum in the (τ,d) parameter space. However, we opted to stop after only a few iterations, because the procedure is computationally expensive, and in order to assess whether a sub-optimal choice of τ=10 ms and d=1.7 ms would lead to acceptable estimation results.
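The alternating scan described above can be sketched as follows. The validation log-likelihood here is a smooth synthetic stand-in, peaked at hypothetical values (τ=14 ms, d=1.6 ms), rather than an actual GLM fit on spike data; it exists only to make the fix-one-parameter, scan-the-other loop runnable:

```python
import numpy as np

# Hypothetical stand-in for fitting the GLM on the training data and
# evaluating the log-likelihood of a candidate (tau, d) on the validation
# data. The peak location (14 ms, 1.6 ms) is an arbitrary assumption.
def validation_loglik(tau, d):
    return -((tau - 14.0) / 5.0) ** 2 - ((d - 1.6) / 0.5) ** 2

def scan_peak(grid, loglik_1d):
    """Evaluate the validation log-likelihood along a 1-D parameter grid
    and return the location of its maximum (the peak of one scan curve).
    With Ns neurons one would average the peak locations of all Ns curves,
    as marked by the red star in Fig. 4."""
    values = [loglik_1d(p) for p in grid]
    return grid[int(np.argmax(values))]

def alternating_cross_validation(tau, d, n_iter=3):
    """Alternately fix one parameter and cross-validate the other."""
    tau_grid = np.linspace(5.0, 30.0, 251)   # tau candidates (ms)
    d_grid = np.linspace(0.5, 3.0, 251)      # delay candidates (ms)
    for _ in range(n_iter):
        tau = scan_peak(tau_grid, lambda t: validation_loglik(t, d))
        d = scan_peak(d_grid, lambda x: validation_loglik(tau, x))
    return tau, d

tau_hat, d_hat = alternating_cross_validation(tau=20.0, d=1.5)
```

Because this surrogate surface is separable in τ and d, the loop converges in a single iteration; on real data each scan requires a full set of GLM reconstructions, which is why the authors stop after only a few iterations.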

