The chronotron: a neuron that learns to fire temporally precise spike patterns.

Florian RV - PLoS ONE (2012)

Bottom Line: When the input is noisy, the classification also leads to noise reduction. The chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli, or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm.


Affiliation: Center for Cognitive and Neural Studies, Romanian Institute of Science and Technology, Cluj-Napoca, Romania. florian@rist.ro

ABSTRACT
In many cases, neurons process information carried by the precise timings of spikes. Here we show how neurons can learn to generate specific temporally precise output spikes in response to input patterns of spikes having precise timings, thus processing and memorizing information that is entirely temporally coded, both as input and as output. We introduce two new supervised learning rules for spiking neurons with temporal coding of information (chronotrons), one that provides high memory capacity (E-learning), and one that has a higher biological plausibility (I-learning). With I-learning, the neuron learns to fire the target spike trains through synaptic changes that are proportional to the synaptic currents at the timings of real and target output spikes. We study these learning rules in computer simulations where we train integrate-and-fire neurons. Both learning rules allow neurons to fire at the desired timings, with sub-millisecond precision. We show how chronotrons can learn to classify their inputs, by firing identical, temporally precise spike trains for different inputs belonging to the same class. When the input is noisy, the classification also leads to noise reduction. We compute lower bounds for the memory capacity of chronotrons and explore the influence of various parameters on chronotrons' performance. The chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm.

pone-0040233-g004: The error landscape for a neuron with two synapses and the descent on this landscape during learning. The neuron receives several input spikes on each synapse, the same as in Figs. 1 and 2, and has to fire one spike at a predefined target timing, the same as in Fig. 2. (A), (B) Contour plots of the VP and E distances between the actual spike train and the target spike train as a function of the values of the synaptic efficacies. The thick lines correspond to discontinuities of the distances. (A) VP distance. (B) E distance. (C), (D), (E) The dynamics of the synaptic efficacies according to the learning rules. The black lines represent actual trajectories of the synaptic efficacies. The vectors represent synaptic changes. The green line corresponds to the values of the synaptic efficacies for which the output matches the target spike train. (C) E-learning. (D) I-learning. (E) ReSuMe.
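The VP distance shown in panel (A) is the Victor-Purpura spike-train distance, which can be computed with a standard dynamic program: deleting or inserting a spike costs 1, and shifting a spike by an interval dt costs q*|dt|. A minimal sketch (the function name and the default value of the cost parameter q are our own choices, not taken from the paper):

```python
import numpy as np

def vp_distance(train_a, train_b, q=1.0):
    """Victor-Purpura spike-train distance: the minimal total cost of
    transforming one spike train into the other, where deleting or
    inserting a spike costs 1 and shifting a spike by dt costs q*|dt|."""
    n, m = len(train_a), len(train_b)
    # dp[i, j] = distance between the first i spikes of a and the first j of b
    dp = np.zeros((n + 1, m + 1))
    dp[:, 0] = np.arange(n + 1)   # delete all remaining spikes of a
    dp[0, :] = np.arange(m + 1)   # insert all remaining spikes of b
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i, j] = min(
                dp[i - 1, j] + 1,      # delete spike i of a
                dp[i, j - 1] + 1,      # insert spike j of b
                dp[i - 1, j - 1] + q * abs(train_a[i - 1] - train_b[j - 1]),  # shift
            )
    return dp[n, m]
```

For small q, shifting is cheap and nearby spikes are matched; for large q, the distance degenerates into counting non-coincident spikes, which is the source of the discontinuities visible as thick lines in the error landscape.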

Mentions: E-learning aims to minimize an error function, Eq. (3) in the paper, weighted by a positive parameter. The first sum in this function runs over the independent actual spikes, the second sum runs over the independent target spikes, and the last sum runs over all unique pairs of matching spikes. Because the creation and deletion of spikes, and changes in their classification as independent or matched, lead to discontinuous changes of the error (Fig. 4 B), gradient descent can only be ensured piecewise. The synaptic changes that minimize the error function through piecewise gradient descent are proportional to the negative gradient of the error with respect to the synaptic efficacies. Performing the derivation, and after some approximations (Methods), yields the E-learning rule, Eq. (4), which depends on the learning rate and two further positive parameters.
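The flavor of such spike-timing learning can be illustrated with a toy rule. The sketch below is not the paper's E-learning rule: it trains a discrete-time leaky integrate-and-fire neuron to fire its first spike at a target time, using each synapse's exponentially decaying trace evaluated at the target time as an eligibility signal (loosely in the spirit of I-learning's use of synaptic currents at spike timings). All function names, parameter values, and the update rule itself are our own illustrative assumptions.

```python
import numpy as np

def trace(input_times, t, tau=10.0):
    """Exponentially decaying postsynaptic trace of one synapse at time t."""
    return sum(np.exp(-(t - s) / tau) for s in input_times if s <= t)

def first_spike(weights, inputs, T=100, threshold=1.0):
    """First time step at which the membrane potential crosses threshold,
    or None if the neuron stays silent over the whole trial."""
    for t in range(T):
        v = sum(w * trace(times, t) for w, times in zip(weights, inputs))
        if v >= threshold:
            return t
    return None

def train(weights, inputs, t_target, lr=0.05, epochs=500):
    """Toy spike-timing rule: potentiate when the neuron fires too late
    (or not at all), depress when it fires too early, in proportion to
    each synapse's trace at the target time."""
    w = np.array(weights, dtype=float)
    elig = np.array([trace(times, t_target) for times in inputs])
    for _ in range(epochs):
        t_actual = first_spike(w, inputs)
        if t_actual == t_target:
            break                    # target timing reached
        if t_actual is None or t_actual > t_target:
            w += lr * elig           # too late or silent: potentiate
        else:
            w -= lr * elig           # too early: depress
    return w
```

For example, with inputs = [[2, 6], [4, 10]] and initial weights [0.05, 0.05], a few epochs of potentiation suffice for the first output spike to land exactly on a target at t = 10. Unlike this toy rule, the paper's E-learning descends a spike-train distance and therefore also handles multiple target spikes and spike creation/deletion.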

