The chronotron: a neuron that learns to fire temporally precise spike patterns.

Florian RV - PLoS ONE (2012)

Bottom Line: When the input is noisy, the classification also leads to noise reduction. The chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm.


Affiliation: Center for Cognitive and Neural Studies, Romanian Institute of Science and Technology, Cluj-Napoca, Romania. florian@rist.ro

ABSTRACT
In many cases, neurons process information carried by the precise timings of spikes. Here we show how neurons can learn to generate specific temporally precise output spikes in response to input patterns of spikes having precise timings, thus processing and memorizing information that is entirely temporally coded, both as input and as output. We introduce two new supervised learning rules for spiking neurons with temporal coding of information (chronotrons), one that provides high memory capacity (E-learning), and one that has a higher biological plausibility (I-learning). With I-learning, the neuron learns to fire the target spike trains through synaptic changes that are proportional to the synaptic currents at the timings of real and target output spikes. We study these learning rules in computer simulations where we train integrate-and-fire neurons. Both learning rules allow neurons to fire at the desired timings, with sub-millisecond precision. We show how chronotrons can learn to classify their inputs, by firing identical, temporally precise spike trains for different inputs belonging to the same class. When the input is noisy, the classification also leads to noise reduction. We compute lower bounds for the memory capacity of chronotrons and explore the influence of various parameters on chronotrons' performance. The chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm.
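The abstract describes I-learning only qualitatively: synaptic changes proportional to the synaptic currents at the timings of real and target output spikes. The sketch below illustrates that idea with a leaky integrate-and-fire neuron and exponential synaptic kernels. All constants, the kernel shape, and the simplified update rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative settings (not the paper's parameters)
DT = 0.5          # time step (ms)
T = 100.0         # trial duration (ms)
TAU_M = 10.0      # membrane time constant (ms)
TAU_S = 5.0       # synaptic current time constant (ms)
V_THRESH = 1.0    # firing threshold (arbitrary units)
ETA = 0.01        # learning rate
N_IN = 50         # number of input synapses
n_steps = int(T / DT)

# Input pattern: one temporally precise spike per afferent
input_times = rng.uniform(0, T, size=N_IN)

def synaptic_currents(t):
    """Exponentially decaying current of each afferent at time t."""
    dt = t - input_times
    return np.where(dt >= 0, np.exp(-dt / TAU_S), 0.0)

def run_trial(w):
    """Leaky integrate-and-fire dynamics; returns output spike times."""
    v, spikes = 0.0, []
    for step in range(n_steps):
        t = step * DT
        v += DT / TAU_M * (-v + w @ synaptic_currents(t))
        if v >= V_THRESH:
            spikes.append(t)
            v = 0.0  # reset after each output spike
    return spikes

# Simplified I-learning-style update: strengthen synapses in proportion
# to their current at the target spike times, weaken them at the timings
# of the actual (erroneous) output spikes.
target_times = [30.0, 70.0]
w = rng.uniform(0, 0.5, size=N_IN)  # initialized excitatory

for epoch in range(100):
    actual = run_trial(w)
    dw = sum(synaptic_currents(t_hat) for t_hat in target_times)
    dw -= sum(synaptic_currents(t_f) for t_f in actual)
    w = np.maximum(w + ETA * dw, 0.0)  # clip: synapses stay excitatory
```

The non-negativity clip mirrors the observation (discussed with Fig. 8 below) that under I-learning all synapses remain excitatory, while E-learning, in contrast, lets synapses change sign.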

Fig. 8 (pone-0040233-g008): The distribution of the synaptic efficacies, before and after learning, for the experiments presented in Fig. 7. (A) Before learning. (B)–(E) After 400 learning epochs. (B), (D) E-learning. (C), (E) I-learning. (B), (C) No jitter. (D), (E) A Gaussian jitter with an amplitude of 5 ms is applied to the inputs.

Mentions: Fig. 8 presents the distribution of the synaptic efficacies, before and after learning, for the experiments presented in Fig. 7. This distribution was computed over the 10,000 realizations of the experiments. With I-learning, all synapses remain excitatory, as they were initialized, although a significant fraction of them become close to zero after learning. E-learning allows synapses to change sign. When the input is subject to jitter, the synaptic distributions after learning become broader than in the no-jitter case.
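The qualitative difference between the two rules' weight distributions can be reproduced in miniature. In the sketch below, the learned weight changes are stand-in random perturbations (the real changes come from training, as in Fig. 7); the point is only the distributional contrast: a sign-free rule (E-learning) produces negative efficacies, while a clipped rule (the I-learning analogue) keeps all synapses excitatory with a mass of weights piling up near zero.

```python
import numpy as np

rng = np.random.default_rng(1)
N_REAL, N_SYN = 1000, 50  # realizations, synapses per realization

w0 = rng.uniform(0, 1, size=(N_REAL, N_SYN))   # initial: all excitatory
dw = rng.normal(0, 0.5, size=(N_REAL, N_SYN))  # stand-in learned changes

w_e = w0 + dw                   # E-learning: sign-free, may turn negative
w_i = np.maximum(w0 + dw, 0.0)  # I-learning analogue: stays excitatory

frac_negative = (w_e < 0).mean()     # nonzero under the sign-free rule
frac_near_zero = (w_i < 0.05).mean() # weights piled up near zero
```

Broadening the standard deviation of `dw` (as input jitter would) widens both post-learning distributions, matching the jitter observation above.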


The chronotron: a neuron that learns to fire temporally precise spike patterns.

Florian RV - PLoS ONE (2012)

The distribution of the synaptic efficacies, before and after learning, for the experiments presented in Fig. 7.(A) Before learning. (B)–(E) After 400 learning epochs. (B), (D) E-learning. (C), (E) I-learning. (B), (C) No jitter. (D), (E) A gaussian jitter with an amplitude of 5 ms is applied to the inputs.
© Copyright Policy
Related In: Results  -  Collection

Show All Figures
getmorefigures.php?uid=PMC3412872&req=5

pone-0040233-g008: The distribution of the synaptic efficacies, before and after learning, for the experiments presented in Fig. 7.(A) Before learning. (B)–(E) After 400 learning epochs. (B), (D) E-learning. (C), (E) I-learning. (B), (C) No jitter. (D), (E) A gaussian jitter with an amplitude of 5 ms is applied to the inputs.
Mentions: Fig. 8 presents the distribution of the synaptic efficacies, before and after learning, for the experiments presented in Fig. 7. This distribution has been computed over the 10,000 realizations of the experiments. With I-learning, all synapses stay excitatory, like they were generated initially, although a significant fraction of them become close to zero, after learning. E-learning allows synapses to change sign. When the input is subject to jitter, the synaptic distributions after learning become broader than in the case of no jitter.

Bottom Line: When the input is noisy, the classification also leads to noise reduction.The chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation.Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm.

View Article: PubMed Central - PubMed

Affiliation: Center for Cognitive and Neural Studies, Romanian Institute of Science and Technology, Cluj-Napoca, Romania. florian@rist.ro

ABSTRACT
In many cases, neurons process information carried by the precise timings of spikes. Here we show how neurons can learn to generate specific temporally precise output spikes in response to input patterns of spikes having precise timings, thus processing and memorizing information that is entirely temporally coded, both as input and as output. We introduce two new supervised learning rules for spiking neurons with temporal coding of information (chronotrons), one that provides high memory capacity (E-learning), and one that has a higher biological plausibility (I-learning). With I-learning, the neuron learns to fire the target spike trains through synaptic changes that are proportional to the synaptic currents at the timings of real and target output spikes. We study these learning rules in computer simulations where we train integrate-and-fire neurons. Both learning rules allow neurons to fire at the desired timings, with sub-millisecond precision. We show how chronotrons can learn to classify their inputs, by firing identical, temporally precise spike trains for different inputs belonging to the same class. When the input is noisy, the classification also leads to noise reduction. We compute lower bounds for the memory capacity of chronotrons and explore the influence of various parameters on chronotrons' performance. The chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm.

Show MeSH