The chronotron: a neuron that learns to fire temporally precise spike patterns.

Florian RV - PLoS ONE (2012)

Bottom Line: When the input is noisy, the classification also leads to noise reduction. The chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli, or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm.


Affiliation: Center for Cognitive and Neural Studies, Romanian Institute of Science and Technology, Cluj-Napoca, Romania. florian@rist.ro

ABSTRACT
In many cases, neurons process information carried by the precise timings of spikes. Here we show how neurons can learn to generate specific temporally precise output spikes in response to input patterns of spikes having precise timings, thus processing and memorizing information that is entirely temporally coded, both as input and as output. We introduce two new supervised learning rules for spiking neurons with temporal coding of information (chronotrons), one that provides high memory capacity (E-learning), and one that has a higher biological plausibility (I-learning). With I-learning, the neuron learns to fire the target spike trains through synaptic changes that are proportional to the synaptic currents at the timings of real and target output spikes. We study these learning rules in computer simulations where we train integrate-and-fire neurons. Both learning rules allow neurons to fire at the desired timings, with sub-millisecond precision. We show how chronotrons can learn to classify their inputs, by firing identical, temporally precise spike trains for different inputs belonging to the same class. When the input is noisy, the classification also leads to noise reduction. We compute lower bounds for the memory capacity of chronotrons and explore the influence of various parameters on chronotrons' performance. The chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm.
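The I-learning rule summarized above changes each synapse in proportion to its synaptic current at the timings of the target spikes (potentiation) and of the actual output spikes (depression). The following is a minimal sketch of that idea, not the paper's exact formulation: the exponential current kernel, the time constant, the learning rate, and all function names are assumptions made for illustration.

```python
import numpy as np

def synaptic_current(t, presyn_spikes, tau=5.0):
    """Synaptic current at time t driven by presynaptic spikes,
    modeled here with a simple causal exponential kernel (an assumption)."""
    dt = t - np.asarray(presyn_spikes, dtype=float)
    dt = dt[dt >= 0.0]  # only spikes that arrived before t contribute
    return float(np.sum(np.exp(-dt / tau)))

def i_learning_update(weights, input_spikes, actual_out, target_out, lr=0.01):
    """Sketch of an I-learning-style update: potentiate each synapse by its
    current at target spike times, depress it by its current at actual
    output spike times."""
    dw = np.zeros_like(weights)
    for i, spikes in enumerate(input_spikes):
        dw[i] += lr * sum(synaptic_current(t_hat, spikes) for t_hat in target_out)
        dw[i] -= lr * sum(synaptic_current(t, spikes) for t in actual_out)
    return weights + dw
```

Intuitively, a synapse whose input spikes arrive just before a target spike time receives a large positive change, pushing the neuron to fire at the desired timing; synapses that drove a spurious output spike are weakened.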


pone-0040233-g009: The performance of the chronotron learning rules for a classification problem. The input patterns are classified into 3 classes. (A)–(C) The average minimum number of epochs required for correct learning, displayed as a function of the load, for various values of the number of input synapses. Note the scale differences. (A) E-learning. (B) I-learning. (C) ReSuMe. (D) The maximum load for which correct learning can be achieved (the capacity), as a function of the number of input synapses. E-learning performs much better than I-learning or ReSuMe. For E-learning, simulations for higher numbers of input synapses were not performed because of the high computational cost that results from this rule's high capacity. Averages were computed over 500 realizations with different, random initial conditions.

Mentions: The ability of neurons to memorize mappings corresponding to classification tasks increases with the number of input synapses. The ratio of the number of input patterns memorized to the number of input synapses represents the load imposed by the task on the neuron. A characteristic of the neuron's ability to learn is the maximum load for which the mappings are performed correctly [16], which we call the capacity of the neuron. We considered that the chronotron had a correct output when target spikes were reproduced with 1 ms precision, which corresponds to the lower end of the 0.15–5 ms range of spike-timing precision observed in several areas of the brain [27]–[34]. In our setup, both the input and the target output spike trains contained one spike per trial, and information was encoded in the spike latencies. Except where specified, the input spike trains consisted, for each of the synapses, of one spike generated at a uniformly distributed random timing, and the target spike train for each category consisted of one spike at a fixed time (Methods). Fig. 9 illustrates the performance of the chronotron in simulations where inputs were classified into 3 categories. For the particular setup studied, both I-learning and ReSuMe led to a capacity between 0.02 and 0.04 patterns per synapse, while E-learning led to a much higher capacity.
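The two quantities above can be stated compactly; this is an illustrative sketch with names of our choosing, and the one-to-one matching of sorted spike trains is a simplifying assumption, not the paper's evaluation procedure.

```python
def load(num_patterns, num_synapses):
    # Load = p / N: number of memorized input patterns per input synapse.
    return num_patterns / num_synapses

def output_correct(actual_spikes, target_spikes, tol_ms=1.0):
    """True when the actual spike train reproduces every target spike
    within tol_ms, pairing spikes one-to-one in temporal order."""
    if len(actual_spikes) != len(target_spikes):
        return False
    return all(abs(a - t) <= tol_ms
               for a, t in zip(sorted(actual_spikes), sorted(target_spikes)))
```

Under this criterion, a neuron with 200 synapses memorizing 4 patterns operates at a load of 0.02, the low end of the capacity range reported above for I-learning and ReSuMe.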

