The chronotron: a neuron that learns to fire temporally precise spike patterns.

Florian RV - PLoS ONE (2012)

Bottom Line: When the input is noisy, the classification also leads to noise reduction. The chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm.

View Article: PubMed Central - PubMed

Affiliation: Center for Cognitive and Neural Studies, Romanian Institute of Science and Technology, Cluj-Napoca, Romania. florian@rist.ro

ABSTRACT
In many cases, neurons process information carried by the precise timings of spikes. Here we show how neurons can learn to generate specific temporally precise output spikes in response to input patterns of spikes having precise timings, thus processing and memorizing information that is entirely temporally coded, both as input and as output. We introduce two new supervised learning rules for spiking neurons with temporal coding of information (chronotrons), one that provides high memory capacity (E-learning), and one that has a higher biological plausibility (I-learning). With I-learning, the neuron learns to fire the target spike trains through synaptic changes that are proportional to the synaptic currents at the timings of real and target output spikes. We study these learning rules in computer simulations where we train integrate-and-fire neurons. Both learning rules allow neurons to fire at the desired timings, with sub-millisecond precision. We show how chronotrons can learn to classify their inputs, by firing identical, temporally precise spike trains for different inputs belonging to the same class. When the input is noisy, the classification also leads to noise reduction. We compute lower bounds for the memory capacity of chronotrons and explore the influence of various parameters on chronotrons' performance. The chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm.
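The I-learning rule summarized in the abstract (synaptic changes proportional to the synaptic currents at the timings of actual and target output spikes) can be sketched roughly as follows. This is an illustrative Python sketch under assumed details — an exponential synaptic-current kernel, a discrete-time leaky integrate-and-fire neuron, and arbitrary constants — not the paper's exact model or parameters.

```python
import numpy as np

# Illustrative constants; the paper's actual parameters differ.
DT = 0.1       # ms, simulation time step
T = 200.0      # ms, trial duration
TAU_M = 10.0   # ms, membrane time constant
TAU_S = 5.0    # ms, synaptic current time constant
V_TH = 1.0     # firing threshold (arbitrary units)
GAMMA = 0.05   # learning rate

def run_trial(w, input_spikes):
    """Simulate a leaky integrate-and-fire neuron for one trial.

    Returns the output spike times and the per-synapse current traces
    I_i(t), which the I-learning update reads out at spike timings.
    """
    steps = int(T / DT)
    n = len(w)
    I = np.zeros(n)                    # current from each synapse
    I_trace = np.zeros((steps, n))
    v = 0.0
    out_spikes = []
    for k in range(steps):
        t = k * DT
        I *= np.exp(-DT / TAU_S)       # exponential current decay
        for i, times in enumerate(input_spikes):
            if np.any(np.abs(times - t) < DT / 2):
                I[i] += 1.0            # a presynaptic spike arrives
        I_trace[k] = I
        v += (DT / TAU_M) * (-v + w @ I)   # leaky integration
        if v >= V_TH:
            out_spikes.append(t)
            v = 0.0                    # reset after an output spike
    return np.array(out_spikes), I_trace

def i_learning_step(w, input_spikes, target_spikes):
    """One I-learning update: potentiate each synapse in proportion to its
    current at the target spike times, depress it at the actual ones."""
    actual, I_trace = run_trial(w, input_spikes)
    dw = np.zeros_like(w)
    for t in target_spikes:
        dw += GAMMA * I_trace[int(round(t / DT))]
    for t in actual:
        dw -= GAMMA * I_trace[int(round(t / DT))]
    return w + dw
```

Iterating `i_learning_step` pushes the actual output spikes toward the targets; once they coincide, the potentiation and depression terms cancel and the weights stop changing.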



pone-0040233-g010: The dependence of the performance of E-learning on the number of categories, for a classification problem. (A) The average minimum number of epochs required for correct learning, as a function of the load, for various numbers of categories. Regardless of the number of categories, the points fall on the same curve. (B) The maximum load for which correct learning is achieved (the capacity), as a function of the number of categories. The shaded area represents the uncertainty due to the fact that, for a particular number of categories, the load can vary only in discrete steps. The capacity is approximately constant across all numbers of categories.

Mentions: The load and the capacity have been used to characterize neurons with binary outputs, which memorize one bit of information for every pattern. The chronotron can classify inputs into more than one category, and when classifying into Q categories it memorizes log2(Q) bits of information for every input pattern. Therefore, a better measure of the chronotron's learning ability is the information load and the corresponding information capacity, equal to the maximum information load. The number of categories into which a chronotron with latency coding of information classifies its inputs is limited only by the temporal precision of the output spike. For example, if this temporal precision is 1 ms, with the particular setup presented here, the chronotron can encode up to about 80 categories (Methods). When information is encoded in the spike latencies, the simulations showed that the chronotron's capacity does not depend on the number of categories (Fig. 10). The maximum information capacity of the chronotron, for E-learning and the particular setup that we used, can then be computed as about 1.4 bits per synapse (Methods). Extrapolating, this means that a chronotron with about 10,000 input synapses would be able to fire a spike at the correct timing, with 1 ms precision, among up to 80 possible timings, for about 2,200 different random input patterns, and thus to memorize about 13.9 kilobits of information. The information capacity of the perceptron is 2 bits per synapse, and that of the tempotron is around 3 bits per synapse [16]. However, if more than two input categories have to be discriminated, the chronotron has the advantage of being able to carry out computations that would require multiple perceptrons or tempotrons, and is thus more efficient. Unlike the tempotron, the chronotron uses the same coding of information for both inputs and outputs and is therefore able to interact with other chronotrons.
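The capacity arithmetic in this passage can be checked directly: with 80 distinguishable output timings, each pattern stores log2(80) ≈ 6.3 bits, and 2,200 patterns spread over 10,000 synapses give roughly 1.4 bits per synapse. A quick check in Python, using only the numbers quoted in the text:

```python
import math

n_synapses = 10_000    # input synapses in the extrapolated example
n_categories = 80      # distinguishable output timings at 1 ms precision
n_patterns = 2_200     # random input patterns memorized

bits_per_pattern = math.log2(n_categories)     # ~6.32 bits per pattern
total_bits = n_patterns * bits_per_pattern     # ~13.9 kilobits in total
bits_per_synapse = total_bits / n_synapses     # ~1.39 bits per synapse

print(f"{total_bits / 1000:.1f} kbits, {bits_per_synapse:.2f} bits/synapse")
# → 13.9 kbits, 1.39 bits/synapse
```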

