The chronotron: a neuron that learns to fire temporally precise spike patterns.

Florian RV - PLoS ONE (2012)

Bottom Line: When the input is noisy, the classification also leads to noise reduction. The chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli, or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm.


Affiliation: Center for Cognitive and Neural Studies, Romanian Institute of Science and Technology, Cluj-Napoca, Romania. florian@rist.ro

ABSTRACT
In many cases, neurons process information carried by the precise timings of spikes. Here we show how neurons can learn to generate specific temporally precise output spikes in response to input patterns of spikes having precise timings, thus processing and memorizing information that is entirely temporally coded, both as input and as output. We introduce two new supervised learning rules for spiking neurons with temporal coding of information (chronotrons), one that provides high memory capacity (E-learning), and one that has a higher biological plausibility (I-learning). With I-learning, the neuron learns to fire the target spike trains through synaptic changes that are proportional to the synaptic currents at the timings of real and target output spikes. We study these learning rules in computer simulations where we train integrate-and-fire neurons. Both learning rules allow neurons to fire at the desired timings, with sub-millisecond precision. We show how chronotrons can learn to classify their inputs, by firing identical, temporally precise spike trains for different inputs belonging to the same class. When the input is noisy, the classification also leads to noise reduction. We compute lower bounds for the memory capacity of chronotrons and explore the influence of various parameters on chronotrons' performance. The chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm.
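The abstract describes I-learning as synaptic changes proportional to the synaptic currents at the timings of real and target output spikes. A minimal illustrative sketch of that idea follows; the exponential current shape, the time constant, the learning rate, and all function names are assumptions for illustration, not the paper's exact formulation.

```python
import math

TAU_S = 5.0  # synaptic current time constant (ms); illustrative value


def syn_current(input_spikes, t, tau_s=TAU_S):
    """Exponential synaptic current from one afferent, evaluated at time t."""
    return sum(math.exp(-(t - s) / tau_s) for s in input_spikes if s <= t)


def i_learning_step(weights, inputs, actual_out, target_out, lr=0.01):
    """One I-learning-style update: each weight changes in proportion to
    that synapse's current at the target spike times (potentiation) minus
    its current at the actual output spike times (depression)."""
    new_w = []
    for w, spikes in zip(weights, inputs):
        dw = sum(syn_current(spikes, t) for t in target_out) \
           - sum(syn_current(spikes, t) for t in actual_out)
        new_w.append(w + lr * dw)
    return new_w
```

Repeated over trials, such updates strengthen inputs that would drive spikes at missed target times and weaken inputs that drove spurious spikes, moving the output toward the target timings.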

pone-0040233-g007: Learning of a mapping between 10 input patterns, with and without jitter, and one output spike train. Left: The VP distance between the actual and the target output spike train. Center: The timing difference between matching spikes and the target spikes. Right: The probability that the fired spikes matched the target ones. The graphs represent averages and standard deviations over input patterns and over 10,000 realizations. (A)–(D): Evolution during learning, as a function of the learning epoch. (A), (B): No jitter. (C), (D): A Gaussian jitter with an amplitude of 5 ms is added to each presentation of the input patterns. (E), (F): Values after 400 learning epochs, as a function of the amplitude of the input jitter. (A), (C), (E): E-learning. (B), (D), (F): I-learning. The inputs and the trial length are as in Fig. 6. The target output spike train consists of one spike at 100 ms.
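The VP distance used in the figure is the Victor–Purpura spike train metric. A standard dynamic-programming implementation (generic, not taken from the paper's code; the cost parameter q is an assumption) is:

```python
def vp_distance(a, b, q=1.0):
    """Victor-Purpura distance: the minimal cost of transforming spike
    train a into spike train b, where inserting or deleting a spike
    costs 1 and shifting a spike by dt costs q * |dt|."""
    n, m = len(a), len(b)
    # D[i][j]: distance between the first i spikes of a and first j of b
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = float(i)
    for j in range(1, m + 1):
        D[0][j] = float(j)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = min(D[i - 1][j] + 1.0,      # delete a[i-1]
                          D[i][j - 1] + 1.0,      # insert b[j-1]
                          D[i - 1][j - 1] + q * abs(a[i - 1] - b[j - 1]))
    return D[n][m]
```

A distance of 0 means the actual and target trains match exactly; with a single target spike, a distance near 0 indicates one output spike fired close to the target time.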




Mentions: Fig. 7 illustrates learning of a mapping between 10 different input patterns and one output spike train consisting of one spike at the middle of the trial interval. The neuron learns to perform this mapping, for all 10 input patterns, using the same set of synaptic efficacies. For example, for E-learning, in 99.9% of 10,000 realizations, the neuron was able to fire the correct number of spikes (one spike), and the spike had an average timing difference of less than 0.03 ms with respect to the timing of the target spike, after about 8 minutes of learning (simulated time; 241 learning epochs). In 95% of realizations, the average timing error was less than 1 ms after 1.6 minutes of learning (48 learning epochs). Learning worked even when the inputs were jittered, i.e., at each trial, input spikes were displaced around their reference timings according to a Gaussian distribution. For example, under the same conditions as before but with the inputs jittered with a 5 ms amplitude, in more than 95% of the realizations the neuron fired one spike with an average timing error of less than 2 ms, after about 8 minutes of learning (225 epochs). A 5 ms Gaussian jitter amplitude corresponds to a 3.99 ms average timing displacement of the input spikes (Methods), so in this case the mapping also led to noise reduction, doubling the precision of spike timing.
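The quoted 3.99 ms average displacement follows from the half-normal mean: for X ~ Normal(0, σ²), E|X| = σ·√(2/π) ≈ 0.798σ. This is consistent with interpreting the 5 ms jitter "amplitude" as the Gaussian standard deviation (an assumption here, stated precisely in the paper's Methods):

```python
import math

def mean_abs_displacement(sigma):
    """Mean of |X| for X ~ Normal(0, sigma^2): sigma * sqrt(2/pi)."""
    return sigma * math.sqrt(2.0 / math.pi)

# A 5 ms amplitude (taken as the standard deviation) yields the
# 3.99 ms average timing displacement quoted in the text.
print(round(mean_abs_displacement(5.0), 2))  # -> 3.99
```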

