The chronotron: a neuron that learns to fire temporally precise spike patterns.

Florian RV - PLoS ONE (2012)

Bottom Line: When the input is noisy, the classification also leads to noise reduction. The chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli, or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm.


Affiliation: Center for Cognitive and Neural Studies, Romanian Institute of Science and Technology, Cluj-Napoca, Romania. florian@rist.ro

ABSTRACT
In many cases, neurons process information carried by the precise timings of spikes. Here we show how neurons can learn to generate specific temporally precise output spikes in response to input patterns of spikes having precise timings, thus processing and memorizing information that is entirely temporally coded, both as input and as output. We introduce two new supervised learning rules for spiking neurons with temporal coding of information (chronotrons), one that provides high memory capacity (E-learning), and one that has a higher biological plausibility (I-learning). With I-learning, the neuron learns to fire the target spike trains through synaptic changes that are proportional to the synaptic currents at the timings of real and target output spikes. We study these learning rules in computer simulations where we train integrate-and-fire neurons. Both learning rules allow neurons to fire at the desired timings, with sub-millisecond precision. We show how chronotrons can learn to classify their inputs, by firing identical, temporally precise spike trains for different inputs belonging to the same class. When the input is noisy, the classification also leads to noise reduction. We compute lower bounds for the memory capacity of chronotrons and explore the influence of various parameters on chronotrons' performance. The chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm.
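The I-learning rule described above (synaptic changes proportional to the synaptic currents at the timings of real and target output spikes) can be sketched numerically. The following is a minimal illustration, not the paper's implementation: the exponential current kernel, the time constant `TAU`, and the learning rate are assumptions introduced here.

```python
import numpy as np

# Illustrative constants (not taken from the paper)
TAU = 10.0  # synaptic current decay time constant, ms

def synaptic_current(t, pre_spike_times, tau=TAU):
    """Synaptic current at time t from past presynaptic spikes,
    modeled here as a sum of exponential decays."""
    dt = t - np.asarray(pre_spike_times, dtype=float)
    dt = dt[dt >= 0.0]  # only spikes that occurred before t contribute
    return float(np.exp(-dt / tau).sum())

def i_learning_update(pre_spikes, actual_spikes, target_spikes, lr=0.01):
    """Per-synapse weight change in the spirit of I-learning:
    proportional to the synaptic current at the target output spike
    timings (potentiation) minus the current at the actual output
    spike timings (depression)."""
    dw = np.zeros(len(pre_spikes))
    for i, pre in enumerate(pre_spikes):
        plus = sum(synaptic_current(t, pre) for t in target_spikes)
        minus = sum(synaptic_current(t, pre) for t in actual_spikes)
        dw[i] = lr * (plus - minus)
    return dw
```

When an actual output spike coincides with its target, the two terms cancel and the weights stop changing, which is what lets learning converge once the neuron fires at the desired timings.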

pone-0040233-g002: A graphical illustration of the chronotron problem for a neuron with 2 synapses (continued). As in Fig. 1, but for other values of the synaptic efficacies, resulting from the application of E-learning, starting from the situation in Fig. 1 and with the generation of one spike at 75 ms as the target. Left: during learning. Right: after learning converged.

Mentions: The chronotron problem can be illustrated graphically by considering a space with the same number of dimensions as the number of afferent synapses of the neuron. In this space, the synaptic efficacies define a weight vector and the normalized PSPs define a second, time-dependent vector. The PSP vector moves around this space, in time, according to the dynamics of the PSPs, while the weight vector changes on much larger timescales. The neuron fires when the PSP vector touches a hyperplane that is perpendicular to the weight vector and lies at a fixed distance from the origin, set by the firing threshold (Methods). After firing, the PSPs are reset to 0, and thus the trajectories of the PSP vector always start from the origin. This is illustrated in Figs. 1 and 2 for a neuron with 2 synapses and in Fig. 3 for a neuron with 3 synapses. The chronotron problem can then be understood as the problem of setting the spike-generating hyperplane, by changing the weight vector, such that it intersects the trajectory of the PSP vector at exactly those timings when we want spikes to be fired. This problem is very similar to the one that must be solved in reservoir computing [22]–[24], where the state of a high-dimensional dynamical system, such as our PSP vector, is processed by a (usually) linear discriminator such that the switch between output states (the crossing of the hyperplane defined by the linear discriminator) happens at desired moments in time.
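The geometric picture above can be sketched in a toy simulation: the neuron fires when the PSP trajectory first touches the spike-generating hyperplane. This is a minimal illustration under assumed dynamics, not the paper's model; the exponential PSP kernel and the constants `TAU` and `THETA` are placeholders introduced here.

```python
import numpy as np

# Illustrative constants (not taken from the paper)
TAU = 10.0    # PSP decay time constant, ms
THETA = 1.0   # firing threshold

def psp_vector(t, pre_spikes, tau=TAU):
    """The time-dependent PSP vector: one exponentially decaying
    component per synapse, summed over that synapse's past input spikes."""
    y = np.zeros(len(pre_spikes))
    for i, spikes in enumerate(pre_spikes):
        dt = t - np.asarray(spikes, dtype=float)
        y[i] = np.exp(-dt[dt >= 0.0] / tau).sum()
    return y

def first_crossing(w, pre_spikes, t_max=100.0, step=0.1):
    """Scan time for the first moment the PSP trajectory touches the
    spike-generating hyperplane w . y = THETA, i.e. the plane
    perpendicular to the weight vector w at distance THETA/|w|
    from the origin."""
    for t in np.arange(0.0, t_max, step):
        if w @ psp_vector(t, pre_spikes) >= THETA:
            return t
    return None  # the trajectory never reaches the hyperplane
```

Changing `w` tilts and shifts the hyperplane, which moves the crossing time; this is exactly the degree of freedom the learning rules manipulate to place output spikes at the desired timings.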

