Prospective Coding by Spiking Neurons.

Brea J, Gaál AT, Urbanczik R, Senn W - PLoS Comput. Biol. (2016)

Bottom Line: Even if the plasticity window has a width of 20 milliseconds, associations on the time scale of seconds can be learned. We discuss the potential role of the learning mechanism in classical trace conditioning. In the special case that the signal to be predicted encodes reward, the neuron learns to predict the discounted future reward and learning is closely related to the temporal difference learning algorithm TD(λ).

View Article: PubMed Central - PubMed

Affiliation: Department of Physiology, University of Bern, Bern, Switzerland.

ABSTRACT
Animals learn to make predictions, such as associating the sound of a bell with upcoming feeding or predicting a movement that a motor command is eliciting. How predictions are realized on the neuronal level and what plasticity rule underlies their learning is not well understood. Here we propose a biologically plausible synaptic plasticity rule to learn predictions at the single-neuron level on a timescale of seconds. The learning rule allows a spiking two-compartment neuron to match its current firing rate to its own expected future discounted firing rate. For instance, if an originally neutral event is repeatedly followed by an event that elevates the firing rate of a neuron, the originally neutral event will eventually also elevate the neuron's firing rate. The plasticity rule is a form of spike-timing-dependent plasticity in which a presynaptic spike followed by a postsynaptic spike leads to potentiation. Even if the plasticity window has a width of 20 milliseconds, associations on the timescale of seconds can be learned. We illustrate prospective coding with three examples: learning to predict a time-varying input, learning to predict the next stimulus in a delayed paired-associate task, and learning with a recurrent network to reproduce a temporally compressed version of a sequence. We discuss the potential role of the learning mechanism in classical trace conditioning. In the special case that the signal to be predicted encodes reward, the neuron learns to predict the discounted future reward, and learning is closely related to the temporal difference learning algorithm TD(λ).
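To make the notion of an expected future discounted firing rate concrete, here is a minimal numerical sketch (not the authors' code): for a toy rate trace with a brief elevation, it computes the exponentially discounted future rate that, according to the abstract, the neuron's current rate should come to match after learning. The discount time constant tau, the example trace, and the kernel normalization are illustrative assumptions.

    # Minimal sketch of the learning target "expected future discounted firing rate".
    # tau (discount time constant) and the toy rate trace are assumptions, not paper values.
    import numpy as np

    dt = 0.001                       # time step (s)
    T = 4.0                          # total duration (s)
    t = np.arange(0.0, T, dt)
    tau = 1.0                        # assumed discount time constant (s)

    # Target firing rate: baseline 1 Hz with a brief elevation between 2.0 s and 2.5 s.
    rate = np.where((t >= 2.0) & (t < 2.5), 20.0, 1.0)

    # Discounted future rate: r_pro(t) = (1/tau) * integral_0^inf exp(-s/tau) * r(t+s) ds,
    # computed here by exponentially filtering the time-reversed trace.
    kernel = np.exp(-t / tau) / tau
    prospective = np.convolve(rate[::-1], kernel)[:len(t)][::-1] * dt

    # Before the elevation the prospective target is already raised: the learned
    # firing rate ramps up in anticipation of the event at 2.0 s.
    print(f"rate at 1.5 s: {rate[int(1.5/dt)]:.1f} Hz, "
          f"prospective target at 1.5 s: {prospective[int(1.5/dt)]:.1f} Hz")

The point of the sketch is only the shape of the target: a rate elevation at 2.0 s yields a prediction signal that ramps up beforehand, which is the prospective profile the plasticity rule is said to learn.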



pcbi.1005003.g001: A learning mechanism that leads to prospective coding. (A) The signal to be predicted (target input) originates from the green neuron and depolarizes the black neuron (gray trace) such that it spikes (black lines). The synaptic connection between a blue neuron and the black neuron is strengthened if pre- and postsynaptic spikes lie within the red plasticity window of potentiation, which is slightly broader than a typical postsynaptic potential. (B) Due to the strengthened connection (red circle), the black neuron spikes even before the target input arrives. Since earlier presynaptic spikes now also lie within the potentiating plasticity window, the activity of the black neuron is anticipated ever earlier, giving rise to prospective coding. (C) A spiking neuron receives input through plastic dendritic synapses with strengths w_i and an input I through static (i.e. non-plastic) synapses. The somatic membrane potential U is well approximated by the sum of the attenuated dendritic input and the attenuated somatic input U*.
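As a reading aid for panel C, here is a minimal sketch of the assumed two-compartment bookkeeping: the somatic potential U written as an attenuated dendritic contribution from the plastic synapses w_i plus an attenuated contribution U* of the static somatic input I. The attenuation factors and the instantaneous-PSP simplification are assumptions for illustration, not the paper's equations.

    # Sketch of the approximation in panel C (assumed form):
    # U ≈ alpha_D * V_w + alpha_S * U*, where V_w is the dendritic potential produced
    # by the plastic synapses and U* is the potential the static somatic input I would
    # produce on its own. alpha_D and alpha_S are assumed attenuation factors.
    import numpy as np

    rng = np.random.default_rng(0)
    n_syn = 50
    w = rng.normal(0.0, 0.2, n_syn)        # plastic dendritic weights w_i
    psp = rng.random(n_syn)                # instantaneous PSP drive per synapse (toy values)

    V_w = w @ psp                          # dendritic potential from plastic synapses
    U_star = 2.0                           # somatic potential due to the static input I alone

    alpha_D, alpha_S = 0.6, 0.4            # assumed attenuation factors
    U = alpha_D * V_w + alpha_S * U_star   # attenuated dendritic + attenuated somatic input
    print(f"V_w = {V_w:.3f}, U* = {U_star:.3f}, U = {U:.3f}")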

Mentions: Before defining the learning rule in detail, we provide an intuitive description. In a neuron with both static synapses (green connection in Fig 1A and 1B) and plastic synapses (blue in Fig 1A and 1B), we propose a learning mechanism for the plastic synapses that relies on two basic ingredients: spike-timing-dependent synaptic potentiation and balancing synaptic depression. The synaptic connections are strengthened if a presynaptic spike is followed by a postsynaptic spike within a ‘plasticity window of potentiation’ (red in Fig 1A and 1B). The size of this plasticity window turns out to have a strong influence on the timing of spikes that are caused by strengthened dendritic synapses. If the plasticity window has the same shape as a postsynaptic potential (PSP), learned spikes are fired at roughly the same time as target spikes [16–18]. But if the plasticity window is slightly longer than the postsynaptic potential, learned spikes tend to be fired earlier than target spikes. More precisely, because of the slightly wider plasticity window of potentiation, presynaptic spikes may elicit postsynaptic spikes through newly strengthened connections (thick blue arrow in Fig 1B) even before the onset of the input through static synapses. These earlier postsynaptic spikes in turn allow the strengthening of inputs from presynaptic neurons that spike even earlier. We refer to this as the bootstrapping effect of predicting one's own predictions. As a result, the postsynaptic activity induced by the input through static synapses will be preceded by an activity ramp produced by appropriately tuned dendritic input. The neuron learns a prospective code that predicts an upcoming event.
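The sketch below (an assumed pair-based form, not the authors' exact rule) illustrates the two ingredients named above: potentiation when a presynaptic spike is followed by a postsynaptic spike within a window slightly broader than a typical PSP, and a balancing depression that keeps weights from growing under uncorrelated activity.

    # Pair-based sketch of the two ingredients described in the text (assumed form).
    # tau_plus (potentiation window, slightly broader than a ~20 ms PSP), a_plus and
    # a_minus are illustrative values, not parameters from the paper.
    import numpy as np

    dt = 0.001          # time step (s)
    tau_plus = 0.030    # potentiation window, assumed slightly broader than a typical PSP
    a_plus = 0.01       # potentiation amplitude (assumption)
    a_minus = 0.003     # balancing depression amplitude (assumption)

    def stdp_update(pre_spikes, post_spikes, w):
        """Update one weight from binary spike trains (1 = spike in that time bin)."""
        pre_trace = 0.0                     # exponential trace left by presynaptic spikes
        for pre, post in zip(pre_spikes, post_spikes):
            pre_trace += pre
            # Potentiate when a postsynaptic spike falls inside the window opened by a
            # recent presynaptic spike; depress in proportion to presynaptic activity
            # so that uncorrelated input does not grow without bound.
            w += a_plus * pre_trace * post - a_minus * pre
            pre_trace *= np.exp(-dt / tau_plus)
        return w

    # Presynaptic spike at 50 ms followed by a postsynaptic spike at 60 ms:
    # pre-before-post within the window, so the net weight change is positive.
    n = int(0.2 / dt)
    pre = np.zeros(n);  pre[int(0.050 / dt)] = 1
    post = np.zeros(n); post[int(0.060 / dt)] = 1
    print("net weight change:", stdp_update(pre, post, 0.0))

Because the potentiation window outlasts the PSP, a presynaptic spike that arrives shortly before a learned postsynaptic spike is itself potentiated on later trials, which is the bootstrapping effect described above.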

