Reconstructing stimuli from the spike times of leaky integrate and fire neurons.

Gerwinn S, Macke JH, Bethge M - Front Neurosci (2011)

Bottom Line: Reconstructing stimuli from the spike trains of neurons is an important approach for understanding the neural code. One of the difficulties associated with this task is that signals that vary continuously in time are encoded into sequences of discrete events or spikes. For the special case of spike trains generated by leaky integrate and fire neurons, noise can be introduced by allowing variations in the threshold every time a spike is released.

Affiliation: Werner Reichardt Center for Integrative Neuroscience, University of Tübingen, Tübingen, Germany.

ABSTRACT
Reconstructing stimuli from the spike trains of neurons is an important approach for understanding the neural code. One of the difficulties associated with this task is that signals that vary continuously in time are encoded into sequences of discrete events or spikes. An important problem is to determine how much information about the continuously varying stimulus can be extracted from the time-points at which spikes were observed, especially if these time-points are subject to some sort of randomness. For the special case of spike trains generated by leaky integrate and fire neurons, noise can be introduced by allowing variations in the threshold every time a spike is released. A simple decoding algorithm previously derived for the noiseless case can be extended to the stochastic case, but turns out to be biased. Here, we review a solution to this problem by presenting a simple yet efficient algorithm that greatly reduces the bias and therefore leads to better decoding performance in the stochastic case.

Figure 2: Illustration of the encoding process. Each stimulus is constructed by first drawing random weights c from a Gaussian distribution, and then forming a weighted superposition of basis functions with these weights (top row). This generates a new smooth stimulus on each trial. In our neuron model, the membrane potential at any time is a “leaky” integral of the stimulus. Thus, the membrane potential can be calculated by convolving the stimulus with an exponential filter (first box). Whenever the resulting integrated signal (second box) reaches a predefined threshold, the integration is reset (third box). Alternatively, one can apply the filtering, integrating, and resetting at the given spike times to each basis function separately, and then form a weighted summation of these signals. Both constructions lead to the same membrane potential, and thus the same spike trains.
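
To make the encoding concrete, the following Python sketch simulates the process shown in Figure 2. All numerical choices (Gaussian-bump basis functions, time constant, threshold) are illustrative assumptions, not values from the paper; the optional threshold jitter sigma implements the noisy-threshold variant described in the abstract.

import numpy as np

rng = np.random.default_rng(0)

# Time grid for one trial.
dt = 1e-3                    # step size (s)
t = np.arange(0.0, 1.0, dt)

# Basis functions: Gaussian bumps tiling the interval (an assumption;
# the paper does not specify which smooth basis it uses).
centers = np.linspace(0.0, 1.0, 10)
basis = np.exp(-0.5 * ((t[:, None] - centers[None, :]) / 0.05) ** 2)

# Top row of Figure 2: Gaussian weights c, stimulus as their
# weighted superposition of basis functions.
c = rng.normal(size=centers.size)
stimulus = basis @ c

tau = 0.02     # membrane time constant (s), assumed
theta = 0.5    # firing threshold, assumed
sigma = 0.0    # set > 0 for the noisy-threshold (stochastic) variant

# Leaky integration with threshold-and-reset (the three boxes).
V, next_theta, spike_times = 0.0, theta, []
for i, s in enumerate(stimulus):
    V += dt * (-V + s) / tau          # Euler step of dV/dt = (-V + s)/tau
    if V >= next_theta:               # threshold crossing
        spike_times.append(t[i])
        V = 0.0                       # reset
        next_theta = theta + sigma * rng.normal()  # redrawn per spike

print(len(spike_times), "spikes")

The loop is a discrete-time Euler approximation; the convolution with an exponential filter described in the caption is its continuous-time equivalent.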

Mentions: Given an encoding model, we can aim to “invert” it and thus perform optimal decoding. We assume that the encoding model is known and, concretely, that it is a leaky integrate and fire neuron model (Tuckwell, 1988; Burkitt, 2006; see Box). This encoding model is illustrated on the right-hand side of Figure 2.
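
The alternative construction in the caption of Figure 2 shows why the noiseless decode is a linear problem: resetting each basis function's leaky integral at the observed spike times and summing with the weights c reproduces the membrane potential, which must equal the threshold at every spike. The sketch below solves those constraints by least squares, reusing the variables from the encoding sketch above; it corresponds to the simple noiseless decoder the abstract mentions, not the bias-reduced algorithm the paper develops for the stochastic case.

def decode_weights(spike_times, basis, t, dt, tau, theta):
    # Integrate each basis function with the same leaky dynamics,
    # resetting at the *given* spike times (no thresholding here).
    spike_idx = np.searchsorted(t, spike_times)
    A = np.zeros((len(spike_idx), basis.shape[1]))
    for j in range(basis.shape[1]):
        V, k = 0.0, 0
        for i in range(len(t)):
            V += dt * (-V + basis[i, j]) / tau
            if k < len(spike_idx) and i == spike_idx[k]:
                A[k, j] = V        # per-basis potential just before reset
                V = 0.0
                k += 1
    # By linearity, the summed potential A @ c equals the threshold at
    # each spike time; solve for c in the least-squares sense.
    c_hat, *_ = np.linalg.lstsq(A, theta * np.ones(len(spike_idx)),
                                rcond=None)
    return c_hat

c_hat = decode_weights(spike_times, basis, t, dt, tau, theta)
reconstruction = basis @ c_hat  # estimate of the original stimulus

With fewer spikes than basis functions the system is underdetermined and lstsq returns the minimum-norm solution; in principle, the Gaussian distribution from which c is drawn could serve as a regularizing prior in that regime.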

