Probabilistic inference in discrete spaces can be implemented into networks of LIF neurons.

Probst D, Petrovici MA, Bytschok I, Bill J, Pecevski D, Schemmel J, Meier K - Front Comput Neurosci (2015)

Bottom Line: Recently, abstract models of Bayesian computation in neural circuits have been proposed, but they lack a mechanistic interpretation at the single-cell level. As the local computations performed by the network strongly depend on the interaction between neurons, we compare several types of couplings mediated by either single synapses or interneuron chains. Due to its robustness to substrate imperfections such as parameter noise and background noise correlations, our model is particularly interesting for implementation on novel, neuro-inspired computing architectures, which can thereby serve as a fast, low-power substrate for solving real-world inference problems.


Affiliation: Kirchhoff Institute for Physics, University of Heidelberg, Heidelberg, Germany.

ABSTRACT
The means by which cortical neural networks are able to efficiently solve inference problems remains an open question in computational neuroscience. Recently, abstract models of Bayesian computation in neural circuits have been proposed, but they lack a mechanistic interpretation at the single-cell level. In this article, we describe a complete theoretical framework for building networks of leaky integrate-and-fire neurons that can sample from arbitrary probability distributions over binary random variables. We test our framework for a model inference task based on a psychophysical phenomenon (the Knill-Kersten optical illusion) and further assess its performance when applied to randomly generated distributions. As the local computations performed by the network strongly depend on the interaction between neurons, we compare several types of couplings mediated by either single synapses or interneuron chains. Due to its robustness to substrate imperfections such as parameter noise and background noise correlations, our model is particularly interesting for implementation on novel, neuro-inspired computing architectures, which can thereby serve as a fast, low-power substrate for solving real-world inference problems.
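In its simplest abstract form, sampling from a distribution over binary random variables of the kind described here corresponds to Gibbs sampling from a Boltzmann distribution, where each unit's "on" probability is a logistic function of its local input. The following is a minimal illustrative sketch of that abstract scheme, not the authors' LIF network; the weights and biases are hypothetical values chosen for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-variable Boltzmann distribution:
# p(z) ∝ exp(b·z + W12·z1·z2), with symmetric couplings W.
W = np.array([[0.0, 1.2],
              [1.2, 0.0]])
b = np.array([-0.5, 0.3])

def gibbs_sample(n_steps, W, b):
    """Sequential Gibbs sampling over binary variables z_k ∈ {0, 1}."""
    z = rng.integers(0, 2, size=len(b))
    samples = np.empty((n_steps, len(b)), dtype=int)
    for step in range(n_steps):
        for k in range(len(b)):
            u = b[k] + W[k] @ z              # local "membrane potential"
            p_on = 1.0 / (1.0 + np.exp(-u))  # logistic activation
            z[k] = rng.random() < p_on
        samples[step] = z
    return samples

samples = gibbs_sample(20000, W, b)
print(samples.mean(axis=0))  # empirical marginals p(z_k = 1)
```

Clamping a variable to an observed value and sampling only the remaining ones yields conditional marginals, which is the operation performed by the networks in the inference task below.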



Figure 3: Comparison of the different implementations of the Knill-Kersten graphical model (Figure 1). LIF (green), LIF with noised parameters (yellow), LIF with small cross-correlations between noise channels (orange), mLIF PSPs mediated by a superposition of LIF PSP kernels (gray), abstract model with alpha-shaped PSPs (blue), abstract model with rectangular PSPs (red), and the analytically calculated result (black). The error bars for the noised LIF networks represent the standard error over 10 trials with different noised parameters. All other error bars represent the standard error over 10 trials with identical parameters. (A) Comparison of the four PSP shapes used. (B,C) Inferred marginals of the hidden variables Z1 and Z2 conditioned on the observed (clamped) states of Z3 and Z4. In (B) (Z3, Z4) = (1, 1). In (C) (Z3, Z4) = (1, 0). The duration of a single simulation is 10 s. (D) Marginal probabilities of the hidden variables reacting to a change in the evidence Z4 = 1 → 0. The change in firing rates (top) appears slower than the one in the raster plot (bottom) due to the smearing effect of the box filter used to translate spike times into firing rates. (E,F) Convergence toward the unconstrained equilibrium distributions compared to the target distribution. In (E) the performance of the four different PSP shapes from (A) is shown. The abstract model with rectangular PSPs converges to a vanishing normalized Kullback-Leibler divergence, since it is guaranteed to sample from the correct distribution in the limit t → ∞. In (F) the performance of the three different LIF implementations is shown.
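The box filter mentioned in panel (D), which smears sharp rate changes over the filter width, can be sketched as follows; the window width and the regular test spike train are illustrative choices, not the values used in the paper:

```python
import numpy as np

def box_filter_rate(spike_times, t_grid, width):
    """Instantaneous firing rate: count spikes inside a sliding box of
    `width` seconds centered on each grid point, divided by the width."""
    spikes = np.asarray(spike_times)
    counts = np.array([np.sum(np.abs(spikes - t) <= width / 2) for t in t_grid])
    return counts / width

spikes = np.arange(100) * 0.1                            # regular 10 Hz train over 10 s
rates = box_filter_rate(spikes, np.linspace(1, 9, 5), width=1.0)
```

Any step in the underlying rate is spread over roughly one filter width in the output, which is why the rate traces in (D) react more slowly than the raster plot.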

Mentions: Figure 3A shows the shape of such an LIF PSP with parameter values taken from Table 1. The shape is practically exponential, due to the extremely short effective membrane time constant in the HCS. We will later compare the performance of the LIF implementation to two implementations of the abstract model from Section 2.2: neurons with theoretically optimal rectangular PSPs of duration τref, the temporal evolution of which is defined as

u(t) = \begin{cases} 1 & \text{if } 0 < t < \tau_{\mathrm{ref}} \\ 0 & \text{otherwise,} \end{cases} \tag{20}

and neurons with alpha-shaped PSPs with the temporal evolution

u(t) = \begin{cases} q_1 \left[ e \left( \frac{t}{\tau_\alpha} + t_1 \right) \exp\left( -\frac{t}{\tau_\alpha} - t_1 \right) - 0.5 \right] & \text{if } 0 < t < (t_2 - t_1)\,\tau_\alpha \\ 0 & \text{otherwise.} \end{cases} \tag{21}
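Reading Eqs. (20) and (21) as above, the two abstract PSP kernels can be sketched as follows; q1, t1, t2, and τα are left as free parameters, since the values used in the paper are not reproduced here:

```python
import numpy as np

def rect_psp(t, tau_ref):
    """Rectangular PSP kernel, Eq. (20): 1 during the refractory
    window 0 < t < tau_ref, 0 otherwise."""
    return np.where((t > 0) & (t < tau_ref), 1.0, 0.0)

def alpha_psp(t, tau_alpha, q1, t1, t2):
    """Alpha-shaped PSP kernel, Eq. (21), nonzero on
    0 < t < (t2 - t1) * tau_alpha."""
    s = t / tau_alpha
    val = q1 * (np.e * (s + t1) * np.exp(-s - t1) - 0.5)
    return np.where((t > 0) & (t < (t2 - t1) * tau_alpha), val, 0.0)
```

Evaluating these kernels on a time grid reproduces the rectangular and alpha-shaped curves compared against the LIF PSP in Figure 3A.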

