Probabilistic inference in discrete spaces can be implemented into networks of LIF neurons.

Probst D, Petrovici MA, Bytschok I, Bill J, Pecevski D, Schemmel J, Meier K - Front Comput Neurosci (2015)

Bottom Line: Recently, abstract models of Bayesian computation in neural circuits have been proposed, but they lack a mechanistic interpretation at the single-cell level. As the local computations performed by the network strongly depend on the interaction between neurons, we compare several types of couplings mediated by either single synapses or interneuron chains. Due to its robustness to substrate imperfections such as parameter noise and background noise correlations, our model is particularly interesting for implementation on novel, neuro-inspired computing architectures, which can thereby serve as a fast, low-power substrate for solving real-world inference problems.


Affiliation: Kirchhoff Institute for Physics, University of Heidelberg, Heidelberg, Germany.

ABSTRACT
The means by which cortical neural networks are able to efficiently solve inference problems remains an open question in computational neuroscience. Recently, abstract models of Bayesian computation in neural circuits have been proposed, but they lack a mechanistic interpretation at the single-cell level. In this article, we describe a complete theoretical framework for building networks of leaky integrate-and-fire neurons that can sample from arbitrary probability distributions over binary random variables. We test our framework for a model inference task based on a psychophysical phenomenon (the Knill-Kersten optical illusion) and further assess its performance when applied to randomly generated distributions. As the local computations performed by the network strongly depend on the interaction between neurons, we compare several types of couplings mediated by either single synapses or interneuron chains. Due to its robustness to substrate imperfections such as parameter noise and background noise correlations, our model is particularly interesting for implementation on novel, neuro-inspired computing architectures, which can thereby serve as a fast, low-power substrate for solving real-world inference problems.



Figure 2: Neural sampling: abstract model vs. implementation with LIF neurons. (A) Illustration of the Markov chain over the refractory variable ζk in the abstract model. Figure taken from Buesing et al. (2011). (B) Example dynamics of all variables associated with an abstract model neuron. (C) Example dynamics of the equivalent variables associated with an LIF neuron. (D) Free membrane potential distribution and activation function of an LIF neuron: theoretical prediction vs. experimental results. The blue crosses are the mean values of 5 simulations of 200 s duration; the error bars are smaller than the symbols. The parameter values used for the LIF neuron are listed in Table 1. (E) Performance of sampling with LIF neurons from a randomly chosen Boltzmann distribution over 5 binary RVs, with both weights and biases drawn from a normal distribution (μ = 0, σ = 0.5). The green bars show the results of 10 simulations of 100 s duration; the error bars denote the standard error.
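For reference, panel (E) compares sampled state frequencies against a target Boltzmann distribution whose weights and biases are drawn from a normal distribution with μ = 0, σ = 0.5. The following Python sketch shows one way to construct such a random target and enumerate its exact probabilities over the 2^5 states; the random seed and the symmetrization of the weight matrix are illustrative assumptions, not details taken from the paper.

```python
import itertools

import numpy as np

# Sketch: a random Boltzmann distribution over K = 5 binary RVs, with weights
# and biases drawn from a normal distribution (mu = 0, sigma = 0.5), as in
# Figure 2E. The seed and the symmetrization of W are illustrative choices.
rng = np.random.default_rng(42)
K = 5

W = rng.normal(0.0, 0.5, size=(K, K))
W = (W + W.T) / 2.0        # Boltzmann weights must be symmetric
np.fill_diagonal(W, 0.0)   # no self-interaction

b = rng.normal(0.0, 0.5, size=K)

def log_unnorm(z):
    """Log of the unnormalized probability: (1/2) z^T W z + b^T z."""
    z = np.asarray(z, dtype=float)
    return 0.5 * z @ W @ z + b @ z

# Enumerate all 2^K states to obtain the exact target distribution, against
# which sampled state frequencies (green bars in panel E) can be compared.
states = list(itertools.product([0, 1], repeat=K))
logp = np.array([log_unnorm(z) for z in states])
p_target = np.exp(logp - logp.max())
p_target /= p_target.sum()

for z, p in zip(states, p_target):
    print("".join(map(str, z)), f"{p:.4f}")
```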

Mentions: In this model, the spike response of a neuron is associated with the state zk of an RV Zk, and a spike is interpreted as a state switch from 0 to 1. Each spike is followed by a refractory period of duration τ, during which the neuron remains in the state Zk = 1. The so-called neural computability condition (NCC) provides a sufficient condition for correct sampling, wherein a neuron's “knowledge” about the state of the rest of the network, and therefore its probability of spiking, is encoded in its membrane potential:

$$v_k(t) = \log \frac{p(Z_k(t) = 1 \mid Z_{\setminus k}(t))}{p(Z_k(t) = 0 \mid Z_{\setminus k}(t))}, \tag{8}$$

where Z\k(t) denotes the vector of all other variables Zi with i ≠ k. Solving for the probability of Zk(t) = 1 yields a logistic neural activation function (Figure 2D), reminiscent of the update rules in Gibbs sampling:

$$p(Z_k(t) = 1 \mid z_{\setminus k}(t)) = \sigma(v_k(t)) := \frac{1}{1 + \exp(-v_k(t))}. \tag{9}$$
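To make these dynamics concrete, here is a minimal Python sketch of the abstract sampling model, in the spirit of Buesing et al. (2011). It assumes a Boltzmann target distribution, for which the NCC reduces to vk = bk + Σj Wkj zj, and it omits the refractory hazard correction that the exact abstract model requires, so it illustrates the roles of Equations (8) and (9) rather than reproducing the exact dynamics. All names and parameter values are illustrative, not the authors' implementation.

```python
import numpy as np

def neural_sampling(W, b, tau=20, n_steps=100_000, rng=None):
    """Simplified discrete-time neural sampling (after Buesing et al., 2011).

    Each neuron k carries a refractory counter zeta_k: a spike sets
    zeta_k = tau, and the associated RV stays at z_k = 1 for tau time steps.
    For a Boltzmann target, the NCC (Eq. 8) gives the membrane potential
    v_k = b_k + sum_j W_kj z_j, and a non-refractory neuron spikes with the
    logistic probability sigma(v_k) of Eq. (9). The exact abstract model
    additionally corrects for the refractory hazard; that correction is
    omitted here for brevity.
    """
    rng = rng or np.random.default_rng()
    K = len(b)
    zeta = np.zeros(K, dtype=int)              # refractory counters
    samples = np.zeros((n_steps, K), dtype=int)
    for t in range(n_steps):
        z = (zeta > 0).astype(float)           # network state, z_k in {0, 1}
        for k in range(K):
            if zeta[k] > 0:
                zeta[k] -= 1                   # count down the refractory period
            else:
                v_k = b[k] + W[k] @ z          # membrane potential from the NCC
                if rng.random() < 1.0 / (1.0 + np.exp(-v_k)):
                    zeta[k] = tau              # spike: state switch 0 -> 1
        samples[t] = zeta > 0
    return samples

# Usage sketch: empirical marginals of the sampled states can be compared with
# the exact marginals of the target distribution, e.g.:
#   samples = neural_sampling(W, b)
#   print(samples.mean(axis=0))
```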

