Probabilistic inference in discrete spaces can be implemented into networks of LIF neurons.

Probst D, Petrovici MA, Bytschok I, Bill J, Pecevski D, Schemmel J, Meier K - Front Comput Neurosci (2015)

Bottom Line: Recently, abstract models of Bayesian computation in neural circuits have been proposed, but they lack a mechanistic interpretation at the single-cell level. As the local computations performed by the network strongly depend on the interaction between neurons, we compare several types of couplings mediated by either single synapses or interneuron chains. Due to its robustness to substrate imperfections such as parameter noise and background noise correlations, our model is particularly interesting for implementation on novel, neuro-inspired computing architectures, which can thereby serve as a fast, low-power substrate for solving real-world inference problems.


Affiliation: Kirchhoff Institute for Physics, University of Heidelberg, Heidelberg, Germany.

ABSTRACT
The means by which cortical neural networks are able to efficiently solve inference problems remains an open question in computational neuroscience. Recently, abstract models of Bayesian computation in neural circuits have been proposed, but they lack a mechanistic interpretation at the single-cell level. In this article, we describe a complete theoretical framework for building networks of leaky integrate-and-fire neurons that can sample from arbitrary probability distributions over binary random variables. We test our framework for a model inference task based on a psychophysical phenomenon (the Knill-Kersten optical illusion) and further assess its performance when applied to randomly generated distributions. As the local computations performed by the network strongly depend on the interaction between neurons, we compare several types of couplings mediated by either single synapses or interneuron chains. Due to its robustness to substrate imperfections such as parameter noise and background noise correlations, our model is particularly interesting for implementation on novel, neuro-inspired computing architectures, which can thereby serve as a fast, low-power substrate for solving real-world inference problems.


Figure 5: Sampling from random distributions over 5 RVs with different networks: LIF (green), mLIF (gray), abstract model with alpha-shaped PSPs (blue) and abstract model with rectangular PSPs (red). (A) Distributions for different values of η from which conditionals are drawn. (B) $D_{\mathrm{KL}}^{\mathrm{norm}}$ between the equilibrium and target distributions as a function of η. Error bars denote the standard error over 30 different random graphs drawn from the same distribution. (C) Evolution of $D_{\mathrm{KL}}^{\mathrm{norm}}$ over time for a sample network drawn from the distribution with η = 1. Error bars denote the standard error over 10 trials.
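Panels (B) and (C) quantify convergence via $D_{\mathrm{KL}}^{\mathrm{norm}}$. The normalization used by the authors is not restated in this excerpt, so the following Python sketch only illustrates the underlying quantity: an estimate of the plain Kullback-Leibler divergence between the sampled and the target distribution over binary network states. The function name and the eps floor are our choices, not taken from the paper's code.

```python
# Illustrative estimator for D_KL(p_sampled || p_target) over binary states.
# The paper's normalized variant D_KL^norm is not reconstructed here.
import itertools

import numpy as np


def kl_divergence(samples, p_target, K, eps=1e-12):
    """KL divergence of the empirical distribution of `samples` from p_target.

    samples:  iterable of length-K binary tuples (network states over time).
    p_target: dict mapping each binary tuple to its target probability;
              assumed strictly positive, as for Beta-drawn conditionals.
    """
    states = list(itertools.product((0, 1), repeat=K))
    counts = {s: eps for s in states}          # small floor avoids log(0)
    for s in samples:
        counts[tuple(s)] += 1.0
    total = sum(counts.values())
    return float(sum(
        (counts[s] / total) * np.log((counts[s] / total) / p_target[s])
        for s in states
    ))
```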

Mentions: In order to study the general applicability of the proposed approach, we quantified the convergence behavior of LIF networks generated from random Bayesian graphs. Here, we used a method proposed in Ide and Cozman (2002) to generate random Bayesian networks with K binary RVs and random conditional probabilities. The algorithm starts with a chain graph $Z_1 \rightarrow Z_2 \rightarrow \dots \rightarrow Z_K$ and runs for N iterations. In each iteration step, a random RV pair $(Z_i, Z_j)$ with $i > j$ is created. If the connection $Z_i \rightarrow Z_j$ does not exist, it is added to the graph; otherwise, it is removed. Two constraints apply: no node may have more than 7 connections to other nodes, and the procedure may not disconnect the graph. For every possible assignment of the parents $\mathrm{pa}_i$, the conditional probabilities $p_i^{\mathrm{pa}_i} := p(Z_i = 1 \mid \mathrm{pa}_i)$ are drawn from a second-order Dirichlet distribution

(28) $D(p_i^{\mathrm{pa}_i}; \eta_1, \eta_2) = \frac{1}{B(\eta_1, \eta_2)} \left(p_i^{\mathrm{pa}_i}\right)^{\eta_1 - 1} \left(1 - p_i^{\mathrm{pa}_i}\right)^{\eta_2 - 1},$

with the multinomial Beta function

(29) $B(\eta_1, \eta_2) = \frac{\prod_{i=1}^{2} \Gamma(\eta_i)}{\Gamma\left(\sum_{i=1}^{2} \eta_i\right)},$

where $\Gamma(\cdot)$ denotes the gamma function. We chose the parameters $\eta_1 = \eta_2 =: \eta$ in order to obtain a symmetrical distribution. Figure 5A shows three examples of a symmetrical two-dimensional Dirichlet distribution. A larger η favors conditional probabilities closer to 0.5 than to the boundaries 0 and 1.
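For concreteness, here is a minimal Python sketch of the graph-generation procedure described above. It is an illustration under stated assumptions, not the authors' code: the function name random_bayesian_network is ours, each toggled edge is oriented from the lower- to the higher-indexed node so that the chain's topological order (and hence acyclicity) is preserved, and the second-order Dirichlet draw is implemented as the equivalent Beta(η, η) distribution.

```python
# Sketch of random Bayesian network generation after Ide and Cozman (2002),
# as described above. Names and edge orientation are our assumptions.
import itertools

import numpy as np


def random_bayesian_network(K, N, eta, max_degree=7, seed=None):
    """Random DAG over K binary RVs with Beta(eta, eta) conditionals."""
    rng = np.random.default_rng(seed)
    # Start from the chain graph Z_1 -> Z_2 -> ... -> Z_K (0-indexed here).
    parents = {i: set() for i in range(K)}
    for i in range(1, K):
        parents[i].add(i - 1)

    def degree(n):
        # Total connections of node n: in-edges plus out-edges.
        return len(parents[n]) + sum(n in parents[m] for m in range(K))

    def connected():
        # Weak connectivity of the underlying undirected graph (DFS from node 0).
        adj = {n: set(parents[n]) for n in range(K)}
        for n in range(K):
            for p in parents[n]:
                adj[p].add(n)
        seen, stack = set(), [0]
        while stack:
            n = stack.pop()
            if n not in seen:
                seen.add(n)
                stack.extend(adj[n] - seen)
        return len(seen) == K

    for _ in range(N):
        # Draw a random pair with i > j; the candidate edge Z_j -> Z_i points
        # from the lower to the higher index, which keeps the graph acyclic.
        j, i = sorted(rng.choice(K, size=2, replace=False))
        if j in parents[i]:
            parents[i].discard(j)              # toggle the edge off ...
            if not connected():
                parents[i].add(j)              # ... unless the graph disconnects
        elif degree(i) < max_degree and degree(j) < max_degree:
            parents[i].add(j)                  # toggle on within the degree bound

    # For every assignment of pa_i, draw p(Z_i = 1 | pa_i) ~ Beta(eta, eta),
    # i.e., the symmetric second-order Dirichlet of Equations (28) and (29).
    conditionals = {
        (i, a): rng.beta(eta, eta)
        for i in range(K)
        for a in itertools.product((0, 1), repeat=len(parents[i]))
    }
    return parents, conditionals
```

With K = 5 this matches the setting of Figure 5; the number of iterations N is not specified in this excerpt. As panel (A) illustrates, a larger eta concentrates the drawn conditionals around 0.5.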

