Probabilistic inference in discrete spaces can be implemented into networks of LIF neurons.

Probst D, Petrovici MA, Bytschok I, Bill J, Pecevski D, Schemmel J, Meier K - Front Comput Neurosci (2015)

Bottom Line: Recently, abstract models of Bayesian computation in neural circuits have been proposed, but they lack a mechanistic interpretation at the single-cell level. As the local computations performed by the network strongly depend on the interaction between neurons, we compare several types of couplings mediated by either single synapses or interneuron chains. Due to its robustness to substrate imperfections such as parameter noise and background noise correlations, our model is particularly interesting for implementation on novel, neuro-inspired computing architectures, which can thereby serve as a fast, low-power substrate for solving real-world inference problems.


Affiliation: Kirchhoff Institute for Physics, University of Heidelberg, Heidelberg, Germany.

ABSTRACT
The means by which cortical neural networks are able to efficiently solve inference problems remains an open question in computational neuroscience. Recently, abstract models of Bayesian computation in neural circuits have been proposed, but they lack a mechanistic interpretation at the single-cell level. In this article, we describe a complete theoretical framework for building networks of leaky integrate-and-fire neurons that can sample from arbitrary probability distributions over binary random variables. We test our framework for a model inference task based on a psychophysical phenomenon (the Knill-Kersten optical illusion) and further assess its performance when applied to randomly generated distributions. As the local computations performed by the network strongly depend on the interaction between neurons, we compare several types of couplings mediated by either single synapses or interneuron chains. Due to its robustness to substrate imperfections such as parameter noise and background noise correlations, our model is particularly interesting for implementation on novel, neuro-inspired computing architectures, which can thereby serve as a fast, low-power substrate for solving real-world inference problems.
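For readers unfamiliar with the abstract model behind this framework, the sketch below shows conventional Gibbs sampling from a Boltzmann distribution over binary random variables. It is only a software reference point, not the paper's LIF implementation, and the weights W and biases b are illustrative placeholders.

```python
import numpy as np

# Minimal Gibbs sampler over binary RVs for a Boltzmann distribution
# p(z) ~ exp(0.5 * z^T W z + b^T z).  This is a conventional software
# counterpart of the distributions the LIF networks are built to sample
# from; all parameter values here are illustrative, not the paper's.

def gibbs_sample(W, b, n_steps, seed=None):
    rng = np.random.default_rng(seed)
    K = len(b)
    z = rng.integers(0, 2, size=K)                 # random initial binary state
    samples = np.empty((n_steps, K), dtype=int)
    for t in range(n_steps):
        for k in range(K):                         # update each unit in turn
            u_k = W[k] @ z - W[k, k] * z[k] + b[k] # local input without self-coupling
            z[k] = rng.random() < 1.0 / (1.0 + np.exp(-u_k))  # logistic activation
        samples[t] = z
    return samples

# Example: two binary RVs with a positive coupling and negative biases
W = np.array([[0.0, 1.2], [1.2, 0.0]])
b = np.array([-0.5, -0.5])
samples = gibbs_sample(W, b, n_steps=5000, seed=0)
print("empirical p(z1=1, z2=1):", np.mean(samples.sum(axis=1) == 2))
```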





Figure 1: Formulation of an example inference problem as a Bayesian network and translation to a Boltzmann machine. (A) Knill-Kersten illusion from Knill and Kersten (1991). Although the four objects are identically shaded, the left cube is perceived as being darker than the right one. This illusion depends on the perceived shape of the objects and does not occur for, e.g., cylinders. (B) The setup can be translated to a Bayesian network with four binary RVs. The (latent) variables Z1 and Z2 encode the (unknown) reflectance profile and 3D shape of the objects, respectively. Conditioned on these variables, the (observed) shading and 2D contour are encoded by Z3 and Z4, respectively. Figure modified from Pecevski et al. (2011). (C) Representation of the Bayesian network from (B) as a Boltzmann machine. Factors of order higher than 2 are replaced by auxiliary variables as described in the main text. The individual connections with weights M_exc, M_inh → ∞ between each principal and auxiliary variable have been omitted for clarity.
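As an illustration of the structure in panel (B), the following sketch encodes a four-variable Bayesian network and computes the posterior over the latent reflectance by brute-force enumeration. The assumed dependency structure (Z3 conditioned on Z1 and Z2, Z4 conditioned on Z2 only) and all conditional probability values are placeholders chosen for illustration, not the paper's parameters.

```python
import itertools
import numpy as np

# Hypothetical encoding of the Bayesian network sketched in Figure 1B.
# Structure assumed here: Z1, Z2 -> Z3 and Z2 -> Z4; every probability
# value below is an illustrative placeholder, not taken from the paper.

p_z1 = {0: 0.5, 1: 0.5}                      # reflectance profile (latent)
p_z2 = {0: 0.5, 1: 0.5}                      # 3D shape (latent)
p_z3 = {(z1, z2): 0.9 if z1 == z2 else 0.1   # p(Z3 = 1 | z1, z2): shading
        for z1 in (0, 1) for z2 in (0, 1)}
p_z4 = {0: 0.2, 1: 0.8}                      # p(Z4 = 1 | z2): 2D contour

def joint(z1, z2, z3, z4):
    """p(z1, z2, z3, z4) as the product of the network's factors."""
    p3 = p_z3[(z1, z2)] if z3 == 1 else 1.0 - p_z3[(z1, z2)]
    p4 = p_z4[z2] if z4 == 1 else 1.0 - p_z4[z2]
    return p_z1[z1] * p_z2[z2] * p3 * p4

# Exact posterior over the reflectance Z1 given observed shading and contour,
# obtained by enumerating the 2^4 joint states and summing out Z2.
evidence = {"z3": 1, "z4": 1}
post = np.zeros(2)
for z1, z2 in itertools.product((0, 1), repeat=2):
    post[z1] += joint(z1, z2, evidence["z3"], evidence["z4"])
post /= post.sum()
print("p(Z1 | Z3=1, Z4=1) =", post)
```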

Mentions: Such a Bayesian network can be transformed into a second-order Markov random field (i.e., an MRF with a maximum clique size of 2). Here, we follow the recipe described in Pecevski et al. (2011). First- and second-order factors are easily replaceable by potential functions $\Psi_k(Z_k)$ and $\Psi_k(Z_{k_1}, Z_{k_2})$, respectively. For each $n$th-order factor $\Phi_k$ with $n > 2$ principal RVs, we introduce $2^n$ auxiliary binary RVs $X_k^{z_k}$ with $z_k \in \mathcal{Z}_k$, where $\mathcal{Z}_k$ is the set of all possible assignments of the binary vector $Z_k$ (Figure 1C). Each of these RVs "encodes" the probability of a possible state $z_k$ within the factor $\Phi_k$ by introducing the first-order potential functions $\Psi_k^{z_k}(X_k^{z_k} = 1) = \Phi_k(Z_k = z_k)$. The factor $\Phi_k(Z_k)$ is then replaced by a product over potential functions

$$\Phi_k(Z_k) = \prod_{z_k} \Psi_k^{z_k}(X_k^{z_k}) \prod_{i=1}^{n} \chi_k^{z_k i}(Z_{k_i}, X_k^{z_k}), \qquad (2)$$

where an auxiliary RV $X_k^{z_k}$ is active if and only if the principal RVs $Z_k$ are active in the configuration $z_k$. Formally, this corresponds to the assignment $\chi_k^{z_k i}(Z_{k_i}, X_k^{z_k}) = 1 - X_k^{z_k}\,(1 - \delta_{Z_{k_i}, z_{k_i}})$. In the graphical representation, this amounts to removing all directed edges within the factors and replacing them by undirected edges from the principal to the auxiliary RVs. It can then be verified (Pecevski et al., 2011) that the target probability distribution can be represented as a marginal over the auxiliary variables.
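A minimal numerical check of Equation (2) is sketched below for a single hypothetical third-order factor: when each auxiliary RV $X_k^{z_k}$ is set consistently with the principal state (active exactly when $Z_k = z_k$, the regime that the strong couplings M_exc, M_inh in Figure 1C are meant to enforce), the product of the $\Psi$ and $\chi$ potentials reproduces $\Phi_k(Z_k)$. The factor values are random placeholders.

```python
import itertools
import numpy as np

# Sketch of the auxiliary-variable construction in Equation (2) for one
# hypothetical third-order factor Phi_k over (Z_k1, Z_k2, Z_k3).  The factor
# values are random placeholders.  We check that, with the auxiliary RVs set
# consistently (X_k^{z_k} = 1 exactly for the current principal state), the
# product of Psi and chi potentials equals Phi_k(z).

n = 3
rng = np.random.default_rng(0)
assignments = list(itertools.product((0, 1), repeat=n))   # all z_k in Z_k
phi = {z: rng.uniform(0.1, 1.0) for z in assignments}     # placeholder Phi_k values

def psi(zk, x):
    """First-order potential: Psi_k^{z_k}(X=1) = Phi_k(z_k); inactive auxiliaries contribute 1."""
    return phi[zk] if x == 1 else 1.0

def chi(z_i, zk_i, x):
    """Coupling factor chi = 1 - X * (1 - delta(Z_ki, z_ki)): zero iff X is active but Z disagrees."""
    return 1.0 - x * (1.0 - float(z_i == zk_i))

for z in assignments:
    # consistent auxiliary configuration: X_k^{z_k} = 1 iff z_k equals the current z
    x = {zk: int(zk == z) for zk in assignments}
    rhs = 1.0
    for zk in assignments:
        rhs *= psi(zk, x[zk])
        for i in range(n):
            rhs *= chi(z[i], zk[i], x[zk])
    assert np.isclose(rhs, phi[z])
    print(z, rhs, phi[z])
```

Note that the $\chi$ factors only suppress configurations in which an auxiliary variable is active while the principal state disagrees with it; the converse constraint is what the strong couplings between principal and auxiliary neurons in the Boltzmann machine are presumably there to provide.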

