Probabilistic inference in discrete spaces can be implemented into networks of LIF neurons.

Probst D, Petrovici MA, Bytschok I, Bill J, Pecevski D, Schemmel J, Meier K - Front Comput Neurosci (2015)

Bottom Line: Recently, abstract models of Bayesian computation in neural circuits have been proposed, but they lack a mechanistic interpretation at the single-cell level. As the local computations performed by the network strongly depend on the interaction between neurons, we compare several types of couplings mediated by either single synapses or interneuron chains. Due to its robustness to substrate imperfections such as parameter noise and background noise correlations, our model is particularly interesting for implementation on novel, neuro-inspired computing architectures, which can thereby serve as a fast, low-power substrate for solving real-world inference problems.

Affiliation: Kirchhoff Institute for Physics, University of Heidelberg, Heidelberg, Germany.

ABSTRACT
The means by which cortical neural networks are able to efficiently solve inference problems remains an open question in computational neuroscience. Recently, abstract models of Bayesian computation in neural circuits have been proposed, but they lack a mechanistic interpretation at the single-cell level. In this article, we describe a complete theoretical framework for building networks of leaky integrate-and-fire neurons that can sample from arbitrary probability distributions over binary random variables. We test our framework for a model inference task based on a psychophysical phenomenon (the Knill-Kersten optical illusion) and further assess its performance when applied to randomly generated distributions. As the local computations performed by the network strongly depend on the interaction between neurons, we compare several types of couplings mediated by either single synapses or interneuron chains. Due to its robustness to substrate imperfections such as parameter noise and background noise correlations, our model is particularly interesting for implementation on novel, neuro-inspired computing architectures, which can thereby serve as a fast, low-power substrate for solving real-world inference problems.
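For concreteness, the target of the sampling framework can be illustrated with a conventional software sampler. The sketch below is a minimal Gibbs sampler for a Boltzmann distribution over binary random variables, the distribution class typically targeted in neural-sampling work; the weights, biases, and update rule are illustrative stand-ins, not the paper's LIF network.

```python
import numpy as np

def gibbs_sample_boltzmann(W, b, n_steps=10000, rng=None):
    """Gibbs-sample binary states z from p(z) ~ exp(0.5 * z.W.z + b.z).

    Toy reference sampler: each update draws z_k from its conditional
    p(z_k = 1 | rest) = sigmoid(sum_j W[k, j] * z_j + b[k]),
    assuming symmetric W with zero diagonal.
    """
    rng = rng or np.random.default_rng(0)
    K = len(b)
    z = rng.integers(0, 2, size=K)
    samples = np.empty((n_steps, K), dtype=int)
    for t in range(n_steps):
        for k in range(K):
            u = W[k] @ z + b[k]                      # local field of RV k
            z[k] = rng.random() < 1.0 / (1.0 + np.exp(-u))
        samples[t] = z
    return samples

# Example: two positively coupled binary RVs (illustrative parameters)
W = np.array([[0.0, 1.5], [1.5, 0.0]])
b = np.array([-0.5, -0.5])
s = gibbs_sample_boltzmann(W, b)
print(s.mean(axis=0))  # estimated marginals p(z_k = 1)
```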

Figure 4: To establish a coupling closer to the ideal rectangular PSP, the following network structure is set up: instead of a single principal neuron ν per RV, each RV is represented by a chain of neurons. In addition to the network connections imposed by the translation of the modeled Bayesian graph, feedforward connections are generated between the neurons within each chain. Furthermore, each chain neuron projects onto the first neuron of the postsynaptic interneuron chain (here: all connections from the ν_i1 onto ν_12). By choosing appropriate synaptic efficacies and delays, the chain generates a superposition of single PSP kernels whose sawtooth-like shape is closer to the desired rectangular shape than a single PSP.
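The wiring scheme described in the caption can be made explicit in a few lines. The sketch below builds the two connection types (feedforward links within each chain, and all-to-first links from a chain to the main neuron of a coupled chain); the chain length, RV count, and index convention are illustrative assumptions, not taken from the paper.

```python
def chain_connections(n_rvs, chain_len, coupled_pairs):
    """Build the connection list for the interneuron-chain coupling.

    Returns (pre, post) index pairs, with neuron i of chain k flattened
    to index k * chain_len + i; the "main" neuron of chain k is index
    k * chain_len. Two connection types:
      1. feedforward within each chain: neuron i -> neuron i + 1
      2. every neuron of chain k -> first neuron of coupled chain j
    """
    conns = []
    for k in range(n_rvs):
        for i in range(chain_len - 1):   # 1. feedforward along the chain
            conns.append((k * chain_len + i, k * chain_len + i + 1))
    for k, j in coupled_pairs:           # 2. chain -> postsynaptic main neuron
        for i in range(chain_len):
            conns.append((k * chain_len + i, j * chain_len))
    return conns

# Example: two RVs, chains of length 3, RV 0 coupled onto RV 1
print(chain_connections(n_rvs=2, chain_len=3, coupled_pairs=[(0, 1)]))
```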

Mentions: To reduce this discrepancy, we replaced the single-PSP interaction between pairs of neurons with a superposition of LIF PSP kernels. To this end, the single neuron coding for an RV was replaced by a chain of neurons (see Figure 4). In this setup, the first neuron in a chain is considered the "main" neuron, and only the spikes it emits encode the state zk = 1. However, all neurons of a chain project onto the main neuron of the chain representing a related RV. That neuron therefore registers a superposition of PSPs, which can be tuned (e.g., with the parameter values from Table 2) to closely approximate the ideal rectangular shape by appropriately setting synaptic weights and delays within as well as between the chains. In particular, the long tail of the last PSP is cut off by setting the effect of the last neuron in the chain to oppose that of all the others (e.g., if the interaction between the RVs is to be positive, all neurons in the chain project onto their target with excitatory synapses, while the last one has an inhibitory outgoing connection). While this implementation scales the number of network components (neurons and synapses) only linearly with the chosen chain length, it improves the sampling results significantly (Figures 3B,C,E, gray bars/traces).
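The effect of these weight and delay choices on the effective interaction kernel can be checked numerically. The following sketch superimposes simple exponential PSP kernels with staggered delays and flips the sign of the last one to cut off the tail, as described above; all kernel parameters are illustrative (Table 2 of the paper is not reproduced here).

```python
import numpy as np

def psp(t, delay, weight, tau=10.0):
    """Single exponential PSP kernel: weight * exp(-(t - delay)/tau) for t >= delay."""
    return weight * np.exp(-(t - delay) / tau) * (t >= delay)

t = np.linspace(0.0, 50.0, 501)        # time axis in ms
delays = [0.0, 5.0, 10.0, 15.0]        # staggered arrival times along the chain
weights = [1.0, 0.4, 0.4, -1.2]        # last neuron inhibitory: cuts off the tail

# Superposition of the chain's PSPs: a sawtooth-like approximation
kernel = sum(psp(t, d, w) for d, w in zip(delays, weights))

# Compare against the ideal rectangular PSP of 15 ms duration
rect = 1.0 * ((t >= 0.0) & (t < 15.0))
print("max |kernel - rect|:", np.abs(kernel - rect).max())
```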

