Causal Inference and Explaining Away in a Spiking Network.

Moreno-Bote R, Drugowitsch J - Sci Rep (2015)

Bottom Line: Here we demonstrate that a family of high-dimensional quadratic optimization problems with non-negativity constraints can be solved exactly and efficiently by a network of spiking neurons. The network naturally imposes the non-negativity of causal contributions that is fundamental to causal inference, and uses simple operations, such as linear synapses with realistic time constants, and neural spike generation and reset non-linearities. This type of network might underlie tasks such as odor identification and classification.


Affiliation: Department of Technologies of Information and Communication, University Pompeu Fabra, 08018 Barcelona, Spain.

ABSTRACT
While the brain uses spiking neurons for communication, theoretical research on brain computations has mostly focused on non-spiking networks. The nature of spike-based algorithms that achieve complex computations, such as object probabilistic inference, is largely unknown. Here we demonstrate that a family of high-dimensional quadratic optimization problems with non-negativity constraints can be solved exactly and efficiently by a network of spiking neurons. The network naturally imposes the non-negativity of causal contributions that is fundamental to causal inference, and uses simple operations, such as linear synapses with realistic time constants, and neural spike generation and reset non-linearities. The network infers the set of most likely causes from an observation using explaining away, which is dynamically implemented by spike-based, tuned inhibition. The algorithm performs remarkably well even when the network intrinsically generates variable spike trains, the timing of spikes is scrambled by external sources of noise, or the network is mistuned. This type of network might underlie tasks such as odor identification and classification.

No MeSH data available.
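For concreteness, the optimization problem named in the abstract can be written as minimizing 0.5·||μ − U r||² + α·||r||₁ + β·||r||₂² over r ≥ 0, where the columns of U are the feature vectors ui described in the figure caption below. The following Python (NumPy) sketch solves this program by plain projected gradient descent; it is a minimal non-spiking reference, and the function name, step-size rule, and default parameters are illustrative assumptions rather than the authors' algorithm.

    import numpy as np

    def solve_nnqp(U, mu, alpha=0.1, beta=0.1, n_steps=5000):
        """Projected gradient descent on the non-negative quadratic program
            minimize_r  0.5*||mu - U r||^2 + alpha*||r||_1 + beta*||r||_2^2
            subject to  r >= 0.
        """
        # Safe step size from the Lipschitz constant of the smooth part.
        lr = 1.0 / (np.linalg.norm(U, 2) ** 2 + 2.0 * beta)
        r = np.zeros(U.shape[1])
        for _ in range(n_steps):
            # On the non-negative orthant, the gradient of alpha*||r||_1 is alpha.
            grad = U.T @ (U @ r - mu) + alpha + 2.0 * beta * r
            r = np.maximum(0.0, r - lr * grad)  # gradient step, then project onto r >= 0
        return r

With α = β = 0 this reduces to non-negative least squares, the unregularized form of the causal inference problem.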



f1: A spiking network can exactly solve a high-dimensional causal inference problem. (a,b) Generative model. A potentially very large number N of hidden causes generate an observation (a). Each cause i is represented as an entry of the N-dimensional vector r and is characterized by a non-negative number, ri ≥ 0, called its cause coefficient. The cause coefficient ri indicates both the presence of cause i, if non-zero, and its strength, such as contrast or concentration. Associated with each cause i is a feature vector ui of dimension M. The observation μ is a linear combination of the feature vectors (the causes) weighted by the non-negative cause coefficients ri and corrupted by noise (b). (c) A network of integrate-and-fire neurons with tuned inhibition implements dynamic, spike-based explaining away and solves a causal inference problem corresponding to quadratic programming with non-negativity constraints. Global inhibition (α term) and renormalized reset voltages (β term) implement L1 and L2 regularization, respectively.
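As a rough illustration of panel (c), the sketch below simulates non-leaky integrate-and-fire neurons whose lateral inhibition is tuned to the feature overlaps ui·uj, so that each spike "explains away" the part of the input its cause accounts for. The voltage dynamics, threshold, and reset used here are simplified assumptions for illustration, not the paper's exact equations; spike counts over the simulation window serve as estimates of the cause coefficients ri.

    import numpy as np

    def spiking_explaining_away(U, mu, alpha=0.1, beta=0.1, T=10.0, dt=1e-4):
        """Schematic integrate-and-fire network for non-negative causal inference.
        Neuron i integrates the feedforward drive ui.mu; each spike of neuron j
        subtracts the tuned inhibition ui.uj from every voltage (explaining away,
        with j = i acting as the self-reset). alpha adds global inhibition (L1
        term) and beta raises the effective threshold (reset renormalization,
        L2 term).
        """
        W = U.T @ U                                  # tuned lateral weights ui.uj
        drive = U.T @ mu                             # feedforward input ui.mu
        threshold = 0.5 * np.diag(W) + alpha + beta  # illustrative threshold choice
        V = np.zeros(U.shape[1])
        counts = np.zeros(U.shape[1])
        for _ in range(int(T / dt)):
            V += dt * drive                          # leakless integration
            spiking = V >= threshold
            if spiking.any():
                counts[spiking] += 1.0
                V -= W[:, spiking].sum(axis=1)       # lateral inhibition + self-reset
        return counts / T                            # firing rates ~ cause coefficients

Run on the same U and μ as the quadratic program above, the firing rates should approximate that program's non-negative solution as the simulation window grows, which is the sense in which such a network "solves" the inference problem.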

Mentions: We consider a high-dimensional inference problem where an arbitrary combination of N causes can generate an observation (Fig. 1a). The observation is described by an “input” vector μ of dimension M (e.g., gray levels of an image with M pixels, or the M-dimensional chemical composition of an odor), which is generated as a linear combination of causes corrupted by noise,

μ = Σi ri ui + η,

where the sum runs over the N causes and η is an observation noise vector.
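Sampling from this generative model is straightforward; in the short sketch below, the dimensions, the number of active causes, and the Gaussian noise model are assumptions made for illustration (the text above does not fix them):

    import numpy as np

    rng = np.random.default_rng(0)
    N, M = 50, 20                        # assumed numbers of causes and input dimensions
    U = rng.random((M, N))               # column ui = feature vector of cause i
    r_true = np.zeros(N)
    active = rng.choice(N, size=3, replace=False)
    r_true[active] = 0.5 + rng.random(3)             # a few non-negative active causes
    mu = U @ r_true + 0.05 * rng.standard_normal(M)  # mu = sum_i ri*ui + noise

Passing this μ and U to either sketch above yields a non-negative estimate of r_true, with explaining away suppressing the coefficients of correlated but inactive causes.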

