Causal Inference and Explaining Away in a Spiking Network.

Moreno-Bote R, Drugowitsch J - Sci Rep (2015)

Bottom Line: Here we demonstrate that a family of high-dimensional quadratic optimization problems with non-negativity constraints can be solved exactly and efficiently by a network of spiking neurons. The network naturally imposes the non-negativity of causal contributions that is fundamental to causal inference, and uses simple operations, such as linear synapses with realistic time constants, and neural spike generation and reset non-linearities. This type of network might underlie tasks such as odor identification and classification.


Affiliation: Department of Technologies of Information and Communication, University Pompeu Fabra, 08018 Barcelona, Spain.

ABSTRACT
While the brain uses spiking neurons for communication, theoretical research on brain computations has mostly focused on non-spiking networks. The nature of spike-based algorithms that achieve complex computations, such as object probabilistic inference, is largely unknown. Here we demonstrate that a family of high-dimensional quadratic optimization problems with non-negativity constraints can be solved exactly and efficiently by a network of spiking neurons. The network naturally imposes the non-negativity of causal contributions that is fundamental to causal inference, and uses simple operations, such as linear synapses with realistic time constants, and neural spike generation and reset non-linearities. The network infers the set of most likely causes from an observation using explaining away, which is dynamically implemented by spike-based, tuned inhibition. The algorithm performs remarkably well even when the network intrinsically generates variable spike trains, the timing of spikes is scrambled by external sources of noise, or the network is mistuned. This type of network might underlie tasks such as odor identification and classification.
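As context for the problem class named in the abstract, the following is a minimal sketch of a quadratic optimization problem with non-negativity constraints (non-negative least squares) solved by a conventional solver. It is not the paper's spiking implementation; the feature matrix U, the observation mu, and the chosen dimensions are illustrative placeholders.

import numpy as np
from scipy.optimize import nnls

# Conventional form of the problem class:
#     minimize ||mu - U r||^2   subject to   r >= 0
# where the columns of U are candidate feature (cause) vectors and r are
# the non-negative causal contributions to be inferred.
rng = np.random.default_rng(0)
U = rng.random((20, 50))                # 50 candidate features, 20-dimensional input
r_true = np.zeros(50)
r_true[[3, 17, 41]] = [0.5, 1.0, 0.2]   # a few active causes, all non-negative
mu = U @ r_true                         # noiseless observation generated by those causes

r_hat, residual = nnls(U, mu)           # non-negative least-squares estimate of the causes
print("causes with non-zero estimated contribution:", np.flatnonzero(r_hat > 1e-6))
print("residual norm:", residual)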



Figure 3: Effects of regularization on population activity and performance in a network with an overcomplete basis of feature vectors. (a–d) Spiking activity for the standard network without regularization (a), with L1 regularization (b), L2 regularization (c), and voltage leak (d). In all cases the input equals the 10th feature vector. (e) Angular error as a function of time (100 ms time windows).

Mentions: We also compared the behavior of the network with and without regularization terms in an overcomplete scenario, namely, a case in which the dimensionality of the input vector was smaller than the number of features (Fig. 3). We stimulated the network with a single feature and studied its identification performance. We found that L1 regularization, implemented in our networks as global inhibition, creates a sparser representation of the input vector than the same network without L1 regularization (Fig. 3a,b). With L1 regularization, the network converges to the true input vector in just a few spikes (Fig. 3b), and the angular error decays to zero within a few hundred milliseconds (Fig. 3e). The reason for the convergence is that in our simulations the stored feature vectors are normalized to the same length, |ui| = |uj| for all i and j. In this case, L1 regularization always produces sparse representations of the stimulus vector if the stimulus coincides with one of the stored feature vectors. If stored feature vectors had unequal lengths, then the stimulation of one stored feature would have led to non-sparse firing. To make this clear, assume that due to the overcomplete representation of the stimulus space, the feature vector ui can be expressed as a sum of other feature vectors, ui = ∑j aj uj, with j ≠ i and aj ≥ 0. If this is the case, there are at least two distinct activity patterns that can fully represent the stimulus μ = ui: the first is a sparse pattern in which a single neuron (neuron i) fires at rate ri = 1 Hz and all other neurons are inactive, while the second is a non-sparse pattern in which neurons fire at rates rj = aj for all j ≠ i. However, given the equal normalization of all feature vectors, the triangle inequality gives |ui| ≤ ∑j aj |uj| = |ui| ∑j aj, so ∑j aj > 1 whenever the uj are not all parallel to ui. Therefore L1 regularization, which penalizes large total population activity, will favor the sparse over the dense pattern.
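To make the sparse-versus-dense comparison concrete, here is a minimal numerical sketch using hypothetical toy vectors (not the paper's simulated network or its 10th stored feature): one equal-length feature is constructed as a non-negative combination of two others, and the total population activity of the two representations is compared.

import numpy as np

# Three unit-length feature vectors; u3 is a non-negative combination of u1 and u2,
# rescaled so that all stored features have equal length.
u1 = np.array([1.0, 0.0, 0.0])
u2 = np.array([0.0, 1.0, 0.0])
u3 = (u1 + u2) / np.linalg.norm(u1 + u2)   # u3 = a*(u1 + u2) with a = 1/sqrt(2)

# Stimulus equals the stored feature u3.
mu = u3

# Sparse representation: neuron 3 alone firing at rate 1.
sparse_total = 1.0

# Dense representation: neurons 1 and 2 firing at rates a1 = a2 = 1/sqrt(2).
a = 1.0 / np.linalg.norm(u1 + u2)
assert np.allclose(a * u1 + a * u2, mu)    # both patterns represent mu exactly
dense_total = 2 * a                        # = sqrt(2) ≈ 1.414 > 1

print(f"total activity, sparse pattern: {sparse_total:.3f}")
print(f"total activity, dense pattern:  {dense_total:.3f}")
# L1 regularization (global inhibition) penalizes the larger total activity,
# so the sparse, single-neuron representation is favored.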

