The interplay of plasticity and adaptation in neural circuits: a generative model.
Bottom Line:
The neural circuit is able to generate spontaneous patterns of activity that reproduce exactly the probability distribution of experienced stimuli. In particular, the landscape of the phase space includes a large number of stable states (attractors) that sample precisely this prior distribution. This work demonstrates that the interplay between distinct dynamical processes gives rise to useful computation, and proposes a framework in which neural circuit models for Bayesian inference may be developed in the future.
View Article:
PubMed Central - PubMed
Affiliation: School of Engineering and Science, Jacobs University Bremen, Bremen, Germany.
ABSTRACT
Multiple neural and synaptic phenomena take place in the brain. They operate over a broad range of timescales, and the consequences of their interplay are still unclear. In this work, I study a computational model of a recurrent neural network in which two dynamic processes take place: sensory adaptation and synaptic plasticity. Both phenomena are ubiquitous in the brain, but their dynamic interplay has not been investigated. I show that when both processes are included, the neural circuit is able to perform a specific computation: it becomes a generative model for certain distributions of input stimuli. The neural circuit is able to generate spontaneous patterns of activity that reproduce exactly the probability distribution of experienced stimuli. In particular, the landscape of the phase space includes a large number of stable states (attractors) that sample precisely this prior distribution. This work demonstrates that the interplay between distinct dynamical processes gives rise to useful computation, and proposes a framework in which neural circuit models for Bayesian inference may be developed in the future.
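The abstract describes a recurrent network in which plasticity stores experienced stimuli and adaptation lets spontaneous activity wander among the resulting attractors. A minimal cartoon in Python can illustrate the idea; note that the network size, the Hebbian rule, the fatigue variable, and the binary ±1 patterns below are all illustrative assumptions, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 40          # neurons
T_learn = 200   # stimulus presentations
T_free = 2000   # spontaneous-activity steps

# Two toy stimulus patterns shown with unequal frequency (the paper
# instead uses continuous stimuli filtered through tuning curves).
patterns = rng.choice([-1, 1], size=(2, N))
freq = [0.75, 0.25]

# Synaptic plasticity: Hebbian accumulation of evoked outer products.
W = np.zeros((N, N))
for _ in range(T_learn):
    x = patterns[rng.choice(2, p=freq)]
    W += np.outer(x, x) / (N * T_learn)
np.fill_diagonal(W, 0.0)

# Spontaneous dynamics with adaptation: a slow fatigue variable a_i is
# subtracted from the recurrent input, destabilizing whichever attractor
# the network currently occupies so activity keeps moving.
x = rng.choice([-1, 1], size=N)
a = np.zeros(N)
visits = np.zeros(2)
for _ in range(T_free):
    x = np.sign(W @ x - a)
    x[x == 0] = 1                     # break ties deterministically
    a = 0.98 * a + 0.05 * x           # fatigue tracks recent activity
    overlaps = patterns @ x / N
    visits[np.argmax(np.abs(overlaps))] += 1

occupancy = visits / visits.sum()     # fraction of time near each pattern
```

This cartoon only shows the mechanism (plasticity carves attractors, adaptation drives itinerancy); it makes no claim to reproduce the paper's quantitative result that attractor occupancy matches the stimulus distribution.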
Mentions: An illustration of the tuning curves is presented in Figure 1. I define μi as the “tuning offset”: different neurons have different offsets, but the same shape of the tuning curve. The results of most simulations are shown for the sigmoidal tuning curve (3), but very similar results have been obtained for a periodic tuning curve (4) (see Appendix). The parameter β is positive, and its specific value is irrelevant, since the neuron output is binary, given by Equation (2), and the internal current is zero during the stimulus. In a given simulation, the probability distribution P(α) is taken from a parametric family (see e.g., Figure 3), and different parameters are drawn at random in different simulations (the distribution equals the square of a Fourier series with random coefficients truncated at five terms).
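The ingredients described here can be sketched in Python. The exact forms of Equations (2)–(4) are not reproduced in this excerpt, so the sigmoid below and the choice of the stimulus range [0, 1) are assumptions; the squared, truncated Fourier series for P(α) follows the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sigmoidal tuning curve with per-neuron offset mu_i (assumed form;
# the paper's Equation (3) may differ in detail).
def tuning(alpha, mu, beta=5.0):
    return 1.0 / (1.0 + np.exp(-beta * (alpha - mu)))

# Binary output: each neuron fires with probability given by its tuning
# curve (a stand-in for the thresholding in Equation (2)).
def spike(alpha, mu, beta=5.0):
    return (rng.random(np.shape(mu)) < tuning(alpha, mu, beta)).astype(int)

# Stimulus distribution P(alpha): the square of a Fourier series with
# random coefficients, truncated at five terms, normalized on [0, 1).
def random_prior(n_terms=5, grid=np.linspace(0.0, 1.0, 512, endpoint=False)):
    a = rng.standard_normal(n_terms)
    b = rng.standard_normal(n_terms)
    series = sum(a[k] * np.cos(2 * np.pi * k * grid) +
                 b[k] * np.sin(2 * np.pi * k * grid) for k in range(n_terms))
    p = series ** 2                       # squaring guarantees P >= 0
    return grid, p / (p.sum() * (grid[1] - grid[0]))

grid, p = random_prior()
mu = np.linspace(0.0, 1.0, 20)            # evenly spaced tuning offsets
alpha = rng.choice(grid, p=p / p.sum())   # draw a stimulus from P(alpha)
response = spike(alpha, mu)
```

Squaring the series is what makes the density non-negative for any random coefficients, which is presumably why the paper parametrizes P(α) this way.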
No MeSH data available.