The interplay of plasticity and adaptation in neural circuits: a generative model.

Bernacchia A - Front Synaptic Neurosci (2014)

Bottom Line: The neural circuit is able to generate spontaneous patterns of activity that reproduce exactly the probability distribution of experienced stimuli. In particular, the landscape of the phase space includes a large number of stable states (attractors) that sample precisely this prior distribution. This work demonstrates that the interplay between distinct dynamical processes gives rise to useful computation, and proposes a framework in which neural circuit models for Bayesian inference may be developed in the future.

View Article: PubMed Central - PubMed

Affiliation: School of Engineering and Science, Jacobs University Bremen, Bremen, Germany.

ABSTRACT
Multiple neural and synaptic phenomena take place in the brain. They operate over a broad range of timescales, and the consequences of their interplay are still unclear. In this work, I study a computational model of a recurrent neural network in which two dynamic processes take place: sensory adaptation and synaptic plasticity. Both phenomena are ubiquitous in the brain, but their dynamic interplay has not been investigated. I show that when both processes are included, the neural circuit is able to perform a specific computation: it becomes a generative model for certain distributions of input stimuli. The neural circuit is able to generate spontaneous patterns of activity that reproduce exactly the probability distribution of experienced stimuli. In particular, the landscape of the phase space includes a large number of stable states (attractors) that sample precisely this prior distribution. This work demonstrates that the interplay between distinct dynamical processes gives rise to useful computation, and proposes a framework in which neural circuit models for Bayesian inference may be developed in the future.

No MeSH data available.



Figure 1: Schematic illustration of the neural circuit model with its tuning curves and recurrent connections. Each circle represents one neuron and each arrow a synaptic connection. Each rectangle shows a tuning curve for one neuron, namely the external current afferent to that neuron plotted as a function of the stimulus value. Left: sigmoidal tuning curves. Right: periodic tuning curves.

Mentions: An illustration of the tuning curves is presented in Figure 1. I define μi as the “tuning offset”: different neurons have different offsets, but the same shape of the tuning curve. The results of most simulations are shown for the sigmoidal tuning curve (3), but very similar results were obtained for a periodic tuning curve (4) (see Appendix). The parameter β is positive, and its specific value is irrelevant, since the neuron output is binary, given by Equation (2), and the internal current is zero during the stimulus. In a given simulation, the probability distribution P(α) is taken from a parametric family (see e.g., Figure 3), and different parameters are drawn at random in different simulations (the distribution equals the square of a Fourier series with random coefficients truncated at five terms).
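The construction described above can be sketched in a few lines of NumPy. Note that the exact forms of Equations (3) and (4) are not reproduced here, so the logistic sigmoid, the cosine-based periodic curve, and all function names below are illustrative assumptions; only the squared-truncated-Fourier-series recipe for P(α) follows the text directly.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoidal_tuning(alpha, mu, beta=2.0):
    """Sigmoidal tuning curve: external current as a function of the
    stimulus value alpha, with tuning offset mu. A standard logistic
    sigmoid is assumed here (the paper's Equation 3 is not shown).
    Per the text, the exact value of beta > 0 is irrelevant because
    the neuron output is binarized."""
    return 1.0 / (1.0 + np.exp(-beta * (alpha - mu)))

def periodic_tuning(alpha, mu, beta=2.0):
    """Periodic tuning curve (assumed cosine-modulated form; the
    paper's Equation 4 is not shown)."""
    return 1.0 / (1.0 + np.exp(-beta * np.cos(alpha - mu)))

def random_stimulus_distribution(n_terms=5, n_points=1000):
    """P(alpha): the square of a Fourier series with random
    coefficients, truncated at five terms, normalized on a grid."""
    grid = np.linspace(0.0, 2.0 * np.pi, n_points)
    a = rng.standard_normal(n_terms)
    b = rng.standard_normal(n_terms)
    series = sum(a[k] * np.cos(k * grid) + b[k] * np.sin(k * grid)
                 for k in range(n_terms))
    p = series ** 2                 # squaring guarantees P(alpha) >= 0
    dx = grid[1] - grid[0]
    p /= p.sum() * dx               # normalize so P integrates to 1
    return grid, p

grid, p = random_stimulus_distribution()
```

Squaring the truncated series is what makes this a valid probability density: it is non-negative by construction, and a single normalization step fixes the integral to one, so each simulation draws a fresh, smooth prior from the same parametric family.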


The interplay of plasticity and adaptation in neural circuits: a generative model.

Bernacchia A - Front Synaptic Neurosci (2014)

Schematic illustration of the neural circuit model with its tuning curves and recurrent connections. Each circle represents one neuron and each arrow a synaptic connection. Each rectangle shows a tuning curve for one neuron, namely the external current afferent to that neuron plotted as a function of the stimulus value. Left: sigmoidal tuning curves. Right: periodic tuning curves.
© Copyright Policy - open-access
Related In: Results  -  Collection

License
Show All Figures
getmorefigures.php?uid=PMC4214225&req=5

Figure 1: Schematic illustration of the neural circuit model with its tuning curves and recurrent connections. Each circle represents one neuron and each arrow a synaptic connection. Each rectangle shows a tuning curve for one neuron, namely the external current afferent to that neuron plotted as a function of the stimulus value. Left: sigmoidal tuning curves. Right: periodic tuning curves.
Mentions: An illustration of the tuning curves is presented in Figure 1. I define μi as the “tuning offset”: different neurons have different offsets, but the same shape of the tuning curve. The results of most simulation are shown for the sigmoidal tuning curve (3), but very similar results have been obtained for a periodic tuning curve (4) (see Appendix). The parameter β is positive, and its specific value is irrelevant, since the neuron output is binary, given by Equation (2), and the internal current is zero during the stimulus. In a given simulation, the probability distribution P(α) is taken from a parametric family (see e.g., Figure 3), and different parameters are drawn at random in different simulations (the distribution equals the square of a Fourier series with random coefficients truncated at five terms).

Bottom Line: The neural circuit is able to generate spontaneous patterns of activity that reproduce exactly the probability distribution of experienced stimuli.In particular, the landscape of the phase space includes a large number of stable states (attractors) that sample precisely this prior distribution.This work demonstrates that the interplay between distinct dynamical processes gives rise to useful computation, and proposes a framework in which neural circuit models for Bayesian inference may be developed in the future.

View Article: PubMed Central - PubMed

Affiliation: School of Engineering and Science, Jacobs University Bremen Bremen, Germany.

ABSTRACT
Multiple neural and synaptic phenomena take place in the brain. They operate over a broad range of timescales, and the consequences of their interplay are still unclear. In this work, I study a computational model of a recurrent neural network in which two dynamic processes take place: sensory adaptation and synaptic plasticity. Both phenomena are ubiquitous in the brain, but their dynamic interplay has not been investigated. I show that when both processes are included, the neural circuit is able to perform a specific computation: it becomes a generative model for certain distributions of input stimuli. The neural circuit is able to generate spontaneous patterns of activity that reproduce exactly the probability distribution of experienced stimuli. In particular, the landscape of the phase space includes a large number of stable states (attractors) that sample precisely this prior distribution. This work demonstrates that the interplay between distinct dynamical processes gives rise to useful computation, and proposes a framework in which neural circuit models for Bayesian inference may be developed in the future.

No MeSH data available.