Towards a general theory of neural computation based on prediction by single neurons.

Fiorillo CD - PLoS ONE (2008)

Bottom Line: To minimize the error in its predictions and to respond only when excitation is "new and surprising," the neuron selects amongst its prior information sources through an anti-Hebbian rule. The unique inputs of a mature neuron would therefore result from learning about spatial and temporal patterns in its local environment, and by extension, the external world. Thus the theory describes how the structure of the mature nervous system could reflect the structure of the external world, and how the complexity and intelligence of the system might develop from a population of undifferentiated neurons, each implementing similar learning algorithms.

View Article: PubMed Central - PubMed

Affiliation: Department of Neurobiology, Stanford University, Stanford, California, USA. chris@monkeybiz.stanford.edu

ABSTRACT
Although there has been tremendous progress in understanding the mechanics of the nervous system, there has not been a general theory of its computational function. Here I present a theory that relates the established biophysical properties of single generic neurons to principles of Bayesian probability theory, reinforcement learning and efficient coding. I suggest that this theory addresses the general computational problem facing the nervous system. Each neuron is proposed to mirror the function of the whole system in learning to predict aspects of the world related to future reward. According to the model, a typical neuron receives current information about the state of the world from a subset of its excitatory synaptic inputs, and prior information from its other inputs. Prior information would be contributed by synaptic inputs representing distinct regions of space, and by different types of non-synaptic, voltage-regulated channels representing distinct periods of the past. The neuron's membrane voltage is proposed to signal the difference between current and prior information ("prediction error" or "surprise"). A neuron would apply a Hebbian plasticity rule to select those excitatory inputs that are the most closely correlated with reward but are the least predictable, since unpredictable inputs provide the neuron with the most "new" information about future reward. To minimize the error in its predictions and to respond only when excitation is "new and surprising," the neuron selects amongst its prior information sources through an anti-Hebbian rule. The unique inputs of a mature neuron would therefore result from learning about spatial and temporal patterns in its local environment, and by extension, the external world. Thus the theory describes how the structure of the mature nervous system could reflect the structure of the external world, and how the complexity and intelligence of the system might develop from a population of undifferentiated neurons, each implementing similar learning algorithms.
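
As a rough illustration of the learning scheme the abstract describes, the Python sketch below treats a neuron's "voltage" as the difference between its current excitatory drive and a prediction assembled from prior sources, with a reward-gated Hebbian update on the current inputs and an anti-Hebbian, error-reducing update on the prior weights. Everything in it is an assumption made for illustration: the linear model, the toy reward signal r, the choice of prior sources, and the exact update rules are a caricature, not the paper's equations.

    import numpy as np

    rng = np.random.default_rng(0)

    n_in = 5                          # "current" excitatory inputs
    w = rng.uniform(0.0, 0.1, n_in)   # weights on current inputs
    u = np.zeros(n_in)                # weights on prior information sources
    eta_w, eta_u = 0.01, 0.05         # learning rates (arbitrary)

    for t in range(1000):
        x = rng.random(n_in)                        # current excitatory input
        p = np.concatenate(([1.0], x[:n_in - 1]))   # toy priors: bias + context
        error = w @ x - u @ p       # "membrane voltage" as prediction error
        r = x[0]                    # toy reward, correlated with input 0
        w += eta_w * r * error * x  # Hebbian: favors reward-linked, unpredicted inputs
        u += eta_u * error * p      # anti-Hebbian: drives prediction error to zero

Under this caricature the prior weights come to cancel the predictable part of the drive, so the "voltage" responds mainly to new, reward-relevant input, which is the qualitative behavior the abstract attributes to the model neuron.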

pone-0003298-g002: Estimates of glutamate concentration by glutamate-gated cation channels (“layer 1”) and by voltage-gated K+ channels (“layer 2”) in a model neuron that has 100 channels of each type. See Methods and Text S1 for details. A. Estimates made by single two-state sensors in their on conformations (equations 5–7). The glutamate sensor (red) had an equilibrium dissociation constant (KD) of 500 µM. The voltage sensor (blue) had 4 elementary charges (z), and the voltage at which either state was equally likely (V1/2) was −50 mV. B. Glutamate concentration (magenta) was stepped from 10 to 1000 µM, which evoked a membrane depolarization that declined with time (black). C. The conductance of glutamate-gated cation channels and voltage-gated K+ channels. In each case the maximal possible conductance was 100. D. Maximum likelihood estimates and expected values of glutamate concentration conditional only on information present in the populations of sensors in layers 1 and 2. E–H. Probability distributions of glutamate concentrations at time points 1–4, as indicated in panel C. Each of these distributions is entirely conditional on the information of layer 1 or layer 2. Note that glutamate concentration is presented on a logarithmic scale, and that the y-axes differ in F and G relative to E and H.
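
For concreteness, the following Python sketch evaluates the on-state probabilities of the two sensor types using the parameter values quoted in the caption (KD = 500 µM; z = 4, V1/2 = −50 mV). The binding and Boltzmann forms here are the standard two-state expressions and are assumed to correspond to equations 5–7 in the paper's Methods; kT/e ≈ 26.7 mV (37 °C) is likewise an assumption about the simulation temperature.

    import numpy as np

    # Two-state glutamate sensor: probability of the "on" (ligand-bound)
    # conformation at concentration c, given dissociation constant KD.
    def p_on_glutamate(c_uM, KD_uM=500.0):
        return c_uM / (c_uM + KD_uM)

    # Two-state voltage sensor (Boltzmann relation) with gating charge z and
    # half-activation voltage V_half; kT/e is about 26.7 mV at 37 degrees C.
    def p_on_voltage(v_mV, z=4.0, v_half_mV=-50.0, kT_over_e_mV=26.7):
        return 1.0 / (1.0 + np.exp(-z * (v_mV - v_half_mV) / kT_over_e_mV))

    # Activation curves in the spirit of panel A.
    c = np.logspace(0, 4, 200)          # 1 to 10,000 uM, log-spaced
    v = np.linspace(-100.0, 0.0, 200)   # mV
    glu_curve = p_on_glutamate(c)       # equals 0.5 at c = KD = 500 uM
    volt_curve = p_on_voltage(v)        # equals 0.5 at V = V_half = -50 mV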

Mentions: If a neuron possesses information about the intensity of its stimulus, then we can say that it estimates or predicts its stimulus (“estimate” and “predict” are used here as synonyms, and “prediction” could apply to the present as well as the future). To quantify a neuron's prediction, we would like to find the probability distribution of possible stimulus intensities conditional exclusively on the information possessed by the neuron. A neuron gathers information about its stimulus through sensors (Fig. S1), such as rhodopsin or glutamate receptors, which are coupled to ion channels and thereby influence the neuron's membrane voltage. As described in Methods, the Maxwell-Boltzmann equation of statistical mechanics (equation 5) specifies the likelihood of various stimulus intensities given the state of a sensor (Fig. 2). We can therefore determine the probability distribution of potential stimulus intensities conditional only on the information in one or more sensors (Fig. 2). Thus, merely by deploying sensors in its plasma membrane, the neuron performs the critical function of predicting stimulus intensity. The prediction is necessarily accompanied by a reduction in uncertainty (relative to the complete uncertainty and flat distribution that would accompany the absence of sensors), and in principle, the reduction in uncertainty can be precisely quantified.
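
A minimal sketch of this inversion, assuming the 100 glutamate sensors of "layer 1" are independent two-state sensors: if k of N sensors are in the on state, the likelihood of concentration c is binomial in p_on(c) = c/(c + KD), and with a flat prior on log concentration the posterior is just the normalized likelihood (compare panels E–H). The count k = 67 below is hypothetical.

    import numpy as np

    KD = 500.0   # uM, from the figure caption
    N = 100      # glutamate sensors in "layer 1"

    def p_on(c):
        return c / (c + KD)   # on-state probability of one two-state sensor

    def posterior_over_c(k, c_grid):
        """Distribution over concentration conditional only on k of N
        sensors being "on" (binomial likelihood, flat prior on log c)."""
        log_like = k * np.log(p_on(c_grid)) + (N - k) * np.log(1.0 - p_on(c_grid))
        w = np.exp(log_like - log_like.max())   # stabilize before normalizing
        return w / w.sum()

    c_grid = np.logspace(0, 4, 400)   # 1 to 10,000 uM, log scale as in E-H
    k = 67                            # hypothetical count of "on" sensors
    post = posterior_over_c(k, c_grid)
    c_mle = KD * k / (N - k)          # closed form: solve p_on(c) = k/N
    c_mean = np.sum(post * c_grid)    # expected value under the posterior

With the hypothetical count k = 67, the closed-form estimate is KD·k/(N − k) ≈ 1,015 µM, in line with the 1,000 µM step shown in panel B.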

