Towards a general theory of neural computation based on prediction by single neurons.

Fiorillo CD - PLoS ONE (2008)

Bottom Line: To minimize the error in its predictions and to respond only when excitation is "new and surprising," the neuron selects amongst its prior information sources through an anti-Hebbian rule. The unique inputs of a mature neuron would therefore result from learning about spatial and temporal patterns in its local environment, and by extension, the external world. Thus the theory describes how the structure of the mature nervous system could reflect the structure of the external world, and how the complexity and intelligence of the system might develop from a population of undifferentiated neurons, each implementing similar learning algorithms.

View Article: PubMed Central - PubMed

Affiliation: Department of Neurobiology, Stanford University, Stanford, California, USA. chris@monkeybiz.stanford.edu

ABSTRACT
Although there has been tremendous progress in understanding the mechanics of the nervous system, there has not been a general theory of its computational function. Here I present a theory that relates the established biophysical properties of single generic neurons to principles of Bayesian probability theory, reinforcement learning and efficient coding. I suggest that this theory addresses the general computational problem facing the nervous system. Each neuron is proposed to mirror the function of the whole system in learning to predict aspects of the world related to future reward. According to the model, a typical neuron receives current information about the state of the world from a subset of its excitatory synaptic inputs, and prior information from its other inputs. Prior information would be contributed by synaptic inputs representing distinct regions of space, and by different types of non-synaptic, voltage-regulated channels representing distinct periods of the past. The neuron's membrane voltage is proposed to signal the difference between current and prior information ("prediction error" or "surprise"). A neuron would apply a Hebbian plasticity rule to select those excitatory inputs that are the most closely correlated with reward but are the least predictable, since unpredictable inputs provide the neuron with the most "new" information about future reward. To minimize the error in its predictions and to respond only when excitation is "new and surprising," the neuron selects amongst its prior information sources through an anti-Hebbian rule. The unique inputs of a mature neuron would therefore result from learning about spatial and temporal patterns in its local environment, and by extension, the external world. Thus the theory describes how the structure of the mature nervous system could reflect the structure of the external world, and how the complexity and intelligence of the system might develop from a population of undifferentiated neurons, each implementing similar learning algorithms.
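
The abstract's central claim is concrete enough to sketch in a few lines. The toy below illustrates only the signalling idea, not the paper's equations (which are not reproduced on this page): if prior information supplies a prediction of the current excitatory drive, then a voltage-like quantity v = g1 - g1_hat stays near zero for the predictable part of the input and deviates only at unpredicted events. The input trace, scaling, and variable names are illustrative assumptions.

    import numpy as np

    # Toy illustration of membrane voltage as a prediction error ("surprise")
    # signal. The input trace and all names are invented for illustration; the
    # paper's exact equations are not reproduced on this page.
    rng = np.random.default_rng(0)

    T = 200
    slow = 0.5 + 0.3 * np.sin(np.linspace(0, 4 * np.pi, T))  # predictable drive
    events = rng.random(T) < 0.05                            # rare novel inputs
    g1 = slow + events                                       # layer-1 excitation

    # A well-trained predictive layer would capture the slow component; the
    # residual "voltage" then responds only to the surprising events.
    v = g1 - slow
    print("mean |v| between events:", float(np.abs(v[~events]).mean()))
    print("v at events:", v[events][:3])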


pone-0003298-g001: Schematic illustration of a model neuron. Arrows indicate the direction of information flow. A typical neuron receives inputs from the sensory periphery via glutamate, which depolarizes the membrane potential (“+”). The glutamate-gated ion channels and synapses that mediate this response are referred to as layer 1. They define the neuron's stimulus (the “excitatory center” of its receptive field). The function of layer 1 is to provide current information about the external world. Those individual inputs that are most successful in depolarizing the neuron, and which are most closely correlated with reward, are selected according to a Hebbian or error-maximizing rule (equation 4). The neuron's other ion channels constitute layer 2. The function of layer 2 is to use prior information to predict membrane voltage, and thereby predict the conductance of layer 1 and glutamate concentration as well. The membrane voltage is determined by the difference between the output of layer 1 and its expected output as determined by layer 2 (equation 1), and it therefore functions as a prediction error. In predicting voltage, layer 2 acts to drive voltage towards a point near the middle of its range where the error is zero. The ion channels of layer 2 are selected to perform this function by an anti-Hebbian or error-minimizing rule (equation 3). Many of these ion channels are inhibitory (“−”) and tend to open when the neuron is depolarized, whereas others are excitatory (“+”) and tend to open when the neuron is hyperpolarized. Some are gated by membrane voltage and provide prior temporal information, whereas others are gated by neurotransmitters and contribute prior spatial information.
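
The caption names the roles of equations 1, 3 and 4 without reproducing them here, so the sketch below uses conventional stand-in forms: a delta rule for the anti-Hebbian, error-minimizing layer-2 update, and a reward-gated Hebbian rule for the error-maximizing layer-1 update. The inputs, rates and reward signal are all invented; the point is only that such a pair of rules makes the neuron favour reward-correlated inputs that layer 2 cannot predict away.

    import numpy as np

    # Stand-in forms for the two plasticity rules described in the caption.
    # These are conventional choices matching the described behavior, not the
    # author's exact equations. All parameters and signals are invented.
    rng = np.random.default_rng(1)
    n, T = 6, 50000
    eta1, eta2 = 1e-3, 2e-2
    p = 0.2

    # Binary "glutamate" inputs. Input 0 is reward-correlated and unpredictable;
    # input 1 is reward-correlated but perfectly predictable from its own past
    # (it alternates); inputs 2-5 are unpredictable distractors.
    x = (rng.random((T, n)) < p).astype(float)
    x[:, 1] = np.arange(T) % 2
    reward = x[:, 0] + x[:, 1]

    w1 = np.ones(n) / n          # layer-1 excitatory weights
    w2, b = np.zeros(n), 0.0     # layer-2 predictive weights and bias

    for t in range(1, T):
        g1 = w1 @ x[t]                # current information (layer 1)
        g1_hat = w2 @ x[t - 1] + b    # prior temporal prediction (layer 2)
        v = g1 - g1_hat               # prediction error ("surprise")

        # Anti-Hebbian, error-minimizing update (role of equation 3):
        w2 += eta2 * v * x[t - 1]
        b += eta2 * v

        # Reward-gated Hebbian, error-maximizing update (role of equation 4):
        w1 += eta1 * v * reward[t] * x[t]
        w1 = np.clip(w1, 0.0, None)
        w1 /= w1.sum()

    # Input 0 should dominate; input 1 is discounted because layer 2 learns to
    # predict it away, so it no longer produces "surprising" depolarization.
    print(np.round(w1, 2))

The normalization of w1 is a crude stand-in for whatever homeostatic constraint bounds total excitatory weight; without some such bound, the reward-gated Hebbian term grows without limit.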

Mentions: A neuron's information must be about something, and thus we must first define the “subject” of a neuron's information (what it is that a neuron is predicting). Each neuron possesses information about some aspect of the world that I will define as the neuron's “stimulus.” Although the word “stimulus” is often associated with concrete sensory aspects of the world, I use it here in a broader sense that would also apply to the much more abstract subject matter of the information in a high-level cortical or motor neuron. If a neuron is close to the sensory periphery, then it may be relatively straightforward for us to precisely specify its stimulus. For example, a photoreceptor possesses information about the intensity of light of particular wavelengths in a particular region of space. The stimulus of a neuron further from the sensory periphery is more abstract, and as a practical matter it may be difficult for us to specify precisely. However, although each neuron is presumed to possess information about some aspect of the external world (broadly conceived), a neuron must also possess information about its local environment. The proximal surrogate of a neuron's external stimulus is proposed to be the local concentration of a neurotransmitter summed across a set of individual synapses (Fig. 1). For most neurons this would be an excitatory neurotransmitter such as glutamate. A typical neuron is envisioned as being linked to the sensory periphery through a feed-forward series of excitatory neurons. Thus, by possessing information about local glutamate concentration, a neuron would also possess information about its external stimulus.
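
As a toy check of the "proximal surrogate" claim, one can simulate an external stimulus that reaches the neuron only as glutamate spread over many noisy synapses; the summed local concentration then remains a good stand-in for the external variable. Gains, noise levels and names below are invented for illustration.

    import numpy as np

    # The neuron never sees the external stimulus s directly, only glutamate
    # summed over its synapses. All quantities here are made up.
    rng = np.random.default_rng(2)

    T, n_syn = 1000, 20
    s = rng.random(T)                            # external stimulus intensity
    gains = rng.uniform(0.5, 1.5, n_syn)         # per-synapse coupling to s
    glu = gains * s[:, None] + 0.2 * rng.standard_normal((T, n_syn))
    total_glu = glu.sum(axis=1)                  # what the neuron can sense

    print("corr(stimulus, summed glutamate):",
          np.round(np.corrcoef(s, total_glu)[0, 1], 3))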

