Towards a mathematical theory of cortical micro-circuits.

George D, Hawkins J - PLoS Comput. Biol. (2009)

Bottom Line: Anatomical data provide a contrasting set of organizational constraints. The combination of these two constraints suggests a theoretically derived interpretation for many anatomical and physiological features and predicts several others. We also discuss how the theory and the circuit can be extended to explain cortical features that are not explained by the current model and describe testable predictions that can be derived from the model.

View Article: PubMed Central - PubMed

Affiliation: Numenta Inc., Redwood City, California, United States of America. dgeorge@numenta.com

ABSTRACT
The theoretical setting of hierarchical Bayesian inference is gaining acceptance as a framework for understanding cortical computation. In this paper, we describe how Bayesian belief propagation in a spatio-temporal hierarchical model, called Hierarchical Temporal Memory (HTM), can lead to a mathematical model for cortical circuits. An HTM node is abstracted using a coincidence detector and a mixture of Markov chains. Bayesian belief propagation equations for such an HTM node define a set of functional constraints for a neuronal implementation. Anatomical data provide a contrasting set of organizational constraints. The combination of these two constraints suggests a theoretically derived interpretation for many anatomical and physiological features and predicts several others. We describe the pattern recognition capabilities of HTM networks and demonstrate the application of the derived circuits for modeling the subjective contour effect. We also discuss how the theory and the circuit can be extended to explain cortical features that are not explained by the current model and describe testable predictions that can be derived from the model.
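The abstract's node abstraction — a coincidence detector feeding a mixture of Markov chains — can be illustrated with a minimal sketch of the first stage. This is illustrative Python, not the paper's implementation; the Gaussian match is an assumed similarity measure, and the function and parameter names are hypothetical.

```python
import numpy as np

def coincidence_likelihoods(x, coincidences, sigma=1.0):
    """Sketch of an HTM node's coincidence-detection stage (assumptions:
    a Gaussian match score; names are illustrative, not from the paper).

    x:            length-D bottom-up input vector
    coincidences: C x D array of stored coincidence patterns c_1..c_C
    Returns a length-C vector ~ P(input | c_i) for each stored pattern.
    """
    # squared distance from the input to each stored coincidence
    d = ((coincidences - x) ** 2).sum(axis=1)
    # closer patterns get higher likelihood under the assumed Gaussian match
    return np.exp(-d / (2 * sigma ** 2))
```

These per-coincidence likelihoods are the bottom-up evidence that the node's Markov-chain stage then integrates over time.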


pcbi-1000532-g004: Markov chain likelihood circuit. The circuit for calculating the likelihoods of Markov chains based on a sequence of inputs. In this figure there are five possible bottom-up input patterns (c1–c5) and two Markov chains (g1, g2). The circle neurons represent a specific bottom-up coincidence within a learned Markov chain (two Markov chains are shown, one in blue and one in green). Each rectangular neuron represents the likelihood of an entire Markov chain to be passed to a parent node. This circuit implements the dynamic programming Equation 4 in Table 1.

Mentions: Equation 4 admits a very efficient neuronal implementation, shown in Figure 4. The ‘circle’ neurons in this circuit implement the sequence memory of the Markov chains in the HTM node, and the connections between them implement the transition probabilities of each Markov chain. Because the ‘axons’ between these neurons introduce a one-time-step delay, the output of a circle neuron becomes available at the input of the circle neuron it connects to only on the next time step. (This is a very limited method of representing time; we discuss more sophisticated representations of time in a later section.)
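The delayed recurrence described above is a forward dynamic-programming recursion: each chain's per-coincidence activities are updated from the previous step's activities through that chain's transition matrix, and the 'rectangle' neuron pools them into a single chain likelihood. The sketch below is a hedged illustration of that computation, not the paper's exact Equation 4; the variable names and the sum-pooling at the output are assumptions.

```python
import numpy as np

def chain_likelihoods(evidence, transition, prior):
    """Sketch of the Figure-4 circuit as a forward recursion (illustrative;
    names and sum-pooling are assumptions, not taken from the paper).

    evidence:   T x C array, evidence[t, i] ~ P(input at t | coincidence c_i)
    transition: dict mapping chain g -> C x C matrix, [j, i] ~ P(c_i | c_j, g)
    prior:      dict mapping chain g -> length-C initial state distribution
    Returns a dict mapping each chain g to its likelihood over the sequence.
    """
    likelihoods = {}
    for g, A in transition.items():
        # 'circle neuron' activities: one value per coincidence in chain g
        alpha = prior[g] * evidence[0]
        for t in range(1, len(evidence)):
            # the one-time-step axonal delay: step t reads alpha from t-1
            alpha = evidence[t] * (alpha @ A)
        # 'rectangle neuron': pooled likelihood of chain g, sent to the parent
        likelihoods[g] = alpha.sum()
    return likelihoods
```

For example, with evidence that repeatedly favors the same coincidence, a chain whose transition matrix allows self-continuation retains likelihood, while a chain that forces a transition to a different coincidence is driven to zero.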


Towards a mathematical theory of cortical micro-circuits.

George D, Hawkins J - PLoS Comput. Biol. (2009)

Markov chain likelihood circuit.The circuit for calculating the likelihoods of Markov chains based on a sequence of inputs. In this figure there are five possible bottom-up input patterns (c1–c5) and two Markov chains (g1, g2). The circle neurons represent a specific bottom-up coincidence within a learned Markov chain (two Markov chains are shown, one in blue and one in green). Each rectangular neuron represents the likelihood of an entire Markov chain to be passed to a parent node. This circuit implements the dynamic programming Equation 4 in Table 1.
© Copyright Policy
Related In: Results  -  Collection

Show All Figures
getmorefigures.php?uid=PMC2749218&req=5

pcbi-1000532-g004: Markov chain likelihood circuit.The circuit for calculating the likelihoods of Markov chains based on a sequence of inputs. In this figure there are five possible bottom-up input patterns (c1–c5) and two Markov chains (g1, g2). The circle neurons represent a specific bottom-up coincidence within a learned Markov chain (two Markov chains are shown, one in blue and one in green). Each rectangular neuron represents the likelihood of an entire Markov chain to be passed to a parent node. This circuit implements the dynamic programming Equation 4 in Table 1.
Mentions: Equation 4 can have a very efficient neuronal implementation as shown in Figure 4. The ‘circle’ neurons in this circuit implement the sequence memory of the Markov chains in the HTM node. The connections between the circle neurons implement the transition probabilities of the Markov chain. As the ‘axons’ between these neurons encode a one time-unit delay, the output of a circle neuron is available at the input of the circle neuron that it connects to after one time step. (This is a very limited method of representing time. We will discuss more sophisticated representations of time in a later section.)

Bottom Line: Anatomical data provide a contrasting set of organizational constraints.The combination of these two constraints suggests a theoretically derived interpretation for many anatomical and physiological features and predicts several others.We also discuss how the theory and the circuit can be extended to explain cortical features that are not explained by the current model and describe testable predictions that can be derived from the model.

View Article: PubMed Central - PubMed

Affiliation: Numenta Inc., Redwood City, California, United States of America. dgeorge@numenta.com

ABSTRACT
The theoretical setting of hierarchical Bayesian inference is gaining acceptance as a framework for understanding cortical computation. In this paper, we describe how Bayesian belief propagation in a spatio-temporal hierarchical model, called Hierarchical Temporal Memory (HTM), can lead to a mathematical model for cortical circuits. An HTM node is abstracted using a coincidence detector and a mixture of Markov chains. Bayesian belief propagation equations for such an HTM node define a set of functional constraints for a neuronal implementation. Anatomical data provide a contrasting set of organizational constraints. The combination of these two constraints suggests a theoretically derived interpretation for many anatomical and physiological features and predicts several others. We describe the pattern recognition capabilities of HTM networks and demonstrate the application of the derived circuits for modeling the subjective contour effect. We also discuss how the theory and the circuit can be extended to explain cortical features that are not explained by the current model and describe testable predictions that can be derived from the model.

Show MeSH
Related in: MedlinePlus