Towards a mathematical theory of cortical micro-circuits.

George D, Hawkins J - PLoS Comput. Biol. (2009)

Bottom Line: Anatomical data provide a contrasting set of organizational constraints. The combination of these two constraints suggests a theoretically derived interpretation for many anatomical and physiological features and predicts several others. We also discuss how the theory and the circuit can be extended to explain cortical features that are not explained by the current model and describe testable predictions that can be derived from the model.


Affiliation: Numenta Inc., Redwood City, California, United States of America. dgeorge@numenta.com

ABSTRACT
The theoretical setting of hierarchical Bayesian inference is gaining acceptance as a framework for understanding cortical computation. In this paper, we describe how Bayesian belief propagation in a spatio-temporal hierarchical model, called Hierarchical Temporal Memory (HTM), can lead to a mathematical model for cortical circuits. An HTM node is abstracted using a coincidence detector and a mixture of Markov chains. Bayesian belief propagation equations for such an HTM node define a set of functional constraints for a neuronal implementation. Anatomical data provide a contrasting set of organizational constraints. The combination of these two constraints suggests a theoretically derived interpretation for many anatomical and physiological features and predicts several others. We describe the pattern recognition capabilities of HTM networks and demonstrate the application of the derived circuits for modeling the subjective contour effect. We also discuss how the theory and the circuit can be extended to explain cortical features that are not explained by the current model and describe testable predictions that can be derived from the model.
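The node abstraction named in the abstract (a coincidence detector feeding a mixture of Markov chains) can be illustrated with a toy sketch. Everything below is a simplified illustration, not the paper's actual equations: the sizes, variable names, and random parameters are invented, and only a single bottom-up time step is shown.

```python
import numpy as np

# Toy HTM-node sketch: a coincidence detector followed by a mixture of
# Markov chains (temporal groups). All sizes and parameters are illustrative.
n_inputs, n_coinc, n_groups = 6, 4, 2
rng = np.random.default_rng(1)

# Stored coincidence patterns (binary templates over the input).
coincidences = rng.integers(0, 2, size=(n_coinc, n_inputs)).astype(float)

# transitions[g][i, j] = P(next coincidence i | previous coincidence j)
# within temporal group g; columns are normalized to sum to one.
transitions = rng.random((n_groups, n_coinc, n_coinc))
transitions /= transitions.sum(axis=1, keepdims=True)

def step(prev_belief, x):
    """One bottom-up time step: each Markov chain predicts the next
    coincidence, the prediction is weighted by the current coincidence
    activations, and the node reports one score per temporal group."""
    y = coincidences @ x                              # coincidence detector
    pred = np.einsum('gij,gj->gi', transitions, prev_belief)
    belief = pred * y                                 # prediction x evidence
    return belief, belief.sum(axis=1)                 # per-group likelihood

belief = np.full((n_groups, n_coinc), 1.0 / n_coinc)  # uniform start
x = np.array([1, 0, 1, 0, 1, 1], dtype=float)
belief, group_scores = step(belief, x)
print(group_scores.shape)                             # (2,)
```

The per-group scores are what such a node would pass upward in the hierarchy; the paper's belief propagation equations refine this with proper normalization and feedback terms.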


pcbi-1000532-g011: Top-down segmentation. Figures A and B show the effect of top-down propagation in HTM networks. The top half of each figure shows the original image submitted to the HTM, along with blue bars illustrating the recognition scores for the top five of the eight categories on which the network was trained. The bottom-left panel in each figure shows the input image after Gabor filtering. The bottom-right panel in each figure shows the image obtained after feedback propagation of the winning category from the top of the HTM network. In these Gabor-space images, the colors illustrate different orientations, but the details of the color map are not pertinent. (A) The input image has a car superposed on background clutter. The network recognizes the car, and top-down propagation segments out the car's contours from those of the background. (B) The input image contains multiple objects superposed on a cluttered background, with some foreground occlusions. The network identifies teddy bear as the top category, and feedback propagation of this winning category correctly isolates the contours corresponding to the teddy bear.




Mentions: We have also done experiments using feedback propagation in HTMs. The goal of these experiments was to verify that top-down propagation in HTMs can be used to locate and segment out objects in cluttered scenes with multiple objects. Figure 11 shows the results of inference and top-down propagation in a network that was trained on eight categories of images. During training, the objects were shown in isolation on a clean background. The test images contained multiple novel objects superposed on busy backgrounds. In most cases, one of the objects in the test image was the top result in the inference. Feedback propagation is initiated from the top of the network after the first flow of feed-forward propagation. After bottom-up propagation, the belief vector at the top of the network is modified such that the winning coincidence has strength one and all other coincidences have strength zero. This message is then propagated down in the network by combining with bottom-up information in the rest of the levels of the hierarchy. The resultant image obtained at the lowest level of the network isolates the contours of the recognized image from the background clutter and from other objects in the scene. These experiments show how top-down propagation in the current model can be used for segmentation, for the assignment of border-ownership, and for the ‘binding’ of features corresponding to a top-level hypothesis [62]. More examples of top-down propagation are available at http://www.numenta.com/links/top_down.php
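The feedback procedure described above (force the winning top-level coincidence to strength one, zero out the rest, then propagate down while combining with the stored bottom-up information) can be sketched in miniature. This is a toy two-level linear lambda/pi message-passing sketch with invented likelihood matrices and sizes, not the paper's full belief propagation equations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-level hierarchy. Each matrix holds illustrative
# likelihoods P(lower-level pattern | coincidence at this level).
n_input, n_mid, n_top = 8, 5, 3
P_mid = rng.random((n_mid, n_input))    # level-1 coincidence likelihoods
P_top = rng.random((n_top, n_mid))      # top-level coincidence likelihoods

def bottom_up(evidence):
    """Feed-forward pass: each level scores its coincidences
    against the message arriving from below."""
    lam_mid = P_mid @ evidence          # lambda message from level 1
    lam_top = P_top @ lam_mid           # lambda message at the top
    return lam_mid, lam_top

def top_down(lam_mid, lam_top):
    """Feedback pass: winner-take-all at the top, then propagate down,
    combining with the bottom-up message kept at each level."""
    winner = np.zeros(n_top)
    winner[np.argmax(lam_top)] = 1.0    # winning coincidence -> strength one
    pi_mid = P_top.T @ winner           # pi message sent to level 1
    belief_mid = pi_mid * lam_mid       # combine top-down and bottom-up
    return P_mid.T @ belief_mid         # reconstruction at the input level

evidence = rng.random(n_input)
lam_mid, lam_top = bottom_up(evidence)
reconstruction = top_down(lam_mid, lam_top)
print(reconstruction.shape)             # (8,)
```

In the experiments reported in the paper, the analogue of `reconstruction` is the Gabor-space image at the lowest level, where input features inconsistent with the winning hypothesis are suppressed and the recognized object's contours remain.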

