Pattern activation/recognition theory of mind.

du Castel B - Front Comput Neurosci (2015)

Bottom Line: I have implemented the model as a probabilistic programming language specialized in activation/recognition grammatical and neural operations. I use this prototype to compute and present diagrams for each stochastic grammar and corresponding neural circuit. I then discuss the theory as it relates to artificial network developments, common coding, neural reuse, and unity of mind, concluding by proposing potential paths to validation.


Affiliation: Schlumberger Research, Houston, TX, USA.

ABSTRACT
In his 2012 book How to Create a Mind, Ray Kurzweil defines a "Pattern Recognition Theory of Mind," which states that the brain uses millions of pattern recognizers, plus modules to check, organize, and augment them. In this article, I extend the theory beyond pattern recognition to also include pattern activation, thus encompassing both sensory and motor functions. In addition, I treat checking, organizing, and augmentation as patterns of patterns rather than separate modules, thereby handling them the same as patterns in general. Hence I put forward a unified theory I call the "Pattern Activation/Recognition Theory of Mind." While the original theory was based on hierarchical hidden Markov models, this evolution is based on their precursor: stochastic grammars. I demonstrate that a class of self-describing stochastic grammars unifies pattern activation, recognition, organization, consistency checking, metaphor, and learning into a single theory that expresses patterns throughout. I have implemented the model as a probabilistic programming language specialized in activation/recognition grammatical and neural operations. I use this prototype to compute and present diagrams for each stochastic grammar and its corresponding neural circuit. I then discuss the theory as it relates to artificial network developments, common coding, neural reuse, and unity of mind, concluding by proposing potential paths to validation.
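As a rough illustration of the stochastic grammars the theory builds on, here is a minimal sampler sketch. The encoding, rule names, and probabilities are my own hypothetical choices for illustration; they are not taken from the article or its prototype language.

```python
import random

# Hypothetical stochastic grammar: each nonterminal maps to a list of
# (probability, production) alternatives; anything not in the table is a
# terminal. Names and weights are illustrative only.
GRAMMAR = {
    "A": [(1.0, ["B", "C"])],
    "B": [(0.5, ["square"]), (0.5, ["circle"])],
    "C": [(1.0, ["square"])],
}

def sample(symbol, grammar):
    """Expand a symbol by sampling one weighted alternative per nonterminal."""
    if symbol not in grammar:  # terminal: emit as-is
        return [symbol]
    r = random.random()
    acc = 0.0
    for prob, production in grammar[symbol]:
        acc += prob
        if r <= acc:
            return [tok for s in production for tok in sample(s, grammar)]
    # fall back to the last alternative on floating-point shortfall
    return [tok for s in grammar[symbol][-1][1] for tok in sample(s, grammar)]

print(sample("A", GRAMMAR))
```

Each run yields a two-token sequence whose first token is drawn stochastically, which is the essential property that distinguishes these grammars from their deterministic counterparts.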

No MeSH data available.


Figure 14: Actualization of a metaphor. The first grammar counts two squares, and is described by the second grammar. Each time the first grammar outputs a square, the second grammar transforms it into a circle, thereby allowing counting circles instead of squares. In the second neural circuit, the left part is the original square-counting circuit, the middle part activates the square-counting circuit, and the right part transforms counting squares into counting circles.

Mentions: Regarding metaphors, a pattern can be described by another one, and the describing grammar can also perform other operations, a feature I have already used for swarms. For the sake of presentation, I consider a very simple pattern, that of counting two squares, performed by the grammar “A = B C. B = DrawSquare. C = DrawSquare.” This grammar is described by the grammar “A = B C D. B = QuoteAE. E = QuoteB QuoteC. C = QuoteBF. F = DrawSquare. D = QuoteCG. G = DrawSquare.” The describing grammar can be augmented to draw a circle each time it detects a square, with “A = B C D. B = QuoteAE. E = QuoteB QuoteC. C = QuoteBF. F = UnquoteDrawSquare DrawCircle. D = QuoteCG. G = UnquoteDrawSquare DrawCircle.” The unquote operator is the reverse of the quote operator. With the new rules, instead of producing a square, the described grammar forwards the drawing operation to the describing grammar, which then produces a circle. In other words, the describing grammar retargets the counting pattern from one domain (squares) to another (circles), which is the essence of metaphors, defined as “a cross-domain mapping in the conceptual system” (Lakoff, 1979). Of course, this illustrates only the basic mechanism on which metaphors rely; I have published a more complete account elsewhere with Yi Mao, using Montague grammars (Montague, 1974; du Castel and Mao, 2006) (Figure 14).
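The quote/unquote retargeting described above can be sketched as a toy in a few lines. This is a minimal deterministic Python model under my own assumptions: the grammar is encoded as plain lists, and the article's Quote/Unquote operators are approximated by a simple interception mapping rather than reproduced exactly.

```python
# The described (base) grammar: "A = B C. B = DrawSquare. C = DrawSquare."
# Encoded as nonterminal -> production; anything absent is a terminal action.
BASE = {"A": ["B", "C"], "B": ["DrawSquare"], "C": ["DrawSquare"]}

def expand(symbol, grammar):
    """Fully expand a nonterminal into its sequence of terminal actions."""
    if symbol not in grammar:
        return [symbol]
    out = []
    for s in grammar[symbol]:
        out += expand(s, grammar)
    return out

def retarget(actions, mapping):
    """Stand-in for the describing grammar's role: each intercepted action is
    forwarded through a cross-domain mapping (squares -> circles); actions
    outside the mapping pass through unchanged."""
    return [mapping.get(a, a) for a in actions]

squares = expand("A", BASE)                                  # counts two squares
circles = retarget(squares, {"DrawSquare": "DrawCircle"})    # now counts circles
print(circles)  # ['DrawCircle', 'DrawCircle']
```

The counting structure (two of something) is untouched; only the domain of the terminal actions changes, which is the cross-domain mapping the paragraph identifies as the essence of metaphor.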

