Pattern activation/recognition theory of mind.

du Castel B - Front Comput Neurosci (2015)

Bottom Line: I have implemented the model as a probabilistic programming language specialized in activation/recognition grammatical and neural operations. I use this prototype to compute and present diagrams for each stochastic grammar and corresponding neural circuit. I then discuss the theory as it relates to artificial network developments, common coding, neural reuse, and unity of mind, concluding by proposing potential paths to validation.

Affiliation: Schlumberger Research, Houston, TX, USA.

ABSTRACT
In his 2012 book How to Create a Mind, Ray Kurzweil defines a "Pattern Recognition Theory of Mind" that states that the brain uses millions of pattern recognizers, plus modules to check, organize, and augment them. In this article, I extend the theory beyond pattern recognition to also include pattern activation, thus encompassing both sensory and motor functions. In addition, I treat checking, organizing, and augmentation as patterns of patterns rather than separate modules, thereby handling them the same as patterns in general. Hence I put forward a unified theory I call "Pattern Activation/Recognition Theory of Mind." While the original theory was based on hierarchical hidden Markov models, this evolution is based on their precursor: stochastic grammars. I demonstrate that a class of self-describing stochastic grammars allows for unifying pattern activation, recognition, organization, consistency checking, metaphor, and learning into a single theory that expresses patterns throughout. I have implemented the model as a probabilistic programming language specialized in activation/recognition grammatical and neural operations. I use this prototype to compute and present diagrams for each stochastic grammar and corresponding neural circuit. I then discuss the theory as it relates to artificial network developments, common coding, neural reuse, and unity of mind, concluding by proposing potential paths to validation.

Figure 7: Varying terminals. The first grammar and its neural circuit produce binary digit sequences. The second grammar is the same as the first except for its terminals: instead of producing digits 0 and 1, it recognizes squares and produces circles. It follows the same pattern, with production of 0 replaced by recognition of a square, and production of 1 by production of a circle. As discussed further down, this constitutes a metaphor, applying a set pattern (expressed by the non-terminal part of the grammar) from one domain (digits) to another (geometrical figures) by varying the terminals.

Mentions: Turing's infinite sequence is actually an enumeration of the natural number set, perhaps not a biological object. However, recognition of numbers is certainly a common and early human activity (Libertus et al., 2011). Grammar “A = B / C. B = 0/1. C = B A.” expresses any binary digit sequence; the first rule chooses between finishing a sequence and pursuing it, the second rule produces digits, and the third rule extends sequences. Following the same pattern, but with different terminals, grammar “A = B / C. B = DrawSquare/DrawCircle. C = B A.” produces sequences of squares and circles that map the production of digits, grammar “A = B / C. B = SpotSquare/SpotCircle. C = B A.” recognizes such a sequence, and grammar “A = B / C. B = SpotSquare/DrawCircle. C = B A.” mixes identifying squares and producing circles. The non-terminal part of the grammar, common to digits and geometrical figures, expresses a metaphor from one domain (digits) to another (figures), a subject I will return to further on (Figure 7).
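To make these grammars concrete, below is a minimal sketch of a stochastic grammar sampler in Python. This is an illustration, not the author's prototype language: rule alternatives are chosen uniformly at random (a full stochastic grammar would attach learned probabilities to each alternative), and terminals are callables so the same non-terminal skeleton can be reused with digit, drawing, or spotting terminals, in the spirit of Figure 7's varying terminals.

import random

def sample(grammar, symbol, out, depth=0, max_depth=10):
    # Terminals are callables: "perform" the terminal and record its result.
    if callable(symbol):
        out.append(symbol())
        return
    # Non-terminals: pick one alternative right-hand side at random.
    # Past max_depth, take the first alternative (the finishing rule)
    # so that sampled sequences terminate.
    alternatives = grammar[symbol]
    rhs = alternatives[0] if depth >= max_depth else random.choice(alternatives)
    for s in rhs:
        sample(grammar, s, out, depth + 1, max_depth)

# Grammar "A = B / C. B = 0/1. C = B A." -- binary digit sequences.
digits = {
    "A": [["B"], ["C"]],                  # finish, or pursue the sequence
    "B": [[lambda: "0"], [lambda: "1"]],  # produce a digit
    "C": [["B", "A"]],                    # extend the sequence
}

# Same non-terminal skeleton with mixed terminals, as in
# "A = B / C. B = SpotSquare/DrawCircle. C = B A."
figures = {
    "A": [["B"], ["C"]],
    "B": [[lambda: "SpotSquare"], [lambda: "DrawCircle"]],
    "C": [["B", "A"]],
}

for grammar in (digits, figures):
    seq = []
    sample(grammar, "A", seq)
    print(seq)

Swapping the terminal callables while keeping the non-terminal rules fixed is all it takes to move from the digit domain to the figure domain, which is the sense in which the shared non-terminal structure expresses a metaphor.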

