Neurocognitive mechanisms of statistical-sequential learning: what do event-related potentials tell us?

Daltrozzo J, Conway CM - Front Hum Neurosci (2014)

Bottom Line: The underlying neurocognitive mechanisms of SL and the associated cognitive representations are still not well understood, as reflected by the heterogeneity of the reviewed cognitive models. The review is articulated around three descriptive dimensions in relation to SL: the level of abstractness of the representations learned through SL, the effect of the level of attention and consciousness on SL, and the developmental trajectory of SL across the life-span. We conclude with a new tentative model that takes these three dimensions into account and also point to several promising new lines of SL research.


Affiliation: Department of Psychology, Georgia State University, Atlanta, GA, USA.

ABSTRACT
Statistical-sequential learning (SL) is the ability to process patterns of environmental stimuli that unfold in time, such as spoken language, music, or one's own motor actions. The underlying neurocognitive mechanisms of SL and the associated cognitive representations are still not well understood, as reflected by the heterogeneity of the reviewed cognitive models. The purpose of this review is threefold: (1) to provide a general overview of the primary models and theories of SL, (2) to describe the empirical research - with a focus on the event-related potential (ERP) literature - in support of these models while also highlighting the current limitations of this research, and (3) to present a set of new lines of ERP research to overcome these limitations. The review is articulated around three descriptive dimensions in relation to SL: the level of abstractness of the representations learned through SL, the effect of the level of attention and consciousness on SL, and the developmental trajectory of SL across the life-span. We conclude with a new tentative model that takes these three dimensions into account and also point to several promising new lines of SL research.




Figure 7: Example of an artificial grammar in the visual domain. The algorithm describes the rules of the artificial grammar, that is, the set of possible sequences of stimuli (in this case, colored squares) that are valid according to the rules of the grammar. Examples of valid sequences (i.e., grammatical sequences containing no syntactic violations) are presented at the bottom of the figure, circled in dark. Examples of non-grammatical sequences (containing syntactic violations) are also presented, circled in red.

Mentions: Artificial grammar learning (AGL) paradigms, which incorporate a set of rules that govern the structure of sequences (Figure 7), have been designed to mimic the complex structure of natural language while removing other potentially confounding parameters such as semantic information. Converging evidence suggests that this experimental design is a good model for testing the grammatical and structural processing of natural language (for a review, see Christiansen et al., 2002). It should be noted that the AGL paradigms used in ERP research often incorporate aspects of the SRT paradigm described above (Nissen and Bullemer, 1987). In such a combined SRT-AGL task, the rules of an artificial grammar determine which stimulus occurs next in the sequence.
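To make the logic of the paradigm concrete, below is a minimal sketch of how an artificial grammar can generate the kinds of grammatical and non-grammatical sequences illustrated in Figure 7. The figure's actual grammar is not reproduced in the text, so the transition table and all names here (COLORS, GRAMMAR, generate_grammatical, introduce_violation) are hypothetical illustrations, not the authors' materials.

```python
import random

# Hypothetical stimulus set and finite-state grammar (not the grammar from
# Figure 7): each state maps to the (stimulus, next-state) transitions the
# grammar licenses from that state.
COLORS = {"red", "blue", "green", "yellow"}
GRAMMAR = {
    "S0": [("red", "S1"), ("blue", "S2")],
    "S1": [("green", "S2"), ("red", "S3")],
    "S2": [("yellow", "S3"), ("blue", "S1")],
    "S3": [],  # terminal state: a sequence may end here
}

def generate_grammatical(max_len=8):
    # Walk the grammar from the start state, emitting one stimulus per
    # transition; record the state active before each emission so that a
    # later violation can be checked against what was legal at that point.
    state, seq, states = "S0", [], []
    while GRAMMAR[state] and len(seq) < max_len:
        states.append(state)
        stim, state = random.choice(GRAMMAR[state])
        seq.append(stim)
    return seq, states

def introduce_violation(seq, states):
    # Replace one stimulus with a color the grammar does not license at
    # that position, producing a syntactic violation as in Figure 7.
    seq = list(seq)
    i = random.randrange(len(seq))
    legal = {stim for stim, _ in GRAMMAR[states[i]]}
    seq[i] = random.choice(sorted(COLORS - legal))
    return seq

grammatical, states = generate_grammatical()
ungrammatical = introduce_violation(grammatical, states)
print("grammatical:  ", grammatical)
print("ungrammatical:", ungrammatical)
```

Recording the state that was active before each stimulus makes it straightforward to guarantee that the substituted stimulus is genuinely illegal at that position, mirroring the grammatical vs. non-grammatical contrast that ERP studies use to elicit violation responses.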

