Chunking or not chunking? How do we find words in artificial language learning?

Franco A, Destrebecqz A - Adv Cogn Psychol (2012)

Bottom Line: Our results indicate that the nature of the representations depends on the learning condition. When cues were presented to facilitate parsing of the sequence, participants learned the words of the artificial language. However, when no cues were provided, performance was strongly influenced by the transitional probabilities employed.


Affiliation: Cognition, Consciousness, and Computation Group, Université Libre de Bruxelles, Belgium.

ABSTRACT
What is the nature of the representations acquired in implicit statistical learning? Recent results in the field of language learning have shown that adults and infants are able to find the words of an artificial language when exposed to a continuous auditory sequence consisting of a random ordering of these words. Such performance can only be based on processing the transitional probabilities between sequence elements. Two different kinds of mechanisms may account for these data: Participants may either parse the sequence into smaller chunks corresponding to the words of the artificial language, or they may become progressively sensitive to the actual values of the transitional probabilities between syllables. The two accounts are difficult to differentiate because they make similar predictions in comparable experimental settings. In this study, we present two experiments that aimed to contrast these two theories. In these experiments, participants had to learn two sets of pseudo-linguistic regularities: Language 1 (L1) and Language 2 (L2), presented in the context of a serial reaction time task. L1 and L2 were either unrelated (none of the syllabic transitions of L1 were present in L2) or partly related (some of the intra-word transitions of L1 were used as inter-word transitions of L2). The two accounts make opposite predictions in these two settings. Our results indicate that the nature of the representations depends on the learning condition. When cues were presented to facilitate parsing of the sequence, participants learned the words of the artificial language. However, when no cues were provided, performance was strongly influenced by the transitional probabilities employed.
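The transitional-probability computation the abstract refers to can be sketched as follows. The syllables and three-syllable words below are invented for illustration, not the study's actual stimuli: a continuous stream is built from a random ordering of the words, and the forward transitional probability P(next | current) is estimated from bigram counts.

```python
from collections import Counter
import random

# Hypothetical three-syllable words of an artificial language
# (illustrative only, not the stimuli used in the study).
words = [["tu", "pi", "ro"], ["go", "la", "bu"], ["bi", "da", "ku"]]

# Build a continuous stream as a random ordering of the words.
random.seed(0)
stream = [syll for _ in range(300) for syll in random.choice(words)]

# Forward transitional probability:
# P(next | current) = count(current -> next) / count(current).
pair_counts = Counter(zip(stream, stream[1:]))
syll_counts = Counter(stream[:-1])
tp = {(a, b): n / syll_counts[a] for (a, b), n in pair_counts.items()}

# Within-word transitions (e.g., "tu" -> "pi") have TP = 1.0, whereas
# between-word transitions (word-final -> word-initial, e.g., "ro" ->
# "go") are lower (about 1/3 here). This TP drop at word boundaries is
# the statistical cue that supports segmentation.
within_word = tp[("tu", "pi")]
between_word = tp.get(("ro", "go"), 0.0)
```

Under both accounts contrasted in the study, learners are sensitive to this TP structure; they differ in whether the product of learning is a chunked word representation or the graded TP values themselves.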



Figure 3: Mean reaction times (RTs) for the unpredictable element (Element 1) and the predictable elements (Elements 2 and 3) during Language 1 (L1) and Language 2 (L2) blocks, averaged over experimental and control conditions (left panel). Mean percentage of correct responses in the recognition task for words, non-words, and part-words in the control and experimental conditions (right panel). Chance level = 50%.

Mentions: Figure 3 (left panel) shows the average RTs obtained over the entire experiment, plotted separately for each element of the sequences. As in Experiment 1, the control and experimental conditions were pooled because performance did not differ between them, F(1, 8) = 1.114, p > .1, for L1, and F(1, 8) = 0.042, p > .5, for L2. The results clearly indicate that RTs were strongly influenced by position: RTs decreased more and were faster for predictable elements than for unpredictable elements.

