Chunking or not chunking? How do we find words in artificial language learning?

Franco A, Destrebecqz A - Adv Cogn Psychol (2012)

Bottom Line: Our results indicate that the nature of the representations depends on the learning condition. When cues were presented to facilitate parsing of the sequence, participants learned the words of the artificial language. However, when no cues were provided, performance was strongly influenced by the employed transitional probabilities.

Affiliation: Cognition, Consciousness, and Computation Group, Université Libre de Bruxelles, Belgium.

ABSTRACT
What is the nature of the representations acquired in implicit statistical learning? Recent results in the field of language learning have shown that adults and infants are able to find the words of an artificial language when exposed to a continuous auditory sequence consisting of a random ordering of these words. Such performance can only be based on processing the transitional probabilities between sequence elements. Two different kinds of mechanisms may account for these data: Participants may either parse the sequence into smaller chunks corresponding to the words of the artificial language, or they may become progressively sensitive to the actual values of the transitional probabilities between syllables. The two accounts are difficult to differentiate because they make similar predictions in comparable experimental settings. In this study, we present two experiments aimed at contrasting these two theories. In these experiments, participants had to learn two sets of pseudo-linguistic regularities: Language 1 (L1) and Language 2 (L2), presented in the context of a serial reaction time task. L1 and L2 were either unrelated (none of the syllabic transitions of L1 were present in L2) or partly related (some of the intra-word transitions of L1 were used as inter-word transitions of L2). The two accounts make opposite predictions in these two settings. Our results indicate that the nature of the representations depends on the learning condition. When cues were presented to facilitate parsing of the sequence, participants learned the words of the artificial language. However, when no cues were provided, performance was strongly influenced by the employed transitional probabilities.
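To make the statistical cue concrete: the transitional probability of element B given element A is TP(B | A) = frequency(AB) / frequency(A). In a stream built by randomly concatenating words, transitions inside a word occur reliably, while transitions at a word boundary depend on which word happens to come next, so TPs are high within words and low between them. The following minimal Python sketch illustrates this logic with made-up trisyllabic words (hypothetical labels, not the study's actual stimuli):

    import random
    from collections import Counter

    # Toy artificial language: four made-up words (hypothetical syllables,
    # not the stimuli used in the study).
    words = [["tu", "pi", "ro"], ["go", "la", "bu"],
             ["bi", "da", "ku"], ["pa", "do", "ti"]]

    # Continuous stream: a random ordering of the words, no pauses or cues.
    random.seed(0)
    stream = [syl for _ in range(1000) for syl in random.choice(words)]

    # TP(B | A) = frequency(AB) / frequency(A), estimated from the stream.
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    tp = {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

    print(tp[("tu", "pi")])  # within a word: 1.0
    print(tp[("ro", "go")])  # across a word boundary: about 0.25

Note that a chunking learner and a TP learner both predict successful segmentation here; the two accounts only come apart when, as in the experiments below, previously learned intra-word transitions are re-used across word boundaries.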

Figure 1: Percentage of participants categorized as “verbalizers” in the five experimental groups with randomized and systematic training. In the control condition, Language 1 (L1) and Language 2 (L2) are unrelated. In the experimental condition, some of L1 “intra-word” transitions become L2 “inter-word” transitions.

Mentions: To contrast the predictions of chunking and transition-finding strategies, we used a 12-choice SRT task in which the succession of the visual targets implemented statistical regularities similar to those found in artificial languages. We chose to use a visuomotor task instead of presenting the artificial language in the auditory modality in order to be able to track the development of statistical learning through reaction times (see Misyak, Christiansen, & Tomblin, 2010, for a recent similar attempt; see also Conway & Christiansen, 2009, for a systematic comparison between the auditory and visual modalities). In our version of the task, participants had to learn two different artificial languages presented successively. In our experiments, the first “language” (L1) was composed of four “words”, or small two-element sequences, and the second “language” (L2) was composed of four small three-element sequences. In one (control) condition, the two ensembles were not related to each other, but in the other (experimental) condition, the intra-sequence transitions of L1 became inter-sequence transitions in L2 (see Figure 1 and Table 1).
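As a rough sketch of that design (the element labels below are hypothetical stand-ins for the twelve SRT target locations; the paper's actual assignments are given in its Table 1), one can lay out L1 and the two versions of L2 so that only the experimental L2 lets L1's intra-word transitions reappear across word boundaries:

    # Hypothetical element labels standing in for the 12 SRT target locations.
    L1_words = [("A", "B"), ("C", "D"), ("E", "F"), ("G", "H")]
    L1_pairs = set(L1_words)

    # Control condition: no L2 transition, within or between words,
    # reproduces an L1 pair.
    L2_control = [("A", "C", "E"), ("B", "D", "F"),
                  ("G", "I", "K"), ("H", "J", "L")]

    # Experimental condition: an L2 word ending in "A" followed by one
    # starting in "B" recreates the L1 word A-B across the word boundary.
    L2_exp = [("I", "J", "A"), ("B", "K", "C"),
              ("D", "L", "E"), ("F", "H", "G")]

    def boundary_l1_hits(l2_words):
        """Count possible L2 word-boundary transitions matching an L1 word."""
        ends = [w[-1] for w in l2_words]
        starts = [w[0] for w in l2_words]
        return sum((e, s) in L1_pairs for e in ends for s in starts)

    print(boundary_l1_hits(L2_control))  # 0: L1 knowledge cannot transfer
    print(boundary_l1_hits(L2_exp))      # 3: A->B, C->D, E->F span boundaries

If participants stored L1's words as chunks, such recurring pairs should tend to be parsed as units and distort the segmentation of L2; if they stored the raw transitional probabilities, the high TP values should simply transfer. That divergence is what the control and experimental conditions are designed to expose.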

