Discriminating Non-native Vowels on the Basis of Multimodal, Auditory or Visual Information: Effects on Infants' Looking Patterns and Discrimination.

Ter Schure S, Junge C, Boersma P - Front Psychol (2016)

Bottom Line: This study tested whether infants' phonological perception is shaped by distributions of visual speech as well as by distributions of auditory speech, by comparing learning from multimodal (i.e., auditory-visual), visual-only, or auditory-only information. We used eye tracking to measure effects of distribution and sensory modality on infants' discrimination of the contrast. We propose that by 8 months, infants' native vowel categories are established to the extent that learning a novel contrast is supported by attention to additional information, such as visual articulations.


Affiliation: Linguistics, University of Amsterdam, Amsterdam, Netherlands.

ABSTRACT
Infants' perception of speech sound contrasts is modulated by their language environment, for example by the statistical distributions of the speech sounds they hear. Infants learn to discriminate speech sounds better when their input contains a two-peaked frequency distribution of those speech sounds than when their input contains a one-peaked frequency distribution. Effects of frequency distributions on phonetic learning have been tested almost exclusively for auditory input. But auditory speech is usually accompanied by visual information, that is, by visible articulations. This study tested whether infants' phonological perception is shaped by distributions of visual speech as well as by distributions of auditory speech, by comparing learning from multimodal (i.e., auditory-visual), visual-only, or auditory-only information. Dutch 8-month-old infants were exposed to either a one-peaked or two-peaked distribution from a continuum of vowels that formed a contrast in English, but not in Dutch. We used eye tracking to measure effects of distribution and sensory modality on infants' discrimination of the contrast. Although there were no overall effects of distribution or modality, separate t-tests in each of the six training conditions demonstrated significant discrimination of the vowel contrast in the two-peaked multimodal condition. For the modalities in which the mouth was visible (visual-only and multimodal), we further examined infants' looking patterns to the dynamic speaker's face. Infants in the two-peaked multimodal condition looked longer at her mouth than infants in any of the three other conditions. We propose that by 8 months, infants' native vowel categories are established to the extent that learning a novel contrast is supported by attention to additional information, such as visual articulations.





Figure 3: The four regions of interest: eyes, mouth, rest of the face, rest of the screen.

Mentions: To investigate infants' looking behavior over the course of training, we assigned the location of each gaze sample to one of the ROIs shown in Figure 3: the mouth, the eyes, the rest of the face, and the rest of the screen. For each training block separately, we then calculated the proportion of looking time spent on the mouth and eyes relative to the total face area. For each ROI, we performed a repeated-measures analysis of variance on these proportions across training, with Training Block (1 or 2) as a within-subjects factor and Modality (multimodal or visual-only) and Distribution (one- or two-peaked) as between-subjects factors. One infant from the two-peaked visual training group was excluded from these analyses because this child did not fixate on the face during the second block of training.
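The ROI-assignment and proportion step described above can be sketched in code. This is a hypothetical illustration, not the authors' analysis pipeline: the ROI rectangles, coordinates, and gaze samples are invented, and ROIs are treated as simple screen-pixel bounding boxes with the eyes and mouth checked before the surrounding face region.

```python
# Hedged sketch: classify gaze samples into four ROIs (eyes, mouth, rest of
# the face, rest of the screen) and compute, per training block, the
# proportion of face-looking samples that fell on the mouth and on the eyes.
# All rectangles and data below are illustrative assumptions.

ROIS = {  # (x_min, y_min, x_max, y_max) in screen pixels -- invented values
    "eyes":  (350, 150, 650, 250),
    "mouth": (400, 350, 600, 450),
    "face":  (300, 100, 700, 500),  # whole face; "rest of face" = face minus eyes/mouth
}

def classify(x, y):
    """Return the ROI label for a single gaze sample."""
    for label in ("eyes", "mouth"):  # check the sub-regions before the whole face
        x0, y0, x1, y1 = ROIS[label]
        if x0 <= x <= x1 and y0 <= y <= y1:
            return label
    x0, y0, x1, y1 = ROIS["face"]
    if x0 <= x <= x1 and y0 <= y <= y1:
        return "rest_of_face"
    return "rest_of_screen"

def face_proportions(samples):
    """Proportion of total face-looking samples spent on the mouth and eyes.

    Returns None when no sample landed on the face (cf. the excluded infant
    who did not fixate the face during the second training block).
    """
    counts = {"eyes": 0, "mouth": 0, "rest_of_face": 0, "rest_of_screen": 0}
    for x, y in samples:
        counts[classify(x, y)] += 1
    face_total = counts["eyes"] + counts["mouth"] + counts["rest_of_face"]
    if face_total == 0:
        return None
    return {"mouth": counts["mouth"] / face_total,
            "eyes": counts["eyes"] / face_total}

# Example block: three samples on the mouth, one on the eyes, one elsewhere
# on the face -> mouth proportion 0.6, eyes proportion 0.2.
block = [(500, 400), (450, 380), (550, 420), (500, 200), (320, 120)]
print(face_proportions(block))
```

These per-block proportions would then be the dependent variable in the repeated-measures ANOVA, with block as the within-subjects factor and modality and distribution as between-subjects factors.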
