Emergence of category-level sensitivities in non-native speech sound learning.

Myers EB - Front Neurosci (2014)

Bottom Line: First, where in the neural pathway does sensitivity to category-level phonetic information first emerge over the trajectory of speech sound learning? Second, how do frontal and temporal brain areas work in concert over the course of non-native speech sound learning? Finally, in the context of this literature I will describe a model of speech sound learning in which rapidly-adapting access to categorical information in the frontal lobes modulates the sensitivity of stable, slowly-adapting responses in the temporal lobes.

View Article: PubMed Central - PubMed

Affiliation: Department of Speech, Language, and Hearing Sciences, University of Connecticut, Storrs, CT, USA; Department of Psychology, University of Connecticut, Storrs, CT, USA; Haskins Laboratories, New Haven, CT, USA.

ABSTRACT
Over the course of development, speech sounds that are contrastive in one's native language tend to become perceived categorically: that is, listeners are unaware of variation within phonetic categories while showing excellent sensitivity to speech sounds that span linguistically meaningful phonetic category boundaries. The end stage of this developmental process is that the perceptual systems that handle acoustic-phonetic information show special tuning to native language contrasts, and as such, category-level information appears to be present at even fairly low levels of the neural processing stream. Research on adults acquiring non-native speech categories offers an avenue for investigating the interplay of category-level information and perceptual sensitivities to these sounds as speech categories emerge. In particular, one can observe the neural changes that unfold as listeners learn not only to perceive acoustic distinctions that mark non-native speech sound contrasts, but also to map these distinctions onto category-level representations. An emergent literature on the neural basis of novel and non-native speech sound learning offers new insight into this question. In this review, I will examine this literature in order to answer two key questions. First, where in the neural pathway does sensitivity to category-level phonetic information first emerge over the trajectory of speech sound learning? Second, how do frontal and temporal brain areas work in concert over the course of non-native speech sound learning? Finally, in the context of this literature I will describe a model of speech sound learning in which rapidly-adapting access to categorical information in the frontal lobes modulates the sensitivity of stable, slowly-adapting responses in the temporal lobes.



Figure 2: Neural systems for the perception and learning of speech sound categories. Fine-grained sensitivity to acoustic dimensions that distinguish native speech sounds (e.g., VOT) is found in the posterior superior temporal gyrus (pSTG) and superior temporal sulcus (STS), which includes preferential sensitivity to speech categories, but, to a lesser degree, also sensitivity to within-category variation. In perception, sounds which are not well-categorized by this tuning (e.g., ambiguous sounds) feed forward to categorical-level coding in the frontal lobe (1). For non-native category learning which relies on top-down feedback, category sensitivities may emerge first in the frontal lobe, then feed back to posterior temporal areas to guide long-term changes in perceptual sensitivity (2). This frontal-to-temporal feedback corresponds to the top-down learning route shown in the bottom left portion of Figure 1.

Mentions: Whether the codes accessed in the inferior frontal lobes are articulatory or abstract in nature, evidence suggests that coding in the left prefrontal areas is more categorical than that represented in the temporal lobe. This suggests an architecture whereby fine-grained acoustic-phonetic details of the speech stream are processed in the left STG/STS, and this information is then projected forward to prefrontal regions, where it is evaluated against categorical-level codes in a complex of frontal areas (Figure 2).
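To make the two routes in Figure 2 concrete, the following is a minimal toy simulation (Python/NumPy) of the kind of architecture described: a rapidly-adapting "frontal" categorical readout that supervises a slowly-adapting bank of "temporal" tuning curves along a single acoustic dimension (e.g., a VOT-like continuum). Every specific here, including the logistic readout, the Gaussian tuning bank, the learning rates, and the 0.5 category boundary, is an illustrative assumption and not the author's model or any fitted parameters.

```python
# Toy sketch (not from Myers, 2014): fast "frontal" categorical learning
# feeding back to slowly-adapting "temporal" tuning on a 0-1 acoustic continuum.
import numpy as np

rng = np.random.default_rng(0)

# "Temporal" layer: Gaussian tuning curves tiling the acoustic dimension.
centers = np.linspace(0.0, 1.0, 20)      # preferred stimulus values
widths = np.full_like(centers, 0.25)     # broad tuning before learning

def temporal_response(x):
    """Population response of the temporal layer to stimulus x."""
    return np.exp(-0.5 * ((x - centers) / widths) ** 2)

# "Frontal" layer: a logistic unit reading out the temporal population.
w = np.zeros_like(centers)
b = 0.0

def frontal_prob(resp):
    """P(category B | temporal population response)."""
    return 1.0 / (1.0 + np.exp(-(resp @ w + b)))

def boundary_sensitivity():
    """Temporal-layer discrimination across vs. within the category boundary."""
    across = np.linalg.norm(temporal_response(0.45) - temporal_response(0.55))
    within = np.linalg.norm(temporal_response(0.25) - temporal_response(0.35))
    return across - within

ETA_FRONTAL = 0.5      # rapid adaptation of the categorical readout
ETA_TEMPORAL = 0.005   # slow, feedback-driven sharpening of temporal tuning

for trial in range(2000):
    x = rng.uniform(0.0, 1.0)            # stimulus on the continuum
    label = float(x > 0.5)               # non-native category (B if x > 0.5)
    resp = temporal_response(x)
    p = frontal_prob(resp)

    # (1) Feed-forward: frontal readout adapts quickly to categorize the input.
    err = label - p
    w += ETA_FRONTAL * err * resp
    b += ETA_FRONTAL * err

    # (2) Feedback: confident categorical signals slowly sharpen the tuning of
    # temporal units whose preferred values lie near the category boundary.
    confidence = abs(p - 0.5) * 2.0
    near_boundary = np.exp(-0.5 * ((centers - 0.5) / 0.15) ** 2)
    widths -= ETA_TEMPORAL * confidence * near_boundary * (widths - 0.10)

    if trial % 500 == 0:
        print(f"trial {trial:4d}  frontal P(correct)={1.0 - abs(err):.2f}  "
              f"temporal boundary sensitivity={boundary_sensitivity():+.3f}")
```

Run as written, the printout should show frontal categorization accuracy rising within a few hundred trials while the temporal boundary-sensitivity index climbs much more gradually, mirroring the fast-frontal, slow-temporal division of labor sketched in the figure caption.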

