Attentional influences on functional mapping of speech sounds in human auditory cortex.

Obleser J, Elbert T, Eulitz C - BMC Neurosci (2004)



Affiliation: Department of Psychology, University of Konstanz, Germany. jonas.obleser@uni-konstanz.de

ABSTRACT

Background: The speech signal contains both phonological information, such as place of articulation, and non-phonological information, such as speaker identity. These are different aspects of the 'what'-processing stream (speaker vs. speech content), and here we show that they can be further segregated, as they may be processed in parallel but within different neural substrates. Subjects listened to two different vowels, each spoken by two different speakers. During one block, they were asked to identify a given vowel irrespective of the speaker (phonological categorization), while during the other block the speaker had to be identified irrespective of the vowel (speaker categorization). Auditory evoked fields were recorded using 148-channel magnetoencephalography (MEG), and magnetic source imaging was obtained for 17 subjects.

Results: During phonological categorization, a vowel-dependent difference in N100m source location perpendicular to the main tonotopic gradient replicated previous findings. In speaker categorization, the relative mapping of vowels remained unchanged, but sources were shifted towards more posterior and more superior locations.

Conclusions: These results imply that the N100m reflects the extraction of abstract invariants from the speech signal. This part of the processing is accomplished in auditory areas anterior to AI, which are part of the auditory 'what' system. This network appears to include spatially separable modules for identifying the phonological information and for associating it with a particular speaker; these modules are activated in synchrony but within different regions, suggesting that 'what' processing is more adequately modeled as a stream of parallel stages. The relative activation of these parallel processing stages can be modulated by attentional or task demands.


Figure 2: Grand average (N = 21) of root mean squared amplitudes over time for all conditions, shown separately for the left (upper panel) and right (lower panel) hemispheres. The N100m is clearly the most prominent waveform deflection, and the repeatedly reported N100m latency difference between the coronal vowel [ø] (black) and the dorsal vowel [o] (gray) is also evident.

Mentions: In 21 of 22 subjects, a clear waveform deflection around 100 ms after vowel onset was observed in all conditions over both hemispheres (Fig. 2), and the sensor-space parameters peak latency and peak amplitude were obtained. Satisfying and physiologically plausible dipole fits (see Methods) could be obtained in both hemispheres for 17 subjects and were subjected to statistical analysis.
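The root mean squared (RMS) amplitude plotted in Figure 2 and the peak measures mentioned above are simple to compute from epoched MEG data. The following is a minimal sketch, not the authors' analysis pipeline; the array names, shapes, channel counts, and the 80-140 ms search window are all assumptions introduced here for illustration, using only NumPy.

```python
import numpy as np

# Hypothetical shapes:
#   evoked : (n_subjects, n_channels, n_times)  averaged evoked fields, one hemisphere
#   times  : (n_times,) in seconds, 0 = vowel onset

def grand_average_rms(evoked):
    """RMS across channels per subject, then averaged over subjects."""
    rms_per_subject = np.sqrt(np.mean(evoked ** 2, axis=1))  # (n_subjects, n_times)
    return rms_per_subject.mean(axis=0)                      # (n_times,)

def n100m_peak(rms, times, window=(0.08, 0.14)):
    """Peak amplitude and latency within a window around 100 ms post-onset."""
    mask = (times >= window[0]) & (times <= window[1])
    idx = np.argmax(rms[mask])
    return rms[mask][idx], times[mask][idx]                   # amplitude, latency (s)

# Example with simulated data (21 subjects, 74 channels over one hemisphere):
rng = np.random.default_rng(0)
times = np.linspace(-0.1, 0.4, 251)
evoked_left = rng.normal(size=(21, 74, times.size)) * 1e-14   # tesla-scale noise
grand_rms = grand_average_rms(evoked_left)
amp, lat = n100m_peak(grand_rms, times)
```

In practice the same computation would be run separately for each hemisphere and condition, which is how the two panels and the per-condition traces in Figure 2 would arise.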

