Multisensory and modality specific processing of visual speech in different regions of the premotor cortex.

Callan DE, Jones JA, Callan A - Front Psychol (2014)

Bottom Line: The left inferior parietal lobule and right cerebellum also showed these properties. The left ventral superior and dorsal premotor cortex (PMvs/PMd) did not show this multisensory enhancement effect, but there was greater activity for the visual only over audio-visual conditions in these areas. The results suggest that the inferior regions of the ventral premotor cortex are involved with integrating multisensory information, whereas more superior and dorsal regions of the PMC are involved with mapping unimodal (in this case visual) sensory features of the speech signal onto articulatory speech gestures.

View Article: PubMed Central - PubMed

Affiliation: Center for Information and Neural Networks, National Institute of Information and Communications Technology, Osaka University, Osaka, Japan; Multisensory Cognition and Computation Laboratory, Universal Communication Research Institute, National Institute of Information and Communications Technology, Kyoto, Japan.

ABSTRACT
Behavioral and neuroimaging studies have demonstrated that brain regions involved with speech production also support speech perception, especially under degraded conditions. The premotor cortex (PMC) has been shown to be active during both observation and execution of action ("Mirror System" properties), and may facilitate speech perception by mapping unimodal and multimodal sensory features onto articulatory speech gestures. For this functional magnetic resonance imaging (fMRI) study, participants identified vowels produced by a speaker in audio-visual (saw the speaker's articulating face and heard her voice), visual only (only saw the speaker's articulating face), and audio only (only heard the speaker's voice) conditions with varying audio signal-to-noise ratios in order to determine the regions of the PMC involved with multisensory and modality specific processing of visual speech gestures. The task was designed so that identification could be made with a high level of accuracy from visual only stimuli, to control for task difficulty and differences in intelligibility. The fMRI analysis for the visual only and audio-visual conditions showed overlapping activity in the inferior frontal gyrus and PMC. The left ventral inferior premotor cortex (PMvi) showed properties of multimodal (audio-visual) enhancement with a degraded auditory signal. The left inferior parietal lobule and right cerebellum also showed these properties. The left ventral superior and dorsal premotor cortex (PMvs/PMd) did not show this multisensory enhancement effect, but there was greater activity for the visual only over audio-visual conditions in these areas. The results suggest that the inferior regions of the ventral premotor cortex are involved with integrating multisensory information, whereas more superior and dorsal regions of the PMC are involved with mapping unimodal (in this case visual) sensory features of the speech signal onto articulatory speech gestures.
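The abstract refers to auditory stimuli presented at varying signal-to-noise ratios (the Mentions section below lists −6, −10, and −14 dB), but this excerpt does not describe how the degraded audio was constructed. The following is a minimal sketch of the standard way to mix a clean speech signal with noise at a target SNR; mix_at_snr is a hypothetical helper for illustration, not a function from the study's materials.

    import numpy as np

    def mix_at_snr(speech, noise, snr_db):
        # Scale `noise` so that 10*log10(P_speech / P_noise) equals snr_db,
        # then add it to `speech`. Both inputs are 1-D sample arrays of
        # equal length; power is estimated as the mean squared amplitude.
        p_speech = np.mean(speech ** 2)
        p_noise = np.mean(noise ** 2)
        scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10.0)))
        return speech + scale * noise

    # Example: hypothetical versions of the three degraded-audio conditions.
    rng = np.random.default_rng(0)
    speech = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)  # stand-in vowel
    noise = rng.standard_normal(16000)
    stimuli = {snr: mix_at_snr(speech, noise, snr) for snr in (-6, -10, -14)}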



Figure 2: Behavioral results showing the interaction of audio-visual enhancement at each of the signal-to-noise ratios (SNRs). The interaction of (AV6-A6)-(AV10-A10) was statistically significant [F(1, 15) = 12.6, p < 0.005]; however, the interaction of (AV10-A10)-(AV14-A14) was not significant [F(1, 15) = 3.9, p > 0.05].
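In the caption's condition labels, AVx and Ax denote the audio-visual and audio only conditions at −x dB SNR. The enhancement and interaction contrasts can be written out explicitly as below (the notation is ours, with acc denoting the proportion of vowels identified correctly):

    E(\mathrm{SNR}) = \mathrm{acc}_{AV}(\mathrm{SNR}) - \mathrm{acc}_{A}(\mathrm{SNR})

    \Delta_{6,10} = E(-6\,\mathrm{dB}) - E(-10\,\mathrm{dB}), \qquad
    \Delta_{10,14} = E(-10\,\mathrm{dB}) - E(-14\,\mathrm{dB})

Only \Delta_{6,10} reached significance: the size of the audio-visual enhancement differed between the −6 and −10 dB conditions but not between the −10 and −14 dB conditions.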

Mentions: A two-way analysis of variance (ANOVA) was conducted over the factors Modality (levels: audio-visual and audio only) and SNR (levels: −6, −10, and −14 dB). Bonferroni corrections for multiple comparisons were used to determine statistical significance at p < 0.05 for the planned ANOVA interaction and pairwise comparison analyses; in total there were seven planned analyses. The omnibus ANOVA indicated a significant interaction between Modality and SNR, F(2, 95) = 7.1, p < 0.05, and significant main effects of Modality (AV > A), F(1, 95) = 179.2, p < 0.05, and SNR, F(2, 95) = 15.49, p < 0.05. Planned pairwise comparisons (corrected for multiple comparisons) indicated statistically significant differences between the AV and A conditions (AV6-A6: T = 5.79, p < 0.05; AV10-A10: T = 14.13, p < 0.05; AV14-A14: T = 14.2, p < 0.05; AV > A: T = 18.5, p < 0.05; AV not significantly different from VO: T = 0.69; see Figures 1, 2). The planned interaction analyses are given above.
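As a concrete illustration of this analysis pipeline, the sketch below runs a two-way repeated-measures ANOVA (Modality x SNR) followed by Bonferroni-corrected planned pairwise comparisons on hypothetical accuracy data. The data, variable names, and the choice of statsmodels' AnovaRM are our assumptions, not the study's analysis code (the reported F(2, 95) degrees of freedom suggest the authors' model was parameterized differently).

    import numpy as np
    import pandas as pd
    from scipy import stats
    from statsmodels.stats.anova import AnovaRM

    rng = np.random.default_rng(0)
    subjects = np.arange(16)        # 16 participants, matching F(1, 15) above
    modalities = ["AV", "A"]
    snrs = [-6, -10, -14]

    # Hypothetical accuracy data in long format: one row per subject per cell.
    rows = [{"subject": s, "modality": m, "snr": snr,
             "accuracy": rng.uniform(0.4, 1.0)}
            for s in subjects for m in modalities for snr in snrs]
    df = pd.DataFrame(rows)

    # Omnibus two-way repeated-measures ANOVA: Modality x SNR.
    res = AnovaRM(df, depvar="accuracy", subject="subject",
                  within=["modality", "snr"]).fit()
    print(res.anova_table)

    # Planned pairwise AV vs. A comparisons at each SNR, Bonferroni-corrected
    # across the seven planned analyses (multiply each p by 7, cap at 1).
    n_planned = 7
    for snr in snrs:
        av = df.query("modality == 'AV' and snr == @snr").sort_values("subject")
        a = df.query("modality == 'A' and snr == @snr").sort_values("subject")
        t, p = stats.ttest_rel(av["accuracy"].values, a["accuracy"].values)
        print(f"SNR {snr} dB: T = {t:.2f}, "
              f"Bonferroni p = {min(p * n_planned, 1.0):.3f}")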

