Conversational Interaction in the Scanner: Mentalizing during Language Processing as Revealed by MEG.

Bögels S, Barr DJ, Garrod S, Kessler K - Cereb. Cortex (2014)

Bottom Line: Our analysis of the neural processing of test-phase utterances revealed recruitment of neural circuits associated with language (temporal cortex), episodic memory (e.g., medial temporal lobe), and mentalizing (temporo-parietal junction and ventromedial prefrontal cortex). The episodic memory and language circuits were recruited in anticipation of upcoming referring expressions, suggesting that context-sensitive predictions were spontaneously generated. In contrast, the mentalizing areas were recruited on-demand, as a means for detecting and resolving perceived pragmatic anomalies, with little evidence that they were activated to make partner-specific predictions about upcoming linguistic utterances.


Affiliations: Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK.




Figure 1 (BHU116F1). Panel A: Design and example displays and speech in the interactive (left) and test (right) phases. Stimuli were presented in color during the experiment. The speaker's view was implied by the speaker's behavior without being seen by the participant and is shown here for clarity. Physical stimuli in the test phase were identical across the 4 experimental conditions (same-/different-speaker precedent match/no precedent) over participants, but test trials were never repeated within a single participant. Panel B: Visualization of predictions from the anticipatory and "on-demand" views of mentalizing, indicating which areas are expected to be more active in the same-speaker precedent-mismatch condition than in the other conditions during different parts of the test phase.

Mentions: The absence of neuroimaging studies on conversational language processing reflects a number of technical and logistical challenges that have imposed a barrier to this kind of research. First, the signal-to-noise ratio required for neuroimaging data analysis typically necessitates more trials than in behavioral studies, as well as tight control over the stimuli and their presentation timing, in order to reduce additional sources of variability. This need for large numbers of highly controlled trials is at odds with naturalistic interaction with live conversational partners, where it is difficult to predict what speakers will say and when they will say it. Furthermore, identifying the brain networks involved in processing conversational speech requires a neuroimaging technique with adequate spatial and temporal resolution. We surmounted these obstacles by using MEG with a novel communication-game paradigm, "do-I-see-what-you-mean?" (see Fig. 1, Panel A), which enabled spontaneous, quasi-naturalistic conversation with trained confederates while still allowing full control over stimulus characteristics and timing by interleaving prerecorded speech with live speech. Critically, we implemented this interleaving in a way that led participants to believe that they were experiencing a live interaction consisting only of spontaneously produced speech by real participants.
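
To make the control logic of such a paradigm concrete, the following is a minimal, purely illustrative sketch (not taken from the paper; the item names, onset time, scheduling scheme, and function names are all assumptions) of how test-phase trials in the four conditions could be assigned so that physical stimuli are identical across conditions over participants, never repeat within a participant, and use prerecorded utterances with fixed onsets for timing control.

    # Illustrative sketch only: the 2 x 2 condition labels follow the figure
    # caption; all other details are hypothetical assumptions.
    import itertools
    import random

    SPEAKERS = ("same_speaker", "different_speaker")
    PRECEDENTS = ("precedent_match", "no_precedent")
    CONDITIONS = list(itertools.product(SPEAKERS, PRECEDENTS))  # the 4 cells


    def build_test_trials(items, participant_id, seed=0):
        """Rotate items through the 4 cells across participants (Latin-square
        style), so each item appears in every condition over participants but
        only once per participant; test-phase speech is prerecorded with a
        fixed onset."""
        rng = random.Random(seed + participant_id)
        trials = []
        for i, item in enumerate(items):
            speaker, precedent = CONDITIONS[(i + participant_id) % len(CONDITIONS)]
            trials.append({
                "item": item,
                "speaker": speaker,
                "precedent": precedent,
                "speech": "prerecorded",    # controlled, unlike the live interactive phase
                "utterance_onset_s": 1.5,   # hypothetical fixed onset for MEG epoching
            })
        rng.shuffle(trials)                 # randomize presentation order per participant
        return trials


    if __name__ == "__main__":
        items = ["shape_A", "shape_B", "shape_C", "shape_D"]
        for trial in build_test_trials(items, participant_id=2):
            print(trial)

Printed trials show each item assigned to a different cell for a given participant, while shifting participant_id rotates items through all four cells over participants, mirroring the counterbalancing described in the figure caption.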

