Storing upright turns: how visual and vestibular cues interact during the encoding and recalling process.

Vidal M, Bülthoff HH - Exp Brain Res (2009)

Bottom Line: First, we found that in none of the conditions did the reproduced motion dynamics follow that of the presentation phase (Gaussian angular velocity profiles). Third, when the intersensory gain was preserved, the bimodal reproduction was more precise (reduced variance) and lay between the two unimodal reproductions. Fourth, when the intersensory gain was modified, the bimodal reproduction resulted in a substantially larger change for the body than for the visual scene rotations, which indicates that vision prevails for this rotation displacement task when a matching problem is introduced.


Affiliation: Max Planck Institute for Biological Cybernetics, Tübingen, Germany. manuel.vidal@college-de-france.fr

ABSTRACT
Many previous studies have focused on how humans combine inputs provided by different modalities for the same physical property. However, it remains unclear how the different senses that provide information about our own movements combine to yield a motion percept. We designed an experiment to investigate how upright turns are stored, and particularly how vestibular and visual cues interact at the different stages of the memorization process (encoding/recalling). Subjects experienced passive yaw turns stimulated in the vestibular modality (whole-body rotations) and/or in the visual modality (limited-lifetime star-field rotations), with the visual scene turning 1.5 times faster when the two were combined (an unnoticed conflict). They were then asked to actively reproduce the rotation displacement in the opposite direction, with body cues only, visual cues only, or both cues with either the same or a different gain factor. First, we found that in none of the conditions did the reproduced motion dynamics follow those of the presentation phase (Gaussian angular velocity profiles). Second, the unimodal recall of turns was largely uninfluenced by the other sensory cue it could be combined with during encoding. Therefore, turns in each modality, visual and vestibular, are stored independently. Third, when the intersensory gain was preserved, the bimodal reproduction was more precise (reduced variance) and lay between the two unimodal reproductions. This suggests that when both visual and vestibular cues are available, they combine to improve the reproduction. Fourth, when the intersensory gain was modified, the bimodal reproduction resulted in a substantially larger change for the body than for the visual scene rotations, indicating that vision prevails in this rotation displacement task when a matching problem is introduced.
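The third finding (bimodal reproduction more precise than either unimodal one, and lying between them) is the signature of inverse-variance cue combination. The sketch below is a standard maximum-likelihood fusion model, not an implementation from the paper, and all numbers in it are illustrative rather than data from the study:

```python
# Inverse-variance (maximum-likelihood) fusion of a visual and a vestibular
# estimate of turn angle. The fused estimate lies between the two cues and
# its variance is smaller than either unimodal variance, consistent with the
# reported bimodal precision advantage. Illustrative values only.

def combine(est_v, var_v, est_b, var_b):
    """Fuse visual (est_v, var_v) and body (est_b, var_b) angle estimates."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_b)   # visual weight
    w_b = 1 - w_v                                  # vestibular weight
    est = w_v * est_v + w_b * est_b                # lies between the cues
    var = (var_v * var_b) / (var_v + var_b)        # < min(var_v, var_b)
    return est, var

# Hypothetical 90-deg turn: a more reliable visual cue dominates the fusion.
est, var = combine(est_v=90.0, var_v=25.0, est_b=100.0, var_b=100.0)
```

Under this model, a reliability difference between vision and body cues would also predict the visual dominance reported in the fourth finding.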




Fig5: The reproduced amplitudes for visual only reproduction (V to V and VB to V conditions) and body only reproduction (B to B and VB to B conditions), as a function of the presented turn angle. Dashed lines show the expected correct visual (light gray) and body (dark gray) reproductions. Error bars show the standard errors across subjects

Mentions: In order to see whether the two sensory cues interact at the encoding stage, we looked at the recall of turns in each sensory context (visual or body) and compared the performance for unimodal and bimodal turn presentations. Figure 5 shows the reproduced turn amplitude for visual reproduction and body reproduction as a function of the presented turn angle. For visual reproduction of turns, there was no significant main effect of the encoding sensory context (V to V vs. VB to V, F(1,11) = 0.79; p = 0.39) nor any interaction with the turn angle (F(2,22) = 0.25; p = 0.78). Similarly, for bodily reproduction of turns, there was no significant main effect of the encoding sensory context (B to B vs. VB to B, F(1,11) = 0.67; p = 0.43) nor any interaction with the turn angle (F(2,22) = 1.41; p = 0.26). The reproduction dynamics did not differ according to the presentation sensory context: there was no significant difference between bimodal and unimodal encoding for either maximum angular velocities or motion durations (see Fig. 4).
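The main-effect tests above compare two within-subject conditions across the 12 subjects, so each F(1,11) reduces to the square of a paired t statistic. The sketch below illustrates that computation with invented numbers; it is not the study's data or analysis code:

```python
# Within-subject comparison of the kind behind "no main effect of encoding
# context": for a two-level repeated-measures factor, F(1, n-1) equals the
# squared paired t statistic. Subject values below are hypothetical.
from math import sqrt
from statistics import mean, stdev

def paired_f(cond_a, cond_b):
    """Return (F, df1, df2) for a two-condition repeated-measures design."""
    diffs = [a - b for a, b in zip(cond_a, cond_b)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / sqrt(n))   # paired t statistic
    return t * t, 1, n - 1                       # F = t^2, df = (1, n-1)

# Hypothetical reproduced angles (deg) for 12 subjects, V-to-V vs. VB-to-V
v_to_v  = [88, 95, 102, 90, 110, 85, 97, 93, 105, 99, 91, 100]
vb_to_v = [90, 93, 104, 88, 108, 87, 95, 96, 103, 101, 90, 98]
F, df1, df2 = paired_f(v_to_v, vb_to_v)
```

A small F with df = (1, 11), as reported, means the per-subject differences between unimodal and bimodal encoding are negligible relative to their spread across subjects.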

