Benefits of stimulus congruency for multisensory facilitation of visual learning.

Kim RS, Seitz AR, Shams L - PLoS ONE (2008)

Bottom Line: However, multisensory interactions are ubiquitous in perception, even at early processing stages, and thus can potentially play a role in learning. Here, we examine the effect of auditory-visual congruency on visual learning. Subjects were trained over five days on a visual motion coherence detection task with either congruent audiovisual or incongruent audiovisual stimuli.

View Article: PubMed Central - PubMed

Affiliation: Department of Psychology, University of California Los Angeles, Los Angeles, California, USA.

ABSTRACT

Background: Studies of perceptual learning have largely focused on unisensory stimuli. However, multisensory interactions are ubiquitous in perception, even at early processing stages, and thus can potentially play a role in learning. Here, we examine the effect of auditory-visual congruency on visual learning.

Methodology/principal findings: Subjects were trained over five days on a visual motion coherence detection task with either congruent audiovisual or incongruent audiovisual stimuli. Comparing performance on visual-only trials, we find that training with congruent audiovisual stimuli produces significantly better learning than training with incongruent audiovisual stimuli or with only visual stimuli.

Conclusions/significance: This advantage from stimulus congruency during training suggests that the benefits of multisensory training may result from audiovisual interactions at a perceptual rather than cognitive level.


pone-0001532-g002: Data from each training session for the congruent audiovisual group (green), unisensory visual group (red), and incongruent audiovisual group (blue). The ordinate is proportion correct averaged across three signal levels; the abscissa is training session number. Solid lines show performance on visual-only trials over the first third of each session; dashed lines show performance on audiovisual trials over the first third of each session. Error bars reflect within-group standard error [19].
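As a rough illustration of the figure's layout only (not the authors' plotting code), the matplotlib sketch below draws proportion correct against training session for the three groups, with solid lines for visual-only trials and dashed lines for audiovisual trials. Every numeric value in it is a made-up placeholder.

```python
# Sketch of the Figure 2 layout; all values are synthetic placeholders,
# not data from the study.
import numpy as np
import matplotlib.pyplot as plt

sessions = np.arange(1, 6)                       # five training sessions
colors = {"congruent AV": "green",
          "unisensory visual": "red",
          "incongruent AV": "blue"}

# Hypothetical mean proportion correct on visual-only trials (solid lines)
visual_only = {"congruent AV":      np.linspace(0.55, 0.85, 5),
               "unisensory visual": np.linspace(0.55, 0.66, 5),
               "incongruent AV":    np.linspace(0.55, 0.63, 5)}
# Hypothetical means on audiovisual trials (dashed lines; AV-trained groups only)
audiovisual = {"congruent AV":   np.linspace(0.60, 0.88, 5),
               "incongruent AV": np.linspace(0.54, 0.60, 5)}
sem = 0.03                                       # placeholder within-group SE

fig, ax = plt.subplots()
for group, color in colors.items():
    ax.errorbar(sessions, visual_only[group], yerr=sem, color=color,
                linestyle="-", marker="o", label=f"{group}: visual-only trials")
    if group in audiovisual:
        ax.errorbar(sessions, audiovisual[group], yerr=sem, color=color,
                    linestyle="--", marker="s", label=f"{group}: audiovisual trials")

ax.set_xlabel("Training session")
ax.set_ylabel("Proportion correct (averaged over signal levels)")
ax.set_xticks(sessions)
ax.legend(fontsize="small")
plt.show()
```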

Mentions: In Figure 2, we show performance for the congruent audiovisual trained group (green), the incongruent audiovisual trained group (blue), and the unisensory trained group (red), for visual-only trials (solid lines) and audiovisual trials (dashed lines) across the five days of training. While there is a tendency for improvement in each group, improvement is clearly greatest for the congruent group. The change in performance across the five days was highly significant for the congruent group (F(4,24) = 14.158, p<.0001, one-way repeated-measures ANOVA), whereas the change was only moderately significant for the unisensory group (F(4,24) = 2.938, p = .04) and marginally significant for the incongruent group (F(4,24) = 2.937, p = .053). Furthermore, a three-way ANOVA (Training Day×Training Condition×Stimulus Level) shows a significant effect of training day (F(1,18) = 62.761, p<.01) and stimulus level (F(1,18) = 77.506, p<.01), and an interaction between training day and training condition between the first and last day of training (F(2,18) = 3.702, p<.05). However, we found no interaction between training day and stimulus level (F(2,26) = 1.144, p = .3299); therefore, we collapse data across stimulus levels.
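For readers who want to run the same kind of test on their own data, here is a minimal sketch of a one-way repeated-measures ANOVA of proportion correct across training days, using statsmodels. It is not the authors' analysis code: the subject count is only inferred from the reported degrees of freedom (F(4,24) implies 7 subjects per group), and the data frame is filled with randomly generated placeholder values.

```python
# Minimal sketch of a one-way repeated-measures ANOVA across training days.
# Not the authors' code; all data below are synthetic placeholders.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
n_subjects, n_days = 7, 5        # F(4,24) in the text implies 7 subjects per group

subjects = np.repeat(np.arange(1, n_subjects + 1), n_days)   # 1,1,1,1,1, 2,2,...
days = np.tile(np.arange(1, n_days + 1), n_subjects)         # 1..5 for each subject

# Synthetic proportion-correct values (already averaged over signal levels,
# as in the text), improving slightly over days plus noise
prop_correct = 0.55 + 0.05 * (days - 1) + rng.normal(0, 0.03, size=days.size)

df = pd.DataFrame({"subject": subjects,
                   "day": days,
                   "prop_correct": prop_correct})

# Within-subject effect of training day on proportion correct
res = AnovaRM(df, depvar="prop_correct", subject="subject", within=["day"]).fit()
print(res.anova_table)           # F value, num/den df, and p value for the day factor
```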

