Multisensory training can promote or impede visual perceptual learning of speech stimuli: visual-tactile vs. visual-auditory training.

Eberhardt SP, Auer ET, Bernstein LE - Front Hum Neurosci (2014)

Bottom Line: Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee's primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee's lower-rank perceptual pathway will promote learning by a higher-rank pathway.


Affiliation: Communication Neuroscience Laboratory, Department of Speech and Hearing Sciences, George Washington University, Washington, DC, USA.

ABSTRACT
In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee's primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee's lower rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT).
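
The "vocoded acoustic speech" of Experiment 2 refers to speech whose spectral fine structure has been degraded while coarser amplitude-envelope information is retained. The sketch below shows one common way to generate such stimuli, a noise-excited channel vocoder; the band count and filter settings are illustrative assumptions, not the parameters used in this study.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def vocode(speech, fs, band_edges_hz=(100, 400, 1000, 2400, 6000)):
    """Noise-excited channel vocoder: keep each band's slow amplitude
    envelope, discard its spectral fine structure. Band edges here are
    illustrative assumptions only."""
    rng = np.random.default_rng(0)
    out = np.zeros(len(speech))
    for lo, hi in zip(band_edges_hz[:-1], band_edges_hz[1:]):
        sos = butter(4, (lo, hi), btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, speech)                # analysis band
        envelope = np.abs(hilbert(band))           # amplitude envelope
        noise = rng.standard_normal(len(speech))   # noise carrier
        out += envelope * sosfilt(sos, noise)      # envelope-modulated band noise
    return out / (np.max(np.abs(out)) + 1e-12)     # normalize to avoid clipping
```

The more bands used, the more intelligible the result; reducing the band count yields the kind of degraded but envelope-faithful acoustic signal the abstract describes.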



Figure 4: Pre- and post-training consonant identification scores for each training condition and for controls, and for each position (initial, medial, final) in the CVCVC stimuli. The figure shows results averaged across all participants within each group. (A) Mean proportion phoneme equivalence classes (PECs) correct. (B) Mean proportion consonants correct. Note: the scales differ in (A) and (B), reflecting the more liberal scoring in (A). Error bars are standard error of the mean.
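
The two panels reflect two scoring rules: strict proportion-consonants-correct, and the more liberal PEC scoring, in which a response counts as correct if it falls in the same visually confusable phoneme class as the target. The sketch below illustrates the two rules; the class partition shown is a hypothetical example, not the partition used in the study.

```python
# Hypothetical phoneme equivalence classes (visually confusable groups).
PECS = [{"p", "b", "m"}, {"f", "v"}, {"t", "d", "s", "z"}, {"k", "g"}]

def pec_of(phoneme):
    """Return the equivalence class containing a phoneme."""
    for pec in PECS:
        if phoneme in pec:
            return frozenset(pec)
    return frozenset({phoneme})  # singleton class if unlisted

def score(targets, responses):
    """Strict proportion correct vs. liberal PEC-based proportion correct."""
    strict = sum(t == r for t, r in zip(targets, responses)) / len(targets)
    liberal = sum(pec_of(t) == pec_of(r)
                  for t, r in zip(targets, responses)) / len(targets)
    return strict, liberal  # liberal >= strict by construction

# e.g., score(["p", "f", "t"], ["b", "f", "d"]) -> (0.333..., 1.0)
```

This is why the figure's two panels need different scales: PEC scoring can only match or exceed strict scoring.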

Mentions: Table 2 and Figure 4 give the pre- and post-training consonant identification mean scores for each consonant position, in terms of both proportion consonants correct and proportion phoneme equivalence classes correct. The VO pre- and post-training consonant identification scores were submitted to a within-subjects analysis with factors position (initial, medial, or final in the CVCVC stimuli) and test time (pre-, post-training). Lipreading screening scores were used as a covariate, because they correlated with the consonant identification scores. The covariate was reliable, F(1,18) = 11.529, p = 0.003, ηp² = 0.390. Position was a reliable factor, F(2,17) = 39.832, p < 0.001, ηp² = 0.824, but so was its interaction with test time and the covariate, F(2,17) = 5.152, p = 0.018, ηp² = 0.377. In simple comparisons, the interaction was isolated to the difference across time for the medial vs. final consonant positions, F(1,18) = 9.676, p = 0.006, ηp² = 0.350 (see Table 2 for all consonant identification mean scores in each experiment, time period, and scoring approach).
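
The reported effect sizes (partial eta squared) can be recovered directly from each F statistic and its degrees of freedom via ηp² = F·df1 / (F·df1 + df2). The sketch below reproduces the four values in the paragraph above.

```python
def partial_eta_sq(F, df1, df2):
    """Partial eta squared from an F statistic and its degrees of freedom."""
    return F * df1 / (F * df1 + df2)

print(round(partial_eta_sq(11.529, 1, 18), 3))  # covariate: 0.390
print(round(partial_eta_sq(39.832, 2, 17), 3))  # position: 0.824
print(round(partial_eta_sq(5.152, 2, 17), 3))   # interaction: 0.377
print(round(partial_eta_sq(9.676, 1, 18), 3))   # simple comparison: 0.350
```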

