A neural network model can explain ventriloquism aftereffect and its generalization across sound frequencies.

Magosso E, Cona F, Ursino M - Biomed Res Int (2013)

Bottom Line: Exposure to synchronous but spatially disparate auditory and visual stimuli produces a perceptual shift of sound location towards the visual stimulus (ventriloquism effect). The model provides a coherent theoretical framework to explain the apparently contradictory results found in the literature. Model mechanisms and hypotheses are discussed in relation to neurophysiological and psychophysical data.

View Article: PubMed Central - PubMed

Affiliation: Department of Electrical, Electronic, and Information Engineering "Guglielmo Marconi", University of Bologna, Via Venezia 52, 47521 Cesena, Italy.

ABSTRACT
Exposure to synchronous but spatially disparate auditory and visual stimuli produces a perceptual shift of sound location towards the visual stimulus (ventriloquism effect). After adaptation to a ventriloquism situation, an enduring sound shift is observed in the absence of the visual stimulus (ventriloquism aftereffect). Experimental studies report opposing results as to aftereffect generalization across sound frequencies, ranging from the aftereffect being confined to the frequency used during adaptation to the aftereffect generalizing across some octaves. Here, we present an extension of a model of visual-auditory interaction we previously developed. The new model is able to simulate the ventriloquism effect and, via Hebbian learning rules, the ventriloquism aftereffect, and can be used to investigate aftereffect generalization across frequencies. The model includes auditory neurons coding for both the spatial and spectral features of the auditory stimuli and mimicking properties of biological auditory neurons. The model suggests that different extents of aftereffect generalization across frequencies can be obtained by changing the intensity of the auditory stimulus, which induces different amounts of activation in the auditory layer. The model provides a coherent theoretical framework to explain the apparently contradictory results found in the literature. Model mechanisms and hypotheses are discussed in relation to neurophysiological and psychophysical data.
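To illustrate the kind of learning rule the abstract refers to, the sketch below shows a generic Hebbian weight update with soft bounding. This is not the authors' published rule or parameter values; the weight matrix, learning rate, and saturation factor are illustrative assumptions of how cross-modal (e.g., visual-to-auditory) synapses could be strengthened by correlated activity.

```python
import numpy as np

def hebbian_update(w, pre, post, gamma=0.01, w_max=1.0):
    """One Hebbian step with a soft upper bound on the weights.

    w     : (n_post, n_pre) weight matrix (hypothetical cross-modal synapses)
    pre   : (n_pre,) presynaptic activations
    post  : (n_post,) postsynaptic activations
    gamma : learning rate (illustrative value)
    w_max : saturation level; the (w_max - w) factor keeps weights bounded
    """
    # Weights grow in proportion to correlated pre/post activity and
    # saturate as they approach w_max (a common stabilisation, assumed here).
    dw = gamma * np.outer(post, pre) * (w_max - w)
    return w + dw
```

In a ventriloquism-adaptation scenario, repeatedly co-activating a visual unit and a spatially offset auditory unit would strengthen the synapse between them, so the auditory layer's response remains shifted even after the visual stimulus is removed.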


Network response to unimodal stimuli. Left panels ((a) and (c)) show the response of the visual and auditory layers to a visual stimulus at 100° with E0v = 15. Right panels ((b) and (d)) show the response of the visual and auditory layers to an auditory stimulus at 80° and 1.1 kHz with E0a = 20. The two insets in (d) display the response profiles along the azimuth at frequency 1.1 kHz (bottom inset) and along the frequency at azimuth 80° (right inset).

Mentions: These aspects are summarized in Figures 2(a) and 2(c). They show the network response, in steady-state conditions (i.e., after the transient has died out), to a unimodal visual stimulus applied at position 100° with intensity E0v = 15. The stimulus was maintained throughout the entire simulation. Activation of the visual neurons assumes high values (close to the saturation level) only near the position of the stimulus and declines sharply to zero as one moves away from it, thus signaling a well-localizable visual stimulus. No activation is produced in the nonstimulated auditory layer.
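The steady-state behaviour described above can be sketched as a one-dimensional layer of spatially tuned neurons driven by a Gaussian external input and passed through a sigmoidal static characteristic. The stimulus width, sigmoid threshold, and slope below are assumed illustrative values, not the parameters of the published model.

```python
import numpy as np

# Illustrative sketch: a 1-D layer of visual neurons whose preferred
# positions span 0-180 deg of azimuth, driven by a Gaussian external
# input centred at 100 deg with strength E0v = 15 (as in Figure 2).
azimuth = np.arange(0.0, 181.0, 1.0)      # preferred positions (deg)
E0v = 15.0                                 # stimulus intensity (from the text)
sigma = 10.0                               # stimulus spatial width (assumed)
ext_input = E0v * np.exp(-(azimuth - 100.0) ** 2 / (2.0 * sigma ** 2))

def sigmoid(u, theta=7.0, slope=1.0):
    """Static sigmoidal activation; threshold and slope are illustrative."""
    return 1.0 / (1.0 + np.exp(-slope * (u - theta)))

activity = sigmoid(ext_input)              # steady-state firing (0..1)
peak_pos = azimuth[np.argmax(activity)]    # localised near the stimulus
```

Neurons tuned near 100° receive suprathreshold input and approach saturation (activity near 1), while neurons far from the stimulus stay close to zero, reproducing the sharply localized activation profile the paragraph describes.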

