A neural network model can explain ventriloquism aftereffect and its generalization across sound frequencies.

Magosso E, Cona F, Ursino M - Biomed Res Int (2013)

Bottom Line: Exposure to synchronous but spatially disparate auditory and visual stimuli produces a perceptual shift of sound location towards the visual stimulus (ventriloquism effect). The model provides a coherent theoretical framework to explain the apparently contradictory results found in the literature. Model mechanisms and hypotheses are discussed in relation to neurophysiological and psychophysical data.


Affiliation: Department of Electrical, Electronic, and Information Engineering "Guglielmo Marconi", University of Bologna, Via Venezia 52, 47521 Cesena, Italy.

ABSTRACT
Exposure to synchronous but spatially disparate auditory and visual stimuli produces a perceptual shift of sound location towards the visual stimulus (ventriloquism effect). After adaptation to a ventriloquism situation, an enduring sound shift is observed in the absence of the visual stimulus (ventriloquism aftereffect). Experimental studies report opposing results as to aftereffect generalization across sound frequencies, varying from the aftereffect being confined to the frequency used during adaptation to the aftereffect generalizing across some octaves. Here, we present an extension of a model of visual-auditory interaction we previously developed. The new model is able to simulate the ventriloquism effect and, via Hebbian learning rules, the ventriloquism aftereffect, and can be used to investigate aftereffect generalization across frequencies. The model includes auditory neurons coding both for the spatial and spectral features of the auditory stimuli and mimicking properties of biological auditory neurons. The model suggests that different extents of aftereffect generalization across frequencies can be obtained by changing the intensity of the auditory stimulus, which induces different amounts of activation in the auditory layer. The model provides a coherent theoretical framework to explain the apparently contradictory results found in the literature. Model mechanisms and hypotheses are discussed in relation to neurophysiological and psychophysical data.
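The abstract attributes the aftereffect to Hebbian learning rules, by which cross-modal synapses strengthen where pre- and postsynaptic activity co-occur. A minimal sketch of such a rule, not the paper's actual equations, might look as follows; the learning rate, soft upper bound, and toy activity vectors are all illustrative assumptions:

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.01, w_max=1.0):
    """One step of a generic Hebbian rule with a soft upper bound:
    each synapse grows in proportion to correlated pre/post activity,
    and growth slows as the weight approaches w_max."""
    dw = lr * np.outer(post, pre) * (w_max - w)  # saturating update
    return w + dw

# toy example: 3 postsynaptic neurons driven by 4 presynaptic neurons
w = np.zeros((3, 4))                    # initial synaptic weights
pre = np.array([0.0, 1.0, 0.5, 0.0])    # presynaptic activity pattern
post = np.array([1.0, 0.2, 0.0])        # postsynaptic activity pattern
w = hebbian_update(w, pre, post)
```

Only synapses whose pre- and postsynaptic neurons are both active change, which is how repeated exposure to a spatially shifted audiovisual pair could gradually bias the auditory spatial map.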


fig3: Sensitivity analysis on the tuning functions of auditory neurons. (a) shows the azimuthal tuning function of a generic auditory neuron for different intensities of the auditory stimulus. (b) shows the frequency tuning function of a generic auditory neuron for different intensities of the auditory stimulus (along the y axis); the map is normalized with respect to the peak activation.

Mentions: Figure 3(a) displays the results for the azimuthal tuning function. As the stimulus intensity increased, the shape of the function was preserved while its peak clearly grew. The width of the function, evaluated as the half-maximum width (i.e., the azimuthal range over which the response stays at or above one-half of the maximum response), increased slightly, expanding by ≈8° when stimulus intensity was raised from 10 to 25. These model results are in agreement with data found in some neurophysiological works: an increase in the width and in the peak of the azimuthal response with increasing sound intensity was observed in certain populations of auditory neurons both in the primary auditory cortex (see e.g., Figures 1, 4, and 5 in [20]) and in the caudomedial field (see e.g., Figures 4 and 12 in [34]).
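The half-maximum width measure described above can be illustrated with a toy tuning curve. The sketch below assumes a Gaussian spatial profile passed through a saturating sigmoid; the tuning width, sigmoid threshold, and intensity values are hypothetical and only show qualitatively how the half-maximum width grows with stimulus intensity, not the paper's ≈8° figure:

```python
import numpy as np

def half_max_width(theta, r):
    """Half-maximum width: the azimuthal span over which the
    response stays at or above half of its peak value."""
    above = theta[r >= r.max() / 2.0]
    return above.max() - above.min()

def sigmoid(x, threshold=3.0):
    """Static saturating nonlinearity (threshold is illustrative)."""
    return 1.0 / (1.0 + np.exp(-(x - threshold)))

theta = np.linspace(-90.0, 90.0, 3601)        # azimuth grid (deg)
gauss = np.exp(-theta**2 / (2 * 20.0**2))     # Gaussian spatial tuning

# response at two illustrative stimulus intensities
widths = {I: half_max_width(theta, sigmoid(I * gauss))
          for I in (10.0, 25.0)}
```

Because the sigmoid saturates near the peak, raising the input gain widens the region that stays above half maximum, matching the qualitative trend reported in the text.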

