The opponent channel population code of sound location is an efficient representation of natural binaural sounds.

Młynarski W - PLoS Comput. Biol. (2015)

Bottom Line: The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, the results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding.


Affiliation: Max-Planck Institute for Mathematics in the Sciences, Leipzig, Germany. wiktor.mlynarski@gmail.com

ABSTRACT
In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. The peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for maximum localization accuracy in that area. These experimental observations contradict initial assumptions that auditory space is represented as a topographic cortical map. It has been suggested that a "panoramic" code has evolved to match the specific demands of the sound localization task. This work provides evidence that the properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left- and right-ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. The spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, the results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding.




pcbi.1004294.g001: The graphical model representing variable dependencies. The lowest layer represents sound epochs perceived by the left and the right ear, xL and xR. A sparse-coding algorithm decomposes them into phase vectors ϕL, ϕR and amplitude vectors aL, aR. The phases are then subtracted from each other to obtain an IPD vector Δϕ. The second layer jointly encodes the monaural amplitudes and the IPDs. Auxiliary variables (the phase offset and the scaling factor w) are depicted in gray.

Mentions: The present study proposes a hierarchical statistical model of binaural sounds, which captures the binaural and spectrotemporal structure present in natural stimuli. The architecture of the model is shown in Fig 1. It consists of an input layer and two hidden layers. The input to the model was N-sample-long epochs of binaural sound: xL from the left ear and xR from the right ear. The role of the first layer was to extract and separate phase and amplitude information from each ear by encoding it in an efficient manner. Monaural sounds were transformed into phase (ϕL, ϕR) and amplitude (aL, aR) vectors. This layer can be thought of as a statistical analog of cochlear filtering. The phase vectors were further combined by computing interaural phase differences (IPDs), a major sound localization cue [32]. This transformation may be considered an attempt to mimic the functioning of the medial superior olive (MSO), the brainstem nucleus where phase differences are extracted [32].
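The first-layer computation described above, separating each ear's signal into phase and amplitude and then forming an IPD, can be sketched in a minimal way. Note that this is only an illustration: the paper learns complex-valued basis functions by sparse coding, whereas the sketch below uses an FFT-based analytic signal as a stand-in for the phase/amplitude decomposition, and the signal parameters (sampling rate, tone frequency, interaural delay) are invented for the example.

```python
import numpy as np

def analytic_signal(x):
    # FFT-based analytic signal: zero out negative frequencies,
    # double positive ones (equivalent to a Hilbert-transform pair).
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1
    if N % 2 == 0:
        h[N // 2] = 1
        h[1:N // 2] = 2
    else:
        h[1:(N + 1) // 2] = 2
    return np.fft.ifft(X * h)

def phase_amplitude(x):
    # Separate a monaural epoch into instantaneous phase and amplitude,
    # analogous to the model's first-layer decomposition.
    z = analytic_signal(x)
    return np.angle(z), np.abs(z)

# Hypothetical binaural epoch: the right ear receives the tone with a
# small delay, simulating a source displaced from the midline.
fs, f0 = 16000, 500            # sampling rate (Hz), tone frequency (Hz)
delay = 2e-4                   # 0.2 ms interaural time difference
t = np.arange(256) / fs        # 256-sample epoch (8 full periods of f0)
xL = np.sin(2 * np.pi * f0 * t)
xR = np.sin(2 * np.pi * f0 * (t - delay))

phiL, aL = phase_amplitude(xL)
phiR, aR = phase_amplitude(xR)

# IPD, wrapped to (-pi, pi]; for a pure tone it equals 2*pi*f0*delay.
ipd = np.angle(np.exp(1j * (phiL - phiR)))
```

For this pure tone the recovered IPD is constant across the epoch and equals 2π·f0·delay ≈ 0.63 rad; in the model, such IPD vectors (together with the monaural amplitudes) form the input to the second layer.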

