The role of inhibition in a computational model of an auditory cortical neuron during the encoding of temporal information.

Bendor D - PLoS Comput. Biol. (2015)

Bottom Line: Using a computational neuronal model, we find that stimulus-locked responses are generated when sound-evoked excitation is combined with strong, delayed inhibition. In contrast to this, a non-synchronized rate representation is generated when the net excitation evoked by the sound is weak, which occurs when excitation is coincident and balanced with inhibition. Together these data suggest that feedforward inhibition provides a parsimonious explanation of the neural coding dichotomy observed in auditory cortex.


Affiliation: Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, London, United Kingdom.

ABSTRACT
In auditory cortex, temporal information within a sound is represented by two complementary neural codes: a temporal representation based on stimulus-locked firing and a rate representation, where discharge rate co-varies with the timing between acoustic events but lacks a stimulus-synchronized response. Using a computational neuronal model, we find that stimulus-locked responses are generated when sound-evoked excitation is combined with strong, delayed inhibition. In contrast to this, a non-synchronized rate representation is generated when the net excitation evoked by the sound is weak, which occurs when excitation is coincident and balanced with inhibition. Using single-unit recordings from awake marmosets (Callithrix jacchus), we validate several model predictions, including differences in the temporal fidelity, discharge rates and temporal dynamics of stimulus-evoked responses between neurons with rate and temporal representations. Together these data suggest that feedforward inhibition provides a parsimonious explanation of the neural coding dichotomy observed in auditory cortex.
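The two regimes described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual model; the alpha-function kernels, time constants, and gain values below are illustrative assumptions chosen only to show how delayed versus coincident inhibition shapes the net synaptic drive:

```python
import numpy as np

def alpha_kernel(t, tau):
    """Peak-normalized alpha-function synaptic kernel; zero for t < 0."""
    return np.where(t >= 0, (t / tau) * np.exp(1 - t / tau), 0.0)

def net_drive(ipi_ms, inh_delay_ms, inh_gain, dur_ms=500, dt=0.1):
    """Net synaptic drive (excitation minus inhibition) to a model neuron
    driven by an acoustic pulse train with the given interpulse interval."""
    t = np.arange(0, dur_ms, dt)
    pulse_onsets = np.arange(0, dur_ms, ipi_ms)
    exc = np.zeros_like(t)
    inh = np.zeros_like(t)
    for p in pulse_onsets:
        exc += alpha_kernel(t - p, tau=5.0)
        inh += inh_gain * alpha_kernel(t - p - inh_delay_ms, tau=5.0)
    return t, exc - inh

# Strong, delayed inhibition: each pulse opens a brief excitatory window
# before inhibition arrives, so the net drive stays locked to the stimulus
# (the synchronized regime).
t, d_sync = net_drive(ipi_ms=50, inh_delay_ms=10, inh_gain=1.5)

# Coincident, balanced inhibition: excitation is cancelled pulse by pulse,
# leaving only weak net excitation (the non-synchronized regime).
t, d_rate = net_drive(ipi_ms=50, inh_delay_ms=0, inh_gain=1.0)
```

With delayed inhibition the drive retains a large pulse-locked transient; with coincident, balanced inhibition the net drive collapses toward zero, consistent with the weak net excitation the abstract attributes to rate-coding neurons.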




pcbi.1004197.g001: Schematic of synchronized and non-synchronized responses from auditory cortical neurons in response to acoustic pulse trains generating flutter and fusion percepts. Each plot is subdivided (from top to bottom) into an illustration of the acoustic pulse train (gray) and the evoked neural response from synchronized neurons (red) and non-synchronized neurons (blue). The inset plot in (a) shows a single acoustic pulse (5 kHz carrier frequency). a. An acoustic pulse train generating a flutter percept (interpulse interval = 50 ms). b. An acoustic pulse train generating a fusion percept (interpulse interval = 10 ms).
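The stimulus in the figure can be sketched in a few lines. Only the 5 kHz carrier and the interpulse intervals are given in the caption; the pulse width, train duration, and sampling rate below are assumptions made for illustration:

```python
import numpy as np

def pulse_train(ipi_ms, dur_ms=500, pulse_ms=1.0, carrier_hz=5000, fs=44100):
    """Acoustic pulse train: brief tone pips at carrier_hz, one every ipi_ms.
    pulse_ms, dur_ms, and fs are assumed values, not taken from the paper."""
    t = np.arange(int(dur_ms / 1000 * fs)) / fs
    envelope = ((t * 1000) % ipi_ms) < pulse_ms   # on during each brief pulse
    return envelope * np.sin(2 * np.pi * carrier_hz * t)

flutter = pulse_train(ipi_ms=50)  # long IPI: heard as discrete events (flutter)
fusion = pulse_train(ipi_ms=10)   # short IPI: heard as one continuous sound (fusion)
```

Shortening the IPI packs more pulses into the same window, which is why the fusion train carries proportionally more sound energy than the flutter train.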

Mentions: The dichotomous categorization of a sequence of brief sounds into the percepts of flutter and fusion is reflected in the corresponding neural representations of these sounds. Within auditory cortex, a sequence of brief sounds, hereinafter referred to as an acoustic pulse train, is encoded with either a temporal or a rate representation, for longer and shorter interpulse intervals (IPIs), respectively [14–17]. A temporal representation is provided by neurons with envelope-locked responses, referred to as “synchronized neurons”, reflecting their ability to synchronize their spikes to each acoustic pulse (Fig. 1a). However, the temporal fidelity of this synchronization degrades at shorter IPIs, with an encoding boundary near the flutter/fusion perceptual boundary. In the perceptual range of fusion, synchronized neurons generally produce only an onset response, and thus cannot be used to discriminate between these shorter IPIs (Fig. 1b). In addition to synchronized responses, neurons can also produce “non-synchronized” responses to acoustic pulse trains [14,17–19]. Non-synchronized neurons increase their firing rate monotonically with decreasing IPIs over the perceptual range of fusion without exhibiting envelope-locked responses (Fig. 1b). While non-synchronized neurons are generally unresponsive at IPIs in the range of flutter (Fig. 1a), the combined neural representations from synchronized and non-synchronized neurons are sufficient to encode temporal information across a wide range of IPIs, spanning the percepts of both flutter and fusion.
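Whether a response counts as envelope-locked is conventionally quantified with the Goldberg-Brown vector strength, which treats each spike time as a phase within the pulse period. The paper is not quoted here on its exact classification criterion, so the metric and the example spike trains below are a standard illustration rather than the study's analysis:

```python
import numpy as np

def vector_strength(spike_times_ms, ipi_ms):
    """Goldberg-Brown vector strength: 1 = perfect phase locking to the
    pulse period; values near 0 = spikes spread uniformly over the cycle."""
    phases = 2 * np.pi * (np.asarray(spike_times_ms) % ipi_ms) / ipi_ms
    return np.hypot(np.cos(phases).sum(), np.sin(phases).sum()) / len(phases)

# A synchronized response: one spike at a fixed 8 ms latency after each
# pulse of a 50 ms IPI train.
sync_spikes = np.arange(0, 500, 50) + 8.0

# A non-synchronized response: spikes at random times over the same window
# (elevated rate, but no relationship to the pulse phase).
rng = np.random.default_rng(0)
nonsync_spikes = rng.uniform(0, 500, size=40)

vs_sync = vector_strength(sync_spikes, 50)        # exactly 1.0 here
vs_nonsync = vector_strength(nonsync_spikes, 50)  # low: no phase locking
```

This captures the dichotomy in the paragraph above: synchronized neurons yield high vector strength at flutter-range IPIs, while non-synchronized neurons convey IPI only through their overall rate.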

