Neural representation of spectral and temporal features of song in the auditory forebrain of zebra finches as revealed by functional MRI.

Boumans T, Theunissen FE, Poirier C, Van Der Linden A - Eur. J. Neurosci. (2007)

Bottom Line: Song perception in songbirds, just as music and speech perception in humans, requires processing the spectral and temporal structure found in the succession of song-syllables. We did not find any differences in responses to playback of the bird's own song vs other familiar conspecific songs. We discuss these results in the context of what is known about the locus of action of the anaesthetics, and reports of neural activity measured in electrophysiological experiments.


Affiliation: Bio-Imaging Laboratory, University of Antwerp, Belgium.

ABSTRACT
Song perception in songbirds, just as music and speech perception in humans, requires processing the spectral and temporal structure found in the succession of song-syllables. Using functional magnetic resonance imaging and synthetic songs that preserved exclusively either the temporal or the spectral structure of natural song, we investigated how vocalizations are processed in the avian forebrain. We found bilateral and equal activation of the primary auditory region, field L. The more ventral regions of field L showed depressed responses to the synthetic songs that lacked spectral structure. These ventral regions included subarea L3, medial-ventral subarea L and potentially the secondary auditory region caudal medial nidopallium. In addition, field L as a whole showed unexpected increased responses to the temporally filtered songs and this increase was the largest in the dorsal regions. These dorsal regions included L1 and the dorsal subareas L and L2b. Therefore, the ventral region of field L appears to be more sensitive to the preservation of both spectral and temporal information in the context of song processing. We did not find any differences in responses to playback of the bird's own song vs other familiar conspecific songs. We also investigated the effect of three commonly used anaesthetics on the blood oxygen level-dependent response: medetomidine, urethane and isoflurane. The extent of the area activated and the stimulus selectivity depended on the type of anaesthetic. We discuss these results in the context of what is known about the locus of action of the anaesthetics, and reports of neural activity measured in electrophysiological experiments.

Fig. 1: Oscillograms (top row), spectrograms (middle row) and modulation spectra (bottom row) showing the spectral and temporal features found in the experimental auditory stimuli. The figure shows an example of conspecific song before (CON) and after spectral (CON-sf) and temporal filtering (CON-tf). The modulation spectrum (see online edition for colour figure) quantifies the spectrotemporal structure that is present in the sound (see Material and methods). ωx = spectral modulations, ωt = temporal modulations.

Mentions: The synthetic songs were obtained by low-pass filtering the natural songs' temporal or spectral modulations. This filtering operation was performed in the space of the modulation spectrum and should not be confused with more typical frequency-filtering operations. The modulation spectrum is obtained by computing the 2D power spectrum of a time–frequency representation of the sound, in our case the log of the spectrogram (Singh & Theunissen, 2003). The modulation spectrum of a particular song is shown in Fig. 1 (bottom row, left panel). The x-axis represents the temporal amplitude-modulation frequencies in the narrowband signals obtained by decomposing the sound into different frequency bands, as performed in a spectrogram. The y-axis represents the spectral modulations of the same amplitude envelopes but across frequency bands, in units of cycles/kHz. The colour of the modulation spectrum (see online edition) codes the energy of the modulations as a function of joint temporal and spectral modulation frequency. The logarithm of the modulation spectrum is used to disentangle multiplicative spectral or temporal modulations into separate terms. For example, in speech sounds, the spectral modulations that constitute the formants in vowels (timbre) separate from those that constitute the pitch of the voice. For natural sounds, the modulation spectrum follows a power-law relationship, with most energy concentrated at low frequencies. For zebra finch (as well as other animal) vocalizations, most of the temporal modulations in the envelope are found below 25 Hz. Most of the energy in the spectral modulations of animal vocalizations is found below 2.5 cycles/kHz (Singh & Theunissen, 2003; Cohen et al., 2006). Finally, as mentioned in the Introduction, the modulation spectrum of animal vocalizations, including human speech, shows a degree of inseparability: vocalizations are made of short sounds with little spectral structure but fast temporal changes (found along the x-axis at intermediate to high temporal frequencies), and slow sounds with rich spectral structure (found along the y-axis at intermediate to high spectral frequencies).
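
To make the two operations above concrete, here is a minimal Python sketch (NumPy/SciPy). The function names, window parameters and cutoff values are illustrative assumptions rather than the authors' code, and resynthesizing an audible waveform from a filtered log spectrogram would additionally require a spectrogram-inversion step (e.g. an iterative phase-recovery method) that is omitted here.

    import numpy as np
    from scipy.signal import stft

    def log_spectrogram(waveform, fs, nperseg=256, noverlap=192, floor_db=-80.0):
        """Short-time Fourier transform -> log-amplitude spectrogram (dB)."""
        freqs, times, Z = stft(waveform, fs=fs, nperseg=nperseg, noverlap=noverlap)
        logS = 20.0 * np.log10(np.abs(Z) + 1e-12)
        logS = np.maximum(logS, logS.max() + floor_db)  # clip the noise floor
        return freqs, times, logS

    def modulation_spectrum(logS, freqs, times):
        """2D power spectrum of the mean-subtracted log spectrogram.

        Row axis: spectral modulations (omega_x, cycles/kHz).
        Column axis: temporal modulations (omega_t, Hz).
        """
        M = np.fft.fftshift(np.fft.fft2(logS - logS.mean()))
        power = np.abs(M) ** 2
        df = freqs[1] - freqs[0]  # Hz between spectrogram frequency bands
        dt = times[1] - times[0]  # s between spectrogram time frames
        wx = np.fft.fftshift(np.fft.fftfreq(logS.shape[0], d=df)) * 1e3  # cycles/kHz
        wt = np.fft.fftshift(np.fft.fftfreq(logS.shape[1], d=dt))        # Hz
        return wx, wt, power

    def modulation_lowpass(logS, freqs, times, wt_cut=np.inf, wx_cut=np.inf):
        """Low-pass filter the log spectrogram in the modulation domain.

        Zeroes all modulation components above the given cutoffs and
        inverts the 2D FFT, returning a filtered log spectrogram.
        """
        df = freqs[1] - freqs[0]
        dt = times[1] - times[0]
        wx = np.fft.fftfreq(logS.shape[0], d=df) * 1e3  # cycles/kHz
        wt = np.fft.fftfreq(logS.shape[1], d=dt)        # Hz
        mask = (np.abs(wx)[:, None] <= wx_cut) & (np.abs(wt)[None, :] <= wt_cut)
        return np.real(np.fft.ifft2(np.fft.fft2(logS) * mask))

For instance, a call such as modulation_lowpass(logS, freqs, times, wx_cut=0.5) would strip most spectral structure while leaving temporal modulations intact, in the spirit of the spectrally filtered stimulus (CON-sf), and a low wt_cut would do the converse (cf. CON-tf). The specific cutoffs used for the experimental stimuli are not given in this excerpt, so the values here are placeholders.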

