Contributions of electric and acoustic hearing to bimodal speech and music perception.

Crew JD, Galvin JJ, Landsberger DM, Fu QJ - PLoS ONE (2015)

Bottom Line: In general, CI+HA performance was slightly improved relative to the better ear alone (CI-only) for SRTs but not for MCI, with some subjects experiencing a decrease in bimodal MCI performance relative to the better ear alone (HA-only). Individual performance was highly variable, and the contribution of either device to bimodal perception was both subject- and task-dependent. The results suggest that individualized mapping of CIs and HAs may further improve bimodal speech and music perception.


Affiliation: Department of Biomedical Engineering, University of Southern California, Los Angeles, California, United States of America.

ABSTRACT
Cochlear implant (CI) users have difficulty understanding speech in noisy listening conditions and perceiving music. Aided residual acoustic hearing in the contralateral ear can mitigate these limitations. The present study examined contributions of electric and acoustic hearing to speech understanding in noise and melodic pitch perception. Data were collected with the CI only, the hearing aid (HA) only, and both devices together (CI+HA). Speech reception thresholds (SRTs) were adaptively measured for simple sentences in speech babble. Melodic contour identification (MCI) was measured with and without a masker instrument; the fundamental frequency of the masker was varied to be overlapping or non-overlapping with the target contour. Results showed that the CI contributes primarily to bimodal speech perception and that the HA contributes primarily to bimodal melodic pitch perception. In general, CI+HA performance was slightly improved relative to the better ear alone (CI-only) for SRTs but not for MCI, with some subjects experiencing a decrease in bimodal MCI performance relative to the better ear alone (HA-only). Individual performance was highly variable, and the contribution of either device to bimodal perception was both subject- and task-dependent. The results suggest that individualized mapping of CIs and HAs may further improve bimodal speech and music perception.


pone.0120279.g002: Spectrograms and electrodograms for the No Masker condition for 1- and 3-semitone spacings. The far left panel shows a schematic representation of HA and CI frequency ranges. The target contour is shown in black. The middle two panels show a spectral representation of the original stimuli (left) and simulated HA output (right). A steeply sloping hearing loss was simulated using AngelSim and is intended for illustrative purposes only. The far right panel shows an idealized electrodogram representing the electrical stimulation patterns for a CI. Electrodograms were simulated using default stimulation parameters for the Cochlear Freedom and Nucleus-24 devices: 900 Hz/channel stimulation rate, 8 maxima, frequency allocation Table 6, etc.

Mentions: Fig. 2 shows the “rising” contour with 1-semitone (top row) or 3-semitone (bottom row) spacing. The far left side of Fig. 2 illustrates the different contours within the HA and CI frequency ranges. The original spectrogram of the contours is shown just to the right; differences in the extent of F0 range can be seen between the 1- and 3-semitone spacing conditions. To the right of that is a spectrogram of the contours processed by a hearing loss simulation (AngelSim from www.tigerspeech.com). A steeply sloping hearing loss was simulated (0 dB HL at 125 Hz, 20 dB HL at 250 Hz, 60 dB HL at 500 Hz, 60 dB HL at 1000 Hz, 100 dB HL at 2000 Hz, 120 dB HL at 4000 Hz, and 120 dB HL at 8000 Hz) for illustrative purposes only, and was not intended to represent any subject’s audiogram. Differences in high-frequency harmonic information can be easily seen between the original and HA spectrograms. The far right of Fig. 2 shows electrodograms that represent the electrical stimulation patterns given the default stimulation parameters for the Cochlear Freedom and Nucleus 24 devices, which employ an 8-of-22 channel selection strategy. [The electrodograms for subjects S1, S6, S7, and S8 would be slightly different, as they use an Advanced Bionics device (16 channels; no channel selection).] The y-axis represents electrodes from the apex (bottom) to the base (top). Differences in the stimulation pattern across notes can be seen with the 3-semitone spacing (bottom); with the 1-semitone spacing (top), the changes in the stimulation pattern are more subtle.
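The relationship between semitone spacing and F0 range described above follows directly from equal-tempered tuning, in which each semitone multiplies frequency by 2^(1/12). The sketch below illustrates why a 3-semitone spacing spans a much wider F0 range than a 1-semitone spacing; the 220 Hz root and five-note contour length are illustrative assumptions, not values taken from the study.

```python
def contour_f0s(root_hz, semitone_spacing, n_notes=5):
    """Return the F0 (Hz) of each note in a rising contour.

    Successive notes are separated by a fixed number of semitones;
    each semitone corresponds to a frequency ratio of 2**(1/12).
    """
    return [root_hz * 2 ** (i * semitone_spacing / 12) for i in range(n_notes)]

# With a 1-semitone spacing the contour stays within a narrow F0 range;
# with a 3-semitone spacing the final note is a full octave above the root.
one_semi = contour_f0s(220.0, 1)    # ~220 .. ~277 Hz
three_semi = contour_f0s(220.0, 3)  # 220 .. 440 Hz
```

A narrower F0 range produces smaller shifts in the CI stimulation pattern across notes, which is consistent with the more subtle electrodogram changes seen for the 1-semitone spacing.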

