Sound frequency affects speech emotion perception: results from congenital amusia.

Lolli SL, Lewenstein AD, Basurto J, Winnik S, Loui P - Front Psychol (2015)

Bottom Line: Results showed a significant correlation between pitch-discrimination threshold and emotion identification accuracy for low-pass filtered speech, with amusics (defined here as those with a pitch discrimination threshold >16 Hz) performing worse than controls. No significant correlation was found between pitch discrimination and emotion identification accuracy for high-pass filtered speech. Results from these experiments suggest an influence of low frequency information in identifying emotional content of speech.

View Article: PubMed Central - PubMed

Affiliation: Department of Psychology, Program in Neuroscience and Behavior, Wesleyan University, Middletown, CT, USA.

ABSTRACT
Congenital amusics, or "tone-deaf" individuals, show difficulty in perceiving and producing small pitch differences. While amusia has marked effects on music perception, its impact on speech perception is less clear. Here we test the hypothesis that individual differences in pitch perception affect judgment of emotion in speech, by applying low-pass filters to spoken statements of emotional speech. A norming study was first conducted on Mechanical Turk to ensure that the intended emotions from the Macquarie Battery for Evaluation of Prosody were reliably identifiable by US English speakers. The most reliably identified emotional speech samples were used in Experiment 1, in which subjects performed a psychophysical pitch discrimination task, and an emotion identification task under low-pass and unfiltered speech conditions. Results showed a significant correlation between pitch-discrimination threshold and emotion identification accuracy for low-pass filtered speech, with amusics (defined here as those with a pitch discrimination threshold >16 Hz) performing worse than controls. This relationship with pitch discrimination was not seen in unfiltered speech conditions. Given the dissociation between low-pass filtered and unfiltered speech conditions, we inferred that amusics may be compensating for poorer pitch perception by using speech cues that are filtered out in this manipulation. To assess this potential compensation, Experiment 2 was conducted using high-pass filtered speech samples intended to isolate non-pitch cues. No significant correlation was found between pitch discrimination and emotion identification accuracy for high-pass filtered speech. Results from these experiments suggest an influence of low frequency information in identifying emotional content of speech.
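The filtering manipulation described in the abstract can be sketched with a simple single-pole low-pass filter. This is an illustrative stand-in only: the study's actual filter design and cutoff frequency are not given in this excerpt, so the 500 Hz cutoff and 16 kHz sample rate below are assumed values chosen to separate pitch-range energy from higher-frequency speech cues.

```python
import math

def lowpass_filter(samples, cutoff_hz, sample_rate):
    """First-order (single-pole) IIR low-pass filter, the classic RC
    recursion: y[n] = y[n-1] + alpha * (x[n] - y[n-1])."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = dt / (rc + dt)
    out, prev = [], 0.0
    for x in samples:
        prev += alpha * (x - prev)
        out.append(prev)
    return out

# Two synthetic test tones: 200 Hz (within the speech-pitch range)
# and 3000 Hz (standing in for higher-frequency speech cues).
fs = 16000
tone = lambda f: [math.sin(2 * math.pi * f * n / fs) for n in range(fs)]
rms = lambda s: (sum(x * x for x in s) / len(s)) ** 0.5

low_out = lowpass_filter(tone(200), cutoff_hz=500, sample_rate=fs)
high_out = lowpass_filter(tone(3000), cutoff_hz=500, sample_rate=fs)
# The 200 Hz tone passes nearly unchanged; the 3000 Hz tone is
# strongly attenuated, removing the higher-frequency cues.
```

A zero-phase, higher-order design (e.g., a Butterworth filter applied forward and backward) would give a sharper cutoff without phase distortion; the single-pole form is used here only to keep the sketch dependency-free.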

No MeSH data available.



Figure 1: Emotional identification results from Mechanical Turk listeners of MBEP speech samples.

Mentions: Results from the norming study are shown in Figure 1. Subjects performed well above chance levels in all emotional categories, confirming that American subjects were able to identify emotion in Australian-accented speech.

