Sound frequency affects speech emotion perception: results from congenital amusia.

Lolli SL, Lewenstein AD, Basurto J, Winnik S, Loui P - Front Psychol (2015)

Bottom Line: Results showed a significant correlation between pitch-discrimination threshold and emotion identification accuracy for low-pass filtered speech, with amusics (defined here as those with a pitch discrimination threshold >16 Hz) performing worse than controls. No significant correlation was found between pitch discrimination and emotion identification accuracy for high-pass filtered speech. Results from these experiments suggest an influence of low-frequency information in identifying emotional content of speech.

View Article: PubMed Central - PubMed

Affiliation: Department of Psychology, Program in Neuroscience and Behavior, Wesleyan University, Middletown, CT, USA.

ABSTRACT
Congenital amusics, or "tone-deaf" individuals, show difficulty in perceiving and producing small pitch differences. While amusia has marked effects on music perception, its impact on speech perception is less clear. Here we test the hypothesis that individual differences in pitch perception affect judgment of emotion in speech, by applying low-pass filters to spoken statements of emotional speech. A norming study was first conducted on Mechanical Turk to ensure that the intended emotions from the Macquarie Battery for Evaluation of Prosody were reliably identifiable by US English speakers. The most reliably identified emotional speech samples were used in Experiment 1, in which subjects performed a psychophysical pitch discrimination task and an emotion identification task under low-pass and unfiltered speech conditions. Results showed a significant correlation between pitch-discrimination threshold and emotion identification accuracy for low-pass filtered speech, with amusics (defined here as those with a pitch discrimination threshold >16 Hz) performing worse than controls. This relationship with pitch discrimination was not seen in unfiltered speech conditions. Given the dissociation between low-pass filtered and unfiltered speech conditions, we inferred that amusics may be compensating for poorer pitch perception by using speech cues that are filtered out in this manipulation. To assess this potential compensation, Experiment 2 was conducted using high-pass filtered speech samples intended to isolate non-pitch cues. No significant correlation was found between pitch discrimination and emotion identification accuracy for high-pass filtered speech. Results from these experiments suggest an influence of low-frequency information in identifying emotional content of speech.
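The core stimulus manipulation is low-pass filtering: removing spectral energy above a cutoff so that mainly low-frequency (pitch-carrying) information remains. The abstract does not specify the filter type or cutoff the authors used, so the following is only an illustrative sketch of the idea, using a crude FFT brick-wall filter and a hypothetical 500 Hz cutoff:

```python
import numpy as np

def lowpass_fft(signal, sample_rate, cutoff_hz):
    """Crude brick-wall low-pass: zero all spectral components above cutoff_hz.

    Illustrative only; the study's actual filter design and cutoff
    are not given in this abstract.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    spectrum[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

# Toy signal: a 100 Hz tone (kept) plus a 2000 Hz tone (removed), at 8 kHz.
sr = 8000
t = np.arange(sr) / sr
mixed = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 2000 * t)
low_only = lowpass_fft(mixed, sr, cutoff_hz=500)
```

A real speech pipeline would more likely use a Butterworth or similar IIR filter to avoid the ringing a brick-wall filter introduces, but the brick-wall version makes the frequency-domain intent explicit.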

No MeSH data available.


Related in: MedlinePlus

Figure 3: The relationship between log pitch discrimination threshold and emotional identification accuracy (A) in the low-pass condition and (B) in the unfiltered speech condition. Red squares: amusics; blue diamonds: controls. Dashed line indicates chance performance. (C) Accuracy in emotional identification in amusics and control subjects. **p < 0.01.

Mentions: Log pitch discrimination threshold was significantly correlated with emotional identification accuracy in the low-pass filtered condition [r(38) = –0.38, p = 0.015; Figure 3A] but not in the unfiltered speech condition [r(38) = 0.04, n.s.; Figure 3B].
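The reported statistic r(38) is a Pearson correlation with n − 2 degrees of freedom, implying 40 participants per analysis. A minimal sketch of the computation, using made-up illustrative data (not the study's measurements):

```python
import numpy as np

# Synthetic data standing in for 40 participants; NOT the study's data.
rng = np.random.default_rng(0)
log_threshold = rng.normal(size=40)  # log pitch-discrimination thresholds
# Build in a noisy negative relationship, mimicking the reported direction.
accuracy = -0.4 * log_threshold + rng.normal(scale=0.9, size=40)

r = np.corrcoef(log_threshold, accuracy)[0, 1]  # Pearson correlation coefficient
df = len(log_threshold) - 2                     # degrees of freedom: n - 2 = 38
```

Obtaining the p-value would additionally require a t-test on r with these degrees of freedom (e.g. via scipy.stats.pearsonr).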

