Common cues to emotion in the dynamic facial expressions of speech and song.

Livingstone SR, Thompson WF, Wanderley MM, Palmer C - Q J Exp Psychol (Hove) (2014)

Bottom Line: In three experiments, we compared moving facial expressions in speech and song. Experiment 3 compared emotional identification in voice-only, face-only, and face-and-voice recordings. Collectively, these findings highlight broad commonalities in the facial cues to emotion in speech and song, yet reveal differences in perception and acoustic-motor production.


Affiliation: Department of Psychology, McGill University, Montreal, QC, Canada H3A 1B1.

ABSTRACT
Speech and song are universal forms of vocalization that may share aspects of emotional expression. Research has focused on parallels in acoustic features, overlooking facial cues to emotion. In three experiments, we compared moving facial expressions in speech and song. In Experiment 1, vocalists spoke and sang statements with each of five emotions. Vocalists exhibited emotion-dependent movements of the eyebrows and lip corners that transcended speech-song differences. Vocalists' jaw movements were coupled to their acoustic intensity, exhibiting differences across emotion and speech-song. Vocalists' emotional movements extended beyond vocal sound to include large sustained expressions, suggesting a communicative function. In Experiment 2, viewers judged silent videos of vocalists' facial expressions prior to, during, and following vocalization. Emotional intentions were identified accurately for movements during and after vocalization, suggesting that these movements support the acoustic message. Experiment 3 compared emotional identification in voice-only, face-only, and face-and-voice recordings. Emotions in voice-only song were identified poorly, yet were identified accurately in all other conditions, confirming that facial expressions conveyed emotion more accurately than the voice in song, whereas face and voice were equivalent in speech. Collectively, these findings highlight broad commonalities in the facial cues to emotion in speech and song, yet reveal differences in perception and acoustic-motor production.




Figure 5: Mean unbiased hit rates by emotion and epoch in Experiment 2 for Speech and Song. Error bars denote the standard error of the means.

Mentions: Participants' mean unbiased hit rates are shown in Figure 5. A three-way ANOVA by channel (2), emotion (3), and epoch (3) was conducted on participants' hit rate scores. No effect of channel was found, confirming that speech and song were identified with comparable recognition accuracy. A significant main effect of emotion was found, F(2, 30) = 49.3, p < .001, ηp² = .77. Post hoc comparisons (Tukey's honestly significant difference, HSD = .08, α = .05) confirmed that happy, M = .88, 95% confidence interval, CI [.83, .92], was identified significantly more accurately than sad, M = .74, 95% CI [.70, .81], and that both emotions were identified more accurately than neutral, M = .65, 95% CI [.57, .74]. A main effect of epoch was also found, F(2, 30) = 40.96, p < .001, ηp² = .73. Post hoc comparisons (Tukey's HSD = .05, α = .05) confirmed that emotions in the prevocal epoch, M = .68, 95% CI [.60, .76], were identified significantly less accurately than those in the vocalization epoch, M = .80, 95% CI [.74, .86], and the postvocal epoch, M = .79, 95% CI [.74, .84], supporting our hypothesis that emotions in postvocal movements would be identified at or near the accuracy for movements during vocalization, and above that of prevocal movements.
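
The partial eta squared values above follow directly from each F statistic and its degrees of freedom, and the unbiased hit rate is a simple ratio once hit and response counts have been tabulated. The Python sketch below illustrates both computations; it assumes the standard Wagenaar and van der Heijden (1993) definition of the unbiased hit rate, and the example hit and response counts are hypothetical rather than taken from the study.

    def partial_eta_squared(f_value, df_effect, df_error):
        # Recover partial eta squared from a reported F statistic:
        # eta_p^2 = (F * df_effect) / (F * df_effect + df_error)
        return (f_value * df_effect) / (f_value * df_effect + df_error)

    def unbiased_hit_rate(hits, n_stimuli, n_responses):
        # Unbiased hit rate (Hu): squared correct identifications divided by
        # (stimuli presented in the category * responses using that category).
        return hits ** 2 / (n_stimuli * n_responses)

    # Main effect of emotion, F(2, 30) = 49.3
    print(round(partial_eta_squared(49.3, 2, 30), 2))   # 0.77
    # Main effect of epoch, F(2, 30) = 40.96
    print(round(partial_eta_squared(40.96, 2, 30), 2))  # 0.73

    # Hypothetical counts: 18 of 20 "happy" trials labelled "happy",
    # with "happy" used as a response 22 times overall.
    print(round(unbiased_hit_rate(18, 20, 22), 2))      # 0.74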

