Introducing RISC: A New Video Inventory for Testing Social Perception.

Rothermich K, Pell MD - PLoS ONE (2015)

Bottom Line: Stimuli carefully manipulated the social relationship between communication partners (e.g., boss/employee, couple) and the availability of contextual cues (e.g., preceding conversations, physical objects) while controlling for major differences in the linguistic content of matched items. Here, we present initial perceptual validation data (N = 31) on a corpus of 920 items. Overall accuracy for identifying speaker intentions was above 80% correct, and our results show that both relationship type and verbal context influence the categorization of literal and nonliteral interactions, underscoring the importance of these factors in research on speaker intentions.


Affiliation: School of Communication Sciences and Disorders, McGill University, Montreal, Canada; Centre for Research on Brain, Language and Music (CRBLM), Montreal, Canada.

ABSTRACT
Indirect forms of speech, such as sarcasm, jocularity (joking), and 'white lies' told to spare another's feelings, occur frequently in daily life and are a problem for many clinical populations. During social interactions, information about the literal or nonliteral meaning of a speaker unfolds simultaneously in several communication channels (e.g., linguistic, facial, vocal, and body cues); however, to date many studies have employed unimodal stimuli, for example focusing only on the visual modality, limiting the generalizability of these results to everyday communication. Much of this research also neglects key factors for interpreting speaker intentions, such as verbal context and the relationship of social partners. Relational Inference in Social Communication (RISC) is a newly developed (English-language) database composed of short video vignettes depicting sincere, jocular, sarcastic, and white lie social exchanges between two people. Stimuli carefully manipulated the social relationship between communication partners (e.g., boss/employee, couple) and the availability of contextual cues (e.g., preceding conversations, physical objects) while controlling for major differences in the linguistic content of matched items. Here, we present initial perceptual validation data (N = 31) on a corpus of 920 items. Overall accuracy for identifying speaker intentions was above 80% correct, and our results show that both relationship type and verbal context influence the categorization of literal and nonliteral interactions, underscoring the importance of these factors in research on speaker intentions. We believe that RISC will prove highly constructive as a tool in future research on social cognition, interpersonal communication, and the interpretation of speaker intentions in both healthy adults and clinical populations.


Fig 2 (pone.0133902.g002). Accuracy in Hu scores: Hu scores for scenes with and without verbal context, displayed by intention type and relationship.

Mentions: Statistical analyses were performed solely on the unbiased accuracy scores (Hu scores) summarized in Table 2 and Fig 2. Calculation of Hu scores involved collapsing across literal positive and literal negative items, since the validation experiment required participants to choose among four intentions (literal, jocularity, sarcasm, white lie), and corrected for the unequal number of tokens representing each intended response category in the perceptual experiment (literal = 368 items; jocularity, sarcasm, and white lies = 184 items each). The unbiased recognition scores were submitted to a 4 × 4 × 2 ANOVA with repeated factors of INTENTION (literal, sarcasm, jocularity, white lie), RELATIONSHIP (couple, friends, colleagues, boss/employee), and CONTEXT (verbal context, no verbal context). The ANOVA revealed main effects of INTENTION (F(3,28) = 84.15, p < .001), RELATIONSHIP (F(3,28) = 16.90, p < .001), and CONTEXT (F(1,30) = 95.53, p < .001). Pairwise comparisons exploring the INTENTION main effect confirmed that participants were significantly better overall at recognizing literal (M = 0.90) than nonliteral utterances (sarcasm: M = 0.68; white lies: M = 0.70); jocularity (M = 0.86) was also recognized significantly better than sarcasm and white lies. In addition, speaker intentions were recognized significantly better overall when the social partners were friends (M = 0.82) compared to all other relationship types (which did not differ; couple: M = 0.77; colleagues: M = 0.77; boss/employee: M = 0.78), and scenes with verbal context promoted more accurate recognition of intentions (M = 0.82) than scenes without context (M = 0.75).
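For readers who want to reproduce the scoring logic, below is a minimal sketch of how unbiased hit rates (Hu scores; Wagner, 1993) are typically computed from a confusion matrix of intended versus chosen categories. The counts and variable names are illustrative assumptions, not values taken from the RISC validation data.

import numpy as np

# Sketch: unbiased hit rates (Hu scores) for one rater. Rows of the confusion
# matrix are the intended category of each stimulus, columns are the response
# chosen. The counts below are made up for illustration only.
categories = ["literal", "jocularity", "sarcasm", "white lie"]
confusion = np.array([
    [340,  10,   8,  10],   # 368 literal items (positive + negative collapsed)
    [ 12, 158,   8,   6],   # 184 jocularity items
    [ 20,  14, 130,  20],   # 184 sarcasm items
    [ 30,   8,  16, 130],   # 184 white lie items
], dtype=float)

hits = np.diag(confusion)                  # correct identifications per category
stimulus_totals = confusion.sum(axis=1)    # items presented per intended category
response_totals = confusion.sum(axis=0)    # times each response label was used

# Hu score = hits^2 / (stimuli in category * uses of that response), which
# corrects raw accuracy for response bias and for the unequal number of
# tokens per category (368 literal vs. 184 per nonliteral category).
hu_scores = hits ** 2 / (stimulus_totals * response_totals)
for name, hu in zip(categories, hu_scores):
    print(f"{name:>10}: Hu = {hu:.2f}")

The 4 × 4 × 2 design itself can be expressed as a standard within-subject ANOVA. The following sketch uses statsmodels' AnovaRM with placeholder data (one Hu score per participant per cell) and illustrative column names; it will not reproduce the paper's statistics, whose reported degrees of freedom suggest a multivariate test rather than the univariate F tests computed here.

import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Sketch of the 4 (intention) x 4 (relationship) x 2 (context) repeated-measures
# ANOVA on Hu scores. Data frame layout and scores are placeholders.
rng = np.random.default_rng(0)
rows = []
for subj in range(1, 32):                                  # N = 31 raters
    for intention in ["literal", "jocularity", "sarcasm", "white lie"]:
        for relationship in ["couple", "friends", "colleagues", "boss/employee"]:
            for context in ["verbal context", "no verbal context"]:
                rows.append({"subject": subj, "intention": intention,
                             "relationship": relationship, "context": context,
                             "hu": rng.uniform(0.6, 0.95)})  # placeholder Hu score
df = pd.DataFrame(rows)

res = AnovaRM(df, depvar="hu", subject="subject",
              within=["intention", "relationship", "context"]).fit()
print(res)   # F and p for each main effect and interaction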

