The Enfacement Illusion Is Not Affected by Negative Facial Expressions.

Beck B, Cardini F, Làdavas E, Bertini C - PLoS ONE (2015)

Bottom Line: Anger was chosen as an emotional control condition for fear because it is similarly negative but induces less somatosensory resonance, and requires additional knowledge (i.e., contextual information and social contingencies) to effectively guide behavior. Surprisingly, neither fearful nor angry expressions modulated the degree of enfacement relative to neutral expressions. This finding suggests that dynamic, multisensory processes of self-face identification operate independently of facial expression processing.


Affiliation: Centro studi e ricerche in Neuroscienze Cognitive (CNC), University of Bologna, Cesena, Italy; Department of Psychology, University of Bologna, Bologna, Italy.

ABSTRACT
Enfacement is an illusion wherein synchronous visual and tactile inputs update the mental representation of one's own face to assimilate another person's face. Emotional facial expressions, serving as communicative signals, may influence enfacement by increasing the observer's motivation to understand the mental state of the expresser. Fearful expressions, in particular, might increase enfacement because they are valuable for adaptive behavior and more strongly represented in somatosensory cortex than other emotions. In the present study, a face was seen being touched at the same time as the participant's own face. This face was either neutral, fearful, or angry. Anger was chosen as an emotional control condition for fear because it is similarly negative but induces less somatosensory resonance, and requires additional knowledge (i.e., contextual information and social contingencies) to effectively guide behavior. We hypothesized that seeing a fearful face (but not an angry one) would increase enfacement because of greater somatosensory resonance. Surprisingly, neither fearful nor angry expressions modulated the degree of enfacement relative to neutral expressions. Synchronous interpersonal visuo-tactile stimulation led to assimilation of the other's face, but this assimilation was not modulated by facial expression processing. This finding suggests that dynamic, multisensory processes of self-face identification operate independently of facial expression processing.



pone.0136273.g001: Frames from the angry, fearful, and neutral videos shown during IMS. Each participant saw videos from only one of the three facial expression categories. The assignment of each video to either the synchronous or the asynchronous IMS session was counterbalanced between participants. The individuals shown in this figure have given written informed consent (as outlined in the PLOS consent form) to have their likenesses published.

Mentions: Prior to the testing session, a photograph of each participant's face with a neutral expression was taken with a digital camera. The photographs were converted to black-and-white, mirror-transposed, and overlaid with an oval template on a black background to remove hair and ears. Photographs of six adult female volunteers who did not participate in the experiment were obtained and processed in the same manner (except for the mirror-transposition) for use as the other-faces. Participant and other-face photographs were matched in luminance. These photographs were then blended in Abrosoft Fantamorph 4 to create dynamic morph videos that progressed from 100% other-face to 100% self-face. The morph videos were 100 s long and progressed at a rate of 1% change in face per second, resulting in a prolonged and subtle morph. Each participant’s face was morphed with two of the other-faces. Note that both the faces of the participants and the other-faces had neutral expressions in the morph videos. Additionally, a camcorder was used to record videos of the other-faces being stroked on the left cheek with a cotton swab. Each video was in full color and 120 s long, with strokes occurring approximately every 2 s. While being touched, each volunteer maintained a facial expression (fearful, angry, or neutral) for approximately 10 s, and this segment was then looped to produce the full 120 s video (Fig 1). To make the neutral videos appear more natural, and to ensure that all three video types showed some kind of facial movement, we created the neutral videos from looped segments that included eye blinks, mild head movements, and mild facial muscle contractions.
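
The authors built these stimuli with a digital camera and Abrosoft Fantamorph 4, not with code. Purely as an illustration, the hypothetical Python/Pillow sketch below mirrors the still-image preprocessing steps just described (grayscale conversion, mirror transposition of the self-face, oval mask on a black background, luminance matching) and the 1%-per-second morph schedule. The function names, image size, and mask geometry are assumptions for the sketch, not values reported in the paper.

```python
# Hypothetical sketch only: illustrates the described preprocessing, not the
# authors' actual pipeline (which used Abrosoft Fantamorph 4).
import numpy as np
from PIL import Image, ImageDraw, ImageOps


def preprocess_face(path, target_mean, mirror=False, size=(400, 520)):
    """Return a grayscale, oval-masked face image with matched mean luminance."""
    img = Image.open(path).convert("L").resize(size)   # black-and-white
    if mirror:                                         # mirror-transpose self-face only
        img = ImageOps.mirror(img)

    # Oval template on a black background removes hair and ears.
    mask = Image.new("L", size, 0)
    ImageDraw.Draw(mask).ellipse([30, 30, size[0] - 30, size[1] - 30], fill=255)
    face = Image.composite(img, Image.new("L", size, 0), mask)

    # Scale pixel values inside the oval so mean luminance matches the target.
    arr = np.asarray(face, dtype=float)
    inside = np.asarray(mask) > 0
    arr[inside] *= target_mean / arr[inside].mean()
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))


def self_face_weight(t_seconds):
    """Morph schedule from the text: a 100-s video changing 1% per second,
    so the self-face weight at time t is t/100 (other-face weight is 1 - t/100)."""
    return min(max(t_seconds, 0.0), 100.0) / 100.0
```

In this sketch, the same function handles both face types: self-face photographs would be processed with mirror=True, the other-faces with mirror=False, and all images would share a single target_mean so that luminance is matched across stimuli.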

