The Enfacement Illusion Is Not Affected by Negative Facial Expressions.

Beck B, Cardini F, Làdavas E, Bertini C - PLoS ONE (2015)

Bottom Line: Anger was chosen as an emotional control condition for fear because it is similarly negative but induces less somatosensory resonance, and requires additional knowledge (i.e., contextual information and social contingencies) to effectively guide behavior. Surprisingly, neither fearful nor angry expressions modulated the degree of enfacement relative to neutral expressions. This finding suggests that dynamic, multisensory processes of self-face identification operate independently of facial expression processing.

View Article: PubMed Central - PubMed

Affiliation: Centro studi e ricerche in Neuroscienze Cognitive (CNC), University of Bologna, Cesena, Italy; Department of Psychology, University of Bologna, Bologna, Italy.

ABSTRACT
Enfacement is an illusion wherein synchronous visual and tactile inputs update the mental representation of one's own face to assimilate another person's face. Emotional facial expressions, serving as communicative signals, may influence enfacement by increasing the observer's motivation to understand the mental state of the expresser. Fearful expressions, in particular, might increase enfacement because they are valuable for adaptive behavior and more strongly represented in somatosensory cortex than other emotions. In the present study, a face was seen being touched at the same time as the participant's own face. This face was either neutral, fearful, or angry. Anger was chosen as an emotional control condition for fear because it is similarly negative but induces less somatosensory resonance, and requires additional knowledge (i.e., contextual information and social contingencies) to effectively guide behavior. We hypothesized that seeing a fearful face (but not an angry one) would increase enfacement because of greater somatosensory resonance. Surprisingly, neither fearful nor angry expressions modulated the degree of enfacement relative to neutral expressions. Synchronous interpersonal visuo-tactile stimulation led to assimilation of the other's face, but this assimilation was not modulated by facial expression processing. This finding suggests that dynamic, multisensory processes of self-face identification operate independently of facial expression processing.

No MeSH data available.


Related in: MedlinePlus

pone.0136273.g002: Diagram of an experimental session. Participants first watched an other-to-self morph video and pressed a button to stop it as soon as it began to look more like their face than the other person’s face. This was followed by a period of synchronous or asynchronous IMS, and then a repetition of the morph video post-IMS. Morph videos were black-and-white, but IMS videos were shown in full color. The individuals shown in this figure have given written informed consent (as outlined in the PLOS consent form) to have their likenesses published.

Mentions: Participants completed one synchronous and one asynchronous IMS session, separated by at least 1 hour. Each participant saw only one type of facial expression in the IMS videos, either neutral, fearful, or angry. The order in which participants completed the IMS conditions was counterbalanced between participants within each group, as was the assignment of each other-face to either the synchronous or asynchronous IMS condition. A diagram of an experimental session is shown in Fig 2. In each session, participants first saw a morph video that changed from 100% other-face to 100% self-face. They were instructed to stop the video as soon as it began to look more like their own face than the other person’s face by pressing the “M” key. After this response, they watched a 120-s video of the other person continuously expressing either fear, anger, or a neutral expression while being stroked on the left cheek with a cotton swab. Concurrently, the participant was stroked on the right cheek (for specular correspondence) with a cotton swab either in synchrony or 1-s asynchrony with the touch in the video. Participants were instructed to sit still, to watch the face for the duration of the IMS video, and to attend to both the seen and the felt touch. Immediately after the IMS period, participants saw the same morph video as before and responded to it according to the same instructions. Finally, participants completed the illusion questionnaire at the end of each session. Questionnaire items were presented in a random order.
