The processing of facial identity and expression is interactive, but dependent on task and experience.

Yankouskaya A, Humphreys GW, Rotshtein P - Front Hum Neurosci (2014)

Bottom Line: We conclude that formal methods of testing the relations between processes show that the processing of facial identity and expression interacts, and hence these processes are not fully independent. We further demonstrate that the architecture of these relations depends on experience, with experience leading to a higher degree of inter-dependence in the processing of identity and expression. We propose that this change occurs because integrative processing is more efficient than parallel processing.

View Article: PubMed Central - PubMed

Affiliation: Cognitive Neuropsychology Centre, Department of Experimental Psychology, University of Oxford, Oxford, UK.

ABSTRACT
Facial identity and emotional expression are two important sources of information for daily social interaction. However, the link between these two aspects of face processing has been the focus of an unresolved debate for the past three decades. Three views have been advocated: (1) separate and parallel processing of identity and emotional expression signals derived from faces; (2) asymmetric processing, with the computation of emotion in faces depending on facial identity coding but not vice versa; and (3) integrated processing of facial identity and emotion. We present studies with healthy participants that primarily apply methods from mathematical psychology to formally test the relations between the processing of facial identity and emotion. Specifically, we focus on the "Garner" paradigm, the composite face effect, and divided attention tasks. We further ask whether the architecture of face-related processes is fixed or flexible and whether (and how) it can be shaped by experience. We conclude that formal methods of testing the relations between processes show that the processing of facial identity and expression interacts, and hence these processes are not fully independent. We further demonstrate that the architecture of these relations depends on experience, with experience leading to a higher degree of inter-dependence in the processing of identity and expression. We propose that this change occurs because integrative processing is more efficient than parallel processing. Finally, we argue that the dynamic aspects of face processing need to be incorporated into theories in this field.



Figure 1: An example of the stimuli in Yankouskaya et al. (2012). IE—a face containing both the target identity and the target emotional expression; I—a face containing the target identity but not the target expression; E—a face containing the target emotional expression but not the target identity; NT1–NT3—faces containing neither the target identity nor the target emotion. In this study we used faces from the NimStim database, but because of publication restrictions on faces from that database, we present here other faces (taken from Ekman, 1993) as examples only.

Mentions: The Race Model and the capacity measure have been used in tests of independence vs. coactivation in the processing of facial identity and emotional expression. Yankouskaya et al. (2012) employed a divided attention task in which participants had to detect target identities and target emotional expressions from photographs of a set of target faces. Three of these photographs contained targets: stimulus 1 had both the target identity and the target emotion (i.e., a redundant target); stimulus 2 contained the target identity and a non-target emotional expression; stimulus 3 contained the target emotional expression and a non-target identity (Figure 1). The three non-target faces were photographs of three different people, expressing emotions different from those in the target faces. Identity, gender and emotional expression information were varied across these studies.
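To make the logic of the Race Model test concrete, the sketch below (in Python, assuming NumPy is available) illustrates how Miller's (1982) race model inequality can be checked in a redundant-target design such as this one: for every time t, independent parallel processing predicts P(RT <= t | redundant target) <= P(RT <= t | identity-only target) + P(RT <= t | expression-only target), and violations of this bound are taken as evidence of coactivation. The reaction times, sample sizes, and variable names here are hypothetical illustrations, not the authors' data or analysis code.

# Hedged sketch: checking Miller's (1982) race model inequality on hypothetical
# reaction-time (RT) data from a redundant-target divided attention task.
# The simulated RTs below are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical RTs (ms) for each target condition of a Yankouskaya et al. (2012)-style design
rt_redundant = rng.normal(480, 60, 200)   # IE: target identity + target expression
rt_identity = rng.normal(540, 70, 200)    # I: target identity only
rt_expression = rng.normal(550, 70, 200)  # E: target expression only

def ecdf(rts, t):
    # Empirical cumulative probability P(RT <= t)
    return np.mean(rts <= t)

# Race model (independent parallel processing) bound, for every probe time t:
#   P(RT_IE <= t) <= P(RT_I <= t) + P(RT_E <= t)
# Times at which the redundant-target CDF exceeds this bound indicate coactivation.
probe_times = np.arange(300, 701, 10)
violations = [
    t for t in probe_times
    if ecdf(rt_redundant, t) > ecdf(rt_identity, t) + ecdf(rt_expression, t)
]
print("Race model inequality violated at t =", violations)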
