Population coding of affect across stimuli, modalities and individuals.

Chikazoe J, Lee DH, Kriegeskorte N, Anderson AK - Nat. Neurosci. (2014)

Bottom Line: This valence code was distinct from low-level physical and high-level object properties. Although ventral temporal and anterior insular cortices supported valence codes specific to vision and taste, both the medial and lateral orbitofrontal cortices (OFC) maintained a valence code independent of sensory origin. Furthermore, only the OFC code could classify experienced affect across participants.


Affiliation: Human Neuroscience Institute, Department of Human Development, College of Human Ecology, Cornell University, Ithaca, New York, USA.

ABSTRACT
It remains unclear how the brain represents external objective sensory events alongside our internal subjective impressions of them: affect. Representational mapping of population activity evoked by complex scenes and basic tastes in humans revealed a neural code supporting a continuous axis of pleasant-to-unpleasant valence. This valence code was distinct from low-level physical and high-level object properties. Although ventral temporal and anterior insular cortices supported valence codes specific to vision and taste, both the medial and lateral orbitofrontal cortices (OFC) maintained a valence code independent of sensory origin. Furthermore, only the OFC code could classify experienced affect across participants. The entire valence spectrum was represented as a collective pattern in regional neural activity, in both sensory-specific and abstract codes, whereby the subjective quality of affect can be objectively quantified across stimuli, modalities and people.

Figure 6: Cross-participant classification of items and affect. (a) Classification accuracies of cross-participant multivoxel patterns for specific items and subjective valence in the VTC (gray) and OFC (white). Each target item or valence was estimated from all other participants' representations in a leave-one-out procedure. Performance was calculated as the target's similarity to its estimate compared with all other trials in pairwise comparisons (50% chance). For item classification, t test (OFC: t15 = 5.7, P = 0.00008; VTC: t15 = 21.4, P = 2.4 × 10−12), paired t test (OFC vs. VTC: t15 = −15.9, P = 8.4 × 10−11). For valence classification, t test (OFC: t15 = 6.4, P = 0.00002; VTC: t15 = 2.0, P = 0.13), paired t test (OFC vs. VTC: t15 = 4.2, P = 0.0007). Bonferroni correction was applied based on the number of comparisons for each ROI (2 per ROI). t tests within a region were one-sided; paired t tests were two-sided. n = 16 participants. (b) Relationship between classification accuracy and valence distance in the OFC. Accuracy increased monotonically as experienced valence across trials became more clearly differentiated, for all conditions. ANOVA (visual: F(1.4, 20.3) = 37.4, P = 5.6 × 10−6; gustatory: F(1.3, 18.9) = 4.7, P = 0.033; visual × gustatory: F(1.2, 18.6) = 9.7, P = 0.004; gustatory × visual: F(1.4, 19.6) = 4.3, P = 0.04). Greenhouse-Geisser correction was applied because Mauchly's test revealed a violation of the assumption of sphericity. For visual and visual × gustatory, n = 16 participants; for gustatory and gustatory × visual, n = 15 participants. Error bars represent s.e.m. *** P < 0.001, ** P < 0.01.
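
For readers who want to see the shape of the panel (a) tests in code, here is a minimal sketch: one-sided one-sample t tests of each ROI's accuracy against 50% chance, a two-sided paired t test between ROIs, and a Bonferroni factor of 2 per ROI. The per-participant accuracy arrays are simulated stand-ins, not the study's data, and the variable names are my own.

```python
# Hedged sketch of the panel (a) statistics; accuracies are simulated
# stand-ins for the 16 participants' values, not the authors' data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ofc_acc = rng.normal(0.547, 0.032, size=16)  # hypothetical OFC accuracies
vtc_acc = rng.normal(0.801, 0.056, size=16)  # hypothetical VTC accuracies

BONFERRONI = 2  # two comparisons involve each ROI

# One-sided one-sample t tests against 50% chance, within each ROI
for name, acc in (("OFC", ofc_acc), ("VTC", vtc_acc)):
    res = stats.ttest_1samp(acc, popmean=0.5, alternative="greater")
    p_corr = min(res.pvalue * BONFERRONI, 1.0)
    print(f"{name}: t15 = {res.statistic:.1f}, corrected P = {p_corr:.2g}")

# Two-sided paired t test comparing the ROIs within participants
res = stats.ttest_rel(ofc_acc, vtc_acc)
p_corr = min(res.pvalue * BONFERRONI, 1.0)
print(f"OFC vs. VTC: t15 = {res.statistic:.1f}, corrected P = {p_corr:.2g}")
```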

Mentions: Lastly, we assessed whether valence in a specific individual corresponded to affect representations in others' brains. As previous work has demonstrated that the representational geometry of object categories in the VTC can be shared across participants35,36, we first examined whether item-level (i.e., by picture) classification was possible by comparing each participant's item-based representational similarity matrix to one estimated from all other participants in a leave-one-out procedure. We calculated classification performance for each target picture as the percentage of pairwise comparisons in which its representation was more similar to its estimate than to all other picture representations (50% chance; for details, see Online Methods and Supplementary Fig. 7). We found that item-specific representations in the VTC were predicted very well by the other participants' representational map (80.1 ± 1.4% accuracy, t15 = 21.4, P = 2.4 × 10−12; Fig. 6a). Cross-participant classification accuracy was also statistically significant in the OFC (54.7 ± 0.8% accuracy, t15 = 5.7, P = 0.00008); however, it was substantially reduced compared with the VTC (t15 = 15.9, P = 8.4 × 10−11), suggesting that item-specific information is more robustly represented, and more translatable across participants, in the VTC than in the OFC.
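
As a concrete illustration of the leave-one-out procedure described above, the sketch below makes some simplifying assumptions: each participant contributes an item-by-item representational similarity matrix, the held-out participant's matrix is compared with the average of everyone else's, and accuracy is the fraction of pairwise comparisons the matching item wins. The array layout, the use of Pearson correlation as the match score, and all names are illustrative assumptions; the authors' actual pipeline is described in their Online Methods.

```python
# Minimal sketch of leave-one-out cross-participant item classification.
# `rdms` is assumed to be (participants x items x items) similarity
# matrices; this is an illustration, not the authors' implementation.
import numpy as np

def cross_participant_accuracy(rdms: np.ndarray) -> np.ndarray:
    """Return one pairwise classification accuracy per held-out participant."""
    n_subj, n_items, _ = rdms.shape
    accuracies = np.empty(n_subj)
    for s in range(n_subj):
        target = rdms[s]
        # Estimate each item's similarity profile from all other participants
        estimate = rdms[np.arange(n_subj) != s].mean(axis=0)
        correct = total = 0
        for i in range(n_items):
            # Correlate the target item's profile with its own estimate...
            match = np.corrcoef(target[i], estimate[i])[0, 1]
            for j in range(n_items):
                if j == i:
                    continue
                # ...and pairwise with every other item's estimate (50% chance)
                foil = np.corrcoef(target[i], estimate[j])[0, 1]
                correct += match > foil
                total += 1
        accuracies[s] = correct / total
    return accuracies

# Usage with random data (16 participants, 20 items): accuracy ~0.5,
# i.e., chance, since random patterns share no structure.
rng = np.random.default_rng(1)
print(cross_participant_accuracy(rng.standard_normal((16, 20, 20))).mean())
```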

