A Sensitive and Specific Neural Signature for Picture-Induced Negative Affect.

Chang LJ, Gianaros PJ, Manuck SB, Krishnan A, Wager TD - PLoS Biol. (2015)

Bottom Line: The signature comprised mesoscale patterns spanning multiple cortical and subcortical systems, with no single system necessary or sufficient for predicting experience. Furthermore, it was not reducible to activity in traditional "emotion-related" regions (e.g., amygdala, insula) or resting-state networks (e.g., "salience," "default mode"). Overall, this work identifies differentiable neural components of negative emotion and pain, providing a basis for new, brain-based taxonomies of affective processes.

View Article: PubMed Central - PubMed

Affiliation: Department of Psychology & Neuroscience, University of Colorado, Boulder, Colorado, United States of America.

ABSTRACT
Neuroimaging has identified many correlates of emotion but has not yet yielded brain representations predictive of the intensity of emotional experiences in individuals. We used machine learning to identify a sensitive and specific signature of emotional responses to aversive images. This signature predicted the intensity of negative emotion in individual participants in cross-validation (n = 121) and test (n = 61) samples (high versus low emotion: 93.5% accuracy). It was unresponsive to physical pain (emotion versus pain: 92% discriminative accuracy), demonstrating that it is not a representation of generalized arousal or salience. The signature comprised mesoscale patterns spanning multiple cortical and subcortical systems, with no single system necessary or sufficient for predicting experience. Furthermore, it was not reducible to activity in traditional "emotion-related" regions (e.g., amygdala, insula) or resting-state networks (e.g., "salience," "default mode"). Overall, this work identifies differentiable neural components of negative emotion and pain, providing a basis for new, brain-based taxonomies of affective processes.
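The paper's Methods specify the exact training pipeline; purely as an illustration of the general approach described above (cross-validated prediction of ratings from brain images, scored with a two-alternative forced choice), here is a minimal Python sketch on synthetic data. The dimensions, the ridge penalty, and all variable names are assumptions for the example, not the authors' pipeline.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)
n_subj, n_trials, n_vox = 30, 10, 500                  # toy sizes, not the study's
X = rng.normal(size=(n_subj * n_trials, n_vox))        # trial-level brain maps
w_true = rng.normal(size=n_vox)
y = X @ w_true + rng.normal(scale=5.0, size=len(X))    # simulated emotion ratings
groups = np.repeat(np.arange(n_subj), n_trials)        # participant labels

# Hold out whole participants in each fold, mirroring between-participant training.
pred = np.empty_like(y)
for train, test in GroupKFold(n_splits=5).split(X, y, groups):
    model = Ridge(alpha=10.0).fit(X[train], y[train])
    pred[test] = model.predict(X[test])

# Two-alternative forced choice: within each participant, does the model rank
# the highest-rated trial above the lowest-rated trial?
correct = 0
for s in range(n_subj):
    idx = np.flatnonzero(groups == s)
    hi, lo = idx[np.argmax(y[idx])], idx[np.argmin(y[idx])]
    correct += int(pred[hi] > pred[lo])
print(f"forced-choice accuracy: {correct / n_subj:.2f}")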

No MeSH data available.


pbio.1002180.g003: Within-participant emotion prediction. This figure depicts results from our within-participant analysis, in which the PINES was retrained separately for each participant to predict ratings of individual photos. Panel A shows the voxels in the weight map that are consistently different from zero across participants (one-sample t test, thresholded at p < 0.001, uncorrected). Panel B shows a histogram of standardized emotion predictions (correlations) for each participant; the dotted red line marks the average cross-validated PINES correlation for predicting each photo's rating. Panel C depicts how well each participant's ratings were predicted by the PINES (y-axis) versus by an idiographically trained, cross-validated map using that participant's own brain data (x-axis). Each point reflects one participant, and the dotted red line is the identity line; any point above it indicates that the participant was better fit by the PINES than by their own weight map.
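Panel A's consistency map is a voxelwise group test on the individually trained weight maps. A minimal Python sketch of that step, assuming the per-participant weights have been stacked into a participants-by-voxels array (the array shape and names here are illustrative, not taken from the study):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
weights = rng.normal(size=(121, 500))   # toy stack: 121 participants x 500 voxels

# One-sample t test against zero at every voxel, across participants.
t_vals, p_vals = stats.ttest_1samp(weights, popmean=0.0, axis=0)

# Threshold at p < 0.001, uncorrected, as in the figure.
mask = p_vals < 0.001
print(f"{mask.sum()} of {weights.shape[1]} voxels pass p < 0.001 (uncorrected)")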

Mentions: In addition, it is important to assess the degree of individual variability in the spatial pattern of the PINES. It is possible that some brain regions important for affect are highly variable across participants in both interparticipant spatial registration and functional topography. Therefore, in this analysis, we examined the performance of patterns trained on individual participants' data. Overall, the individualized predictive maps were able to predict affect ratings on individual trials (mean cross-validated r = 0.54 ± 0.02). Interestingly, the cross-validated PINES performed significantly better than the within-subject patterns (mean trial-by-trial r = 0.66 ± 0.01), t(120) = 6.28, p < 0.001 (Fig 3C). The relatively high accuracy of the PINES can be attributed to the larger amount of between-participant than within-participant trial data. The spatial topography of the average within-participant predictive map was similar to that of the PINES (spatial correlation r = 0.37), though the peaks of the most predictive regions were more spatially diffuse (see Fig 3A, S1 Fig). No individual participant's weight map was more spatially similar to the PINES than the group mean was (average r = 0.11 ± 0.01), suggesting that the individualized maps were much noisier than the PINES. The tradeoff between using the group (PINES) to regularize predictions and relying on the individual alone reflects the classic bias-variance tradeoff fundamental throughout statistics: introducing some bias toward the group can reduce variance in estimation, improving estimates and predictions. This is the general principle underlying empirical Bayes estimation, which is widely used throughout the statistical sciences.
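The bias-variance argument in the final sentences can be made concrete with a toy empirical Bayes shrinkage estimator, which pulls noisy per-participant estimates toward the group mean. All of the numbers below are invented for illustration; nothing here is drawn from the study's data.

import numpy as np

rng = np.random.default_rng(0)
n_subj = 121
true_vals = rng.normal(loc=0.6, scale=0.1, size=n_subj)  # each participant's true effect
noisy = true_vals + rng.normal(scale=0.3, size=n_subj)   # noisy per-participant estimates

# Empirical Bayes: estimate the between-participant variance, then shrink each
# individual estimate toward the group mean in proportion to its noisiness.
sigma2 = 0.3 ** 2                                 # assumed known sampling variance
tau2 = max(noisy.var(ddof=1) - sigma2, 0.0)       # method-of-moments variance estimate
shrink = tau2 / (tau2 + sigma2)                   # weight kept on the individual estimate
eb = noisy.mean() + shrink * (noisy - noisy.mean())

def mse(est):
    return float(np.mean((est - true_vals) ** 2))

print(f"MSE, individual estimates: {mse(noisy):.4f}")
print(f"MSE, shrunken estimates:   {mse(eb):.4f}")  # typically much smaller

Biasing every participant's estimate toward the group lowers total error exactly as the passage describes, which is one way to see why the group-trained PINES can outperform each participant's own map when within-participant data are limited.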

