Decoding individual natural scene representations during perception and imagery.

Johnson MR, Johnson MK - Front Hum Neurosci (2014)

Bottom Line: We found that item-specific information was represented in multiple scene-selective areas: the occipital place area (OPA), parahippocampal place area (PPA), retrosplenial cortex (RSC), and a scene-selective portion of the precuneus/intraparietal sulcus region (PCu/IPS). These results support findings from previous decoding analyses for other types of visual information and/or brain areas during imagery or working memory, and extend them to the case of visual scenes (and scene-selective cortex). This suggests that although decodable scene-relevant activity occurs in the fusiform face area (FFA) during perception, FFA activity may not be a necessary (or even relevant) component of one's mental representation of visual scenes.


Affiliation: Department of Psychology, Yale University, New Haven, CT, USA.

ABSTRACT
We used a multi-voxel classification analysis of functional magnetic resonance imaging (fMRI) data to determine to what extent item-specific information about complex natural scenes is represented in several category-selective areas of human extrastriate visual cortex during visual perception and visual mental imagery. Participants in the scanner either viewed or were instructed to visualize previously memorized natural scene exemplars, and the neuroimaging data were subsequently subjected to a multi-voxel pattern analysis (MVPA) using a support vector machine (SVM) classifier. We found that item-specific information was represented in multiple scene-selective areas: the occipital place area (OPA), parahippocampal place area (PPA), retrosplenial cortex (RSC), and a scene-selective portion of the precuneus/intraparietal sulcus region (PCu/IPS). Furthermore, item-specific information from perceived scenes was re-instantiated during mental imagery of the same scenes. These results support findings from previous decoding analyses for other types of visual information and/or brain areas during imagery or working memory, and extend them to the case of visual scenes (and scene-selective cortex). Taken together, such findings support models suggesting that reflective mental processes are subserved by the re-instantiation of perceptual information in high-level visual cortex. We also examined activity in the fusiform face area (FFA) and found that it, too, contained significant item-specific scene information during perception, but not during mental imagery. This suggests that although decodable scene-relevant activity occurs in FFA during perception, FFA activity may not be a necessary (or even relevant) component of one's mental representation of visual scenes.
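As an illustration of the analysis pipeline the abstract describes, the sketch below trains a linear SVM to discriminate the multi-voxel patterns evoked by two scene items, with leave-one-run-out cross-validation scored by AUC (the chance level of 0.5 used in the paper's figures). The paper does not specify its software, so this is a minimal sketch assuming scikit-learn, with randomly generated stand-in data in place of real fMRI patterns.

```python
# Minimal MVPA sketch, assuming scikit-learn; this illustrates the general
# method (linear SVM, leave-one-run-out cross-validation, AUC scoring),
# not the authors' actual code. All data below are random stand-ins.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 48, 640
X = rng.standard_normal((n_trials, n_voxels))   # stand-in voxel patterns
y = np.tile([0, 1], n_trials // 2)              # two scene items, balanced
runs = np.repeat(np.arange(6), n_trials // 6)   # 6 scanner runs of 8 trials

# Hold out one run at a time, train on the rest, score on the held-out run.
aucs = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=runs):
    clf = SVC(kernel="linear")
    clf.fit(X[train_idx], y[train_idx])
    scores = clf.decision_function(X[test_idx])
    aucs.append(roc_auc_score(y[test_idx], scores))

print(f"Mean AUC: {np.mean(aucs):.3f} (chance = 0.5)")
```

With random stand-in data the mean AUC hovers around 0.5; item-specific information in a real ROI shows up as AUC reliably above that chance level.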




Figure 2: Classification across all scene areas. Classification accuracy for Experiments 1 and 2 using voxels from all scene-selective ROIs. Analyses used 640 voxels per participant (4 scene-selective regions × 2 hemispheres × 80 voxels per region). Results are shown for classifying between individual scene items during perception (left bars), classifying between scenes during mental imagery (middle bars), and re-instantiation of perceptual information during mental imagery (right bars). All were significantly above chance (AUC = 0.5) for both experiments. **p < 0.01, ***p < 0.001. Error bars represent standard error of the mean (s.e.m.). See text and Table 1 for full statistics.
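The asterisks in the caption mark group-level significance against the AUC = 0.5 chance level. The excerpt does not restate the authors' exact statistics; a one-sample t-test of per-participant AUCs against 0.5, sketched below with made-up values, is one standard way such a comparison is run.

```python
# Group-level test of classifier performance against chance (AUC = 0.5).
# A one-sample t-test across participants is one common approach; the
# AUC values below are hypothetical, for illustration only.
import numpy as np
from scipy import stats

participant_aucs = np.array([0.61, 0.58, 0.66, 0.55, 0.63,
                             0.59, 0.62, 0.57, 0.64, 0.60])
t, p = stats.ttest_1samp(participant_aucs, popmean=0.5)
print(f"t({len(participant_aucs) - 1}) = {t:.2f}, p = {p:.4f}")
```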

Mentions: For each participant, we selected the peak voxel from each cluster corresponding to the approximate anatomical location of these ROIs in prior group analyses, and focused on a 10 mm-radius sphere around that peak voxel for each ROI (examples of all ROIs for four representative participants are shown in Figure 1C). Within each spherical ROI, we then selected only the 80 most scene-selective voxels (approximately 20% of the 410 voxels in each 10 mm-radius sphere) for classifier analyses, in order to eliminate noise input from voxels that might contain white matter, empty space, or gray matter not strongly activated by scene stimuli (for one participant at one ROI, only 65 in-brain voxels fell within 10 mm of that ROI's peak voxel, so only those 65 voxels were used). The 80-voxel figure was initially chosen as an informed estimate of the number of “good” gray matter voxels that could be expected in each 10 mm-radius, 410-voxel sphere. Subsequent analyses, conducted after the main analyses reported below (which used the a priori figure of 80 voxels) were completed, compared the results of using 10, 20, 40, 80, 160, or 320 voxels per spherical ROI; classification performance effectively plateaued at around 80 voxels for most ROIs (see Supplementary Figure 1) and in some cases decreased at 160 or 320 voxels relative to 80.

Scene selectivity was assessed using the t-statistic for the Scene > Face contrast of the GLM analysis of the unsmoothed localizer data. For the classification analyses of individual category-selective ROIs, all of which were found bilaterally in all participants, the 80 voxels from each hemisphere were combined for classification, so a total of 160 voxels were used for each area. For the classification analyses across all scene areas shown in Figure 2 (see Results), voxels from both hemispheres and all four ROIs were fed into the classifier, for a total of (80 voxels) × (4 ROIs) × (2 hemispheres) = 640 voxels as input.
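The selection procedure above reduces to a few array operations: within each spherical ROI, rank voxels by their Scene > Face t-statistic, keep the top 80 (or all in-brain voxels if fewer, as happened once), and concatenate across ROIs and hemispheres. The sketch below, assuming NumPy and hypothetical stand-in data structures, illustrates that logic.

```python
# Sketch of the voxel-selection step described above: within each 10 mm
# sphere around an ROI peak, keep the 80 voxels with the highest
# Scene > Face localizer t-statistics, then concatenate across all ROIs
# and hemispheres. The t-map and masks here are random stand-ins.
import numpy as np

def select_scene_voxels(t_map, sphere_masks, n_keep=80):
    """Return flat voxel indices: the top-`n_keep` scene-selective voxels
    per spherical ROI (fewer if a sphere holds fewer in-brain voxels)."""
    selected = []
    for mask in sphere_masks:              # one boolean mask per ROI x hemisphere
        idx = np.flatnonzero(mask)         # voxel indices inside this sphere
        k = min(n_keep, idx.size)          # e.g., only 65 in-brain voxels once
        top = idx[np.argsort(t_map.ravel()[idx])[::-1][:k]]
        selected.append(top)
    return np.concatenate(selected)

# Stand-in data: a localizer t-map volume and 8 spherical ROI masks
# (4 scene-selective regions x 2 hemispheres, ~410 voxels each).
rng = np.random.default_rng(1)
t_map = rng.standard_normal((64, 64, 36))
sphere_masks = []
for _ in range(8):
    m = np.zeros(t_map.shape, dtype=bool)
    m.ravel()[rng.choice(t_map.size, size=410, replace=False)] = True
    sphere_masks.append(m)

voxels = select_scene_voxels(t_map, sphere_masks)
print(voxels.size)                         # -> 640 classifier input features
```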

