The timing of visual object categorization.

Mack ML, Palmeri TJ - Front Psychol (2011)

Bottom Line: By another account, the relative timing reflects when perceptual features are available over time and the quality of perceptual evidence used to drive a perceptual decision process: fast simply means fast; it does not mean first. Understanding the short-term and long-term temporal dynamics of object categorization is key to developing computational models of visual object recognition. We briefly review a number of models of object categorization and outline how they explain the timing of visual object categorization at different levels of abstraction.

View Article: PubMed Central - PubMed

Affiliation: Department of Psychology, The University of Texas at Austin, Austin, TX, USA.

ABSTRACT
An object can be categorized at different levels of abstraction: as natural or man-made, animal or plant, bird or dog, or as a Northern Cardinal or Pyrrhuloxia. There has been growing interest in understanding how quickly categorizations at different levels are made and how the timing of those perceptual decisions changes with experience. We specifically contrast two perspectives on the timing of object categorization at different levels of abstraction. By one account, the relative timing implies a relative ordering of stages of visual processing that are tied to particular levels of object categorization: fast categorizations are fast because they precede other categorizations within the visual processing hierarchy. By another account, the relative timing reflects when perceptual features become available over time and the quality of the perceptual evidence used to drive a perceptual decision process: fast simply means fast; it does not mean first. Understanding the short-term and long-term temporal dynamics of object categorization is key to developing computational models of visual object recognition. We briefly review a number of models of object categorization and outline how they explain the timing of visual object categorization at different levels of abstraction.



© Copyright Policy - open-access

Figure 3: Illustration of three computational models of visual object categorization. (A) Riesenhuber and Poggio's (2000) model assumes a hierarchy of visual processing that begins with the encoding of low-level features and feature conjunctions, moves on to view-based and object representations, which are then mapped to task units representing different levels of categorization. (B) Cottrell and colleagues' model of object recognition proposes a hierarchy of visual processing consisting of Gabor filtering, principal components analysis (PCA), and a neural network mapping PCA representations onto known categories of objects (e.g., Joyce and Cottrell, 2004). (C) Nosofsky and Palmeri's (1997) exemplar-based random walk (EBRW) model proposes that the perceptual representation of an object activates similar stored object representations in memory, providing incremental and noisy evidence toward a categorization decision. A decision is made once the random walk accumulation of perceptual evidence reaches a threshold. In all three computational models, decisions about category or identity occur at a late decision stage of the visual processing hierarchy.
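The EBRW decision process in panel (C) can be sketched as a biased random walk between two response thresholds. This is a minimal illustrative sketch, not the published model: `similarity_a` and `similarity_b` are assumed summary values standing in for the summed similarity of the probe to each category's stored exemplars, which in the full EBRW model drive the retrieval race on every step.

```python
import random

def ebrw_decision(similarity_a, similarity_b, threshold=5, max_steps=10_000):
    """Sketch of an exemplar-based random-walk categorization decision.

    On each step the evidence counter moves toward category A with
    probability proportional to A's summed exemplar similarity, and
    toward category B otherwise. A response is made when the counter
    reaches +threshold (A) or -threshold (B); the number of steps taken
    serves as a proxy for response time.
    """
    p_a = similarity_a / (similarity_a + similarity_b)
    evidence = 0
    for step in range(1, max_steps + 1):
        evidence += 1 if random.random() < p_a else -1
        if evidence >= threshold:
            return "A", step
        if evidence <= -threshold:
            return "B", step
    return None, max_steps  # no decision within the step budget
```

When the two similarities are closely matched the walk wanders and decisions are slow; when one category dominates, the walk is strongly biased and decisions are fast, which is how EBRW links similarity structure to categorization speed.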

Mentions: For example, one canonical model of visual object recognition (Riesenhuber and Poggio, 2000; Serre et al., 2007) assumes a hierarchy of visual processing that begins with low-level features and conjunctions of features, moves on to view-based and object representations, which are then mapped to known categories and identities of objects (Figure 3A). Basic-level categorization and subordinate-level identification are instantiated within the same output layer of the model. Similarly, Cottrell and colleagues (e.g., Joyce and Cottrell, 2004; Figure 3B) have proposed a hierarchy of visual processing that begins with Gabor filtering, goes through a stage of principal components analysis (PCA), and ends with a neural network mapping PCA representations onto known categories and identities of objects. Neither model has a basic-level categorization stage within the visual processing hierarchy.
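The filtering, dimensionality-reduction, and category-readout stages of such a hierarchy can be sketched as a few matrix operations. In this sketch, random linear projections stand in for real Gabor kernels and the category readout is untrained and randomly weighted, so it illustrates only the architecture of the pipeline, not a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def filter_bank(images, n_filters=8):
    """Stand-in for the Gabor filtering stage: project each flattened
    image onto a fixed bank of random linear filters (real Gabor
    kernels are omitted for brevity)."""
    filters = rng.standard_normal((images.shape[1], n_filters))
    return images @ filters

def pca(features, n_components=4):
    """PCA stage: project filter responses onto their top principal
    components, obtained from the SVD of the centered feature matrix."""
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

# Toy input: 20 flattened 6x6 "images".
images = rng.standard_normal((20, 36))
components = pca(filter_bank(images))

# Readout stage: a linear mapping from PCA space onto two
# (untrained) category units; the winning unit is the response.
readout_weights = rng.standard_normal((components.shape[1], 2))
labels = (components @ readout_weights).argmax(axis=1)
```

Note that, as in both models discussed above, categorization here happens only at the final readout; there is no dedicated basic-level stage inside the processing hierarchy itself.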

