Where's Waldo? How perceptual, cognitive, and emotional brain processes cooperate during learning to categorize and find desired objects in a cluttered scene.

Chang HC, Grossberg S, Cao Y - Front Integr Neurosci (2014)

Bottom Line: What stream cognitive-emotional learning processes enable the focusing of motivated attention upon the invariant object categories of desired objects. A volitional signal can convert these primes into top-down activations that can, in turn, prime What stream view- and positionally-specific categories. These processes describe interactions among brain regions that include visual cortex, parietal cortex, inferotemporal cortex, prefrontal cortex (PFC), amygdala, basal ganglia (BG), and superior colliculus (SC).

View Article: PubMed Central - PubMed

Affiliation: Graduate Program in Cognitive and Neural Systems, Department of Mathematics, Center for Adaptive Systems, Center for Computational Neuroscience and Neural Technology, Boston University Boston, MA, USA.

ABSTRACT
The Where's Waldo problem concerns how individuals can rapidly learn to search a scene to detect, attend, recognize, and look at a valued target object in it. This article develops the ARTSCAN Search neural model to clarify how brain mechanisms across the What and Where cortical streams are coordinated to solve the Where's Waldo problem. The What stream learns positionally-invariant object representations, whereas the Where stream controls positionally-selective spatial and action representations. The model overcomes deficiencies of these computationally complementary properties through What and Where stream interactions. Where stream processes of spatial attention and predictive eye movement control modulate What stream processes whereby multiple view- and positionally-specific object categories are learned and associatively linked to view- and positionally-invariant object categories through bottom-up and attentive top-down interactions. Gain fields control the coordinate transformations that enable spatial attention and predictive eye movements to carry out this role. What stream cognitive-emotional learning processes enable the focusing of motivated attention upon the invariant object categories of desired objects. What stream cognitive names or motivational drives can prime a view- and positionally-invariant object category of a desired target object. A volitional signal can convert these primes into top-down activations that can, in turn, prime What stream view- and positionally-specific categories. When it also receives bottom-up activation from a target, such a positionally-specific category can cause an attentional shift in the Where stream to the positional representation of the target, and an eye movement can then be elicited to foveate it. These processes describe interactions among brain regions that include visual cortex, parietal cortex, inferotemporal cortex, prefrontal cortex (PFC), amygdala, basal ganglia (BG), and superior colliculus (SC).
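The abstract's priming mechanism can be illustrated with a toy sketch. This is not the ARTSCAN Search model itself (which is a system of neural differential equations); it is a minimal illustration, with invented threshold and gain values, of the qualitative idea that a cognitive or motivational prime stays modulatory (subthreshold) until a volitional signal converts it into a driving top-down activation that can combine with bottom-up input from the target.

```python
# Toy sketch of volitional gating of a top-down prime. All numbers are
# invented for illustration; this is NOT the authors' implementation.

THRESHOLD = 1.0  # firing threshold of a view-/positionally-specific category

def category_activation(prime, volition, bottom_up):
    """Combine a top-down prime, a volitional gate, and bottom-up input.

    Without volition, the prime remains modulatory (attenuated, subthreshold).
    With volition, it becomes a driving top-down signal that can sum with
    bottom-up activation from a matching target in the scene.
    """
    top_down = prime * (1.0 if volition else 0.3)  # volition amplifies the prime
    return top_down + bottom_up

def fires(prime, volition, bottom_up):
    """True when combined activation crosses threshold."""
    return category_activation(prime, volition, bottom_up) >= THRESHOLD

# A prime alone, without volition, is modulatory only: no firing.
assert not fires(prime=0.8, volition=False, bottom_up=0.0)

# Volitional gating plus a bottom-up match lets the category fire,
# which in the model can then trigger a Where stream attentional shift
# to the target's position and an eye movement to foveate it.
assert fires(prime=0.8, volition=True, bottom_up=0.4)
```

The design choice sketched here mirrors the abstract's sequence: prime, then volitional conversion of the prime into top-down activation, then coincidence with bottom-up input.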

No MeSH data available.



© Copyright Policy - open-access

Figure 15: Search reaction times under different search conditions. The search reaction times are computed statistically in the eye movement map for the bottom-up, cognitive primed, and motivational drive search mechanisms through a direct and an indirect route. Blue bars correspond to the direct route and red bars to the indirect route. The slowest RTs occur in the bottom-up pathway via the indirect route (375 ± 50 ms). The simulated reaction times of the cognitive primed pathway (335 ± 40 ms) and the motivational drive pathway (335 ± 45 ms) via the indirect route are similar. The RTs via the direct route are: bottom-up pathway, 200 ± 10 ms; cognitive primed pathway, 180 ± 5 ms; and motivational drive pathway, 180 ± 5 ms. See the text for further discussion.

Mentions: Figure 15 shows the search reaction times across search trials. For example, the cellphone object in Figure 13A is set as the Waldo target, and the search is simulated under each search pathway via either the direct or the indirect route until Waldo is foveated. The bottom-up search pathway has longer search reaction times than the top-down cognitive primed and motivational drive pathways because the bottom-up pathway requires more processing stage interactions (see Figure 5) to locate the target. In addition, the reaction time via the direct route is always shorter than via the indirect route because the indirect route involves more stage interactions to compute the saccadic eye movement. The search reaction times via the direct route are similar across the three search mechanisms because the eye movement is activated via the learned pathway from the selected view-specific category and the interactions between categorical layers are the same, whereas the search reaction times via the indirect route differ across targets due to the different surface contour strengths of the various objects.
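The RT ordering described above can be illustrated with a toy stage-count model. This is a hedged sketch, not the article's simulation: the assumption, made explicit here, is that each processing stage contributes a roughly fixed delay, so pathways and routes with more stage interactions yield longer reaction times. The stage counts and delay values below are invented for illustration only.

```python
import random

# Illustrative sketch (not from the article): model each search condition's
# RT as a sum of per-stage processing delays, so conditions with more stage
# interactions (bottom-up mechanism; indirect route) take longer.

STAGE_DELAY_MS = 40   # assumed mean delay per processing stage (invented)
NOISE_MS = 5          # assumed per-stage variability (invented)

# Hypothetical stage counts per (search mechanism, route); the ordering,
# not the exact numbers, is what matters for the qualitative point.
N_STAGES = {
    ("bottom-up", "indirect"): 9,
    ("cognitive", "indirect"): 8,
    ("motivational", "indirect"): 8,
    ("bottom-up", "direct"): 5,
    ("cognitive", "direct"): 4,
    ("motivational", "direct"): 4,
}

def simulated_rt(mechanism, route, rng=random):
    """Sum noisy per-stage delays for one simulated search trial (ms)."""
    stages = N_STAGES[(mechanism, route)]
    return sum(STAGE_DELAY_MS + rng.uniform(-NOISE_MS, NOISE_MS)
               for _ in range(stages))

# More stages -> longer RT, matching the ordering reported in Figure 15:
# bottom-up/indirect is slowest, primed direct routes are fastest.
assert simulated_rt("bottom-up", "indirect") > simulated_rt("cognitive", "direct")
```

Under this toy model, the direct-route RTs cluster together because the primed mechanisms share the same (smaller) stage count, echoing the article's explanation that the direct route runs through the same learned categorical interactions regardless of how the search was initiated.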

