Competitive Dynamics in MSTd: A Mechanism for Robust Heading Perception Based on Optic Flow.

Layton OW, Fajen BR - PLoS Comput. Biol. (2016)

Bottom Line: Simulations of existing heading models that do not contain competitive dynamics yield heading estimates that are far more erratic and unstable than human judgments. Soft winner-take-all dynamics enhance units that code a heading direction consistent with the time history and suppress responses to transient changes to the optic flow field. Our findings support recurrent competitive temporal dynamics as a crucial mechanism underlying the robustness and stability of perception of heading.

View Article: PubMed Central - PubMed

Affiliation: Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, New York, United States of America.

ABSTRACT
Human heading perception based on optic flow is not only accurate but also remarkably robust and stable. These qualities are especially apparent when observers move through environments containing other moving objects, which introduce optic flow that is inconsistent with observer self-motion and therefore uninformative about heading direction. Moving objects may also occupy large portions of the visual field and occlude regions of the background optic flow that are most informative about heading perception. The fact that heading perception is biased by no more than a few degrees under such conditions attests to the robustness of the visual system and warrants further investigation. The aim of the present study was to investigate whether recurrent, competitive dynamics among MSTd neurons that serve to reduce uncertainty about heading over time offer a plausible mechanism for capturing the robustness of human heading perception. Simulations of existing heading models that do not contain competitive dynamics yield heading estimates that are far more erratic and unstable than human judgments. We present a dynamical model of primate visual areas V1, MT, and MSTd based on that of Layton, Mingolla, and Browning that is similar to the other models, except that the model includes recurrent interactions among model MSTd neurons. Competitive dynamics stabilize the model's heading estimate over time, even when a moving object crosses the future path. Soft winner-take-all dynamics enhance units that code a heading direction consistent with the time history and suppress responses to transient changes to the optic flow field. Our findings support recurrent competitive temporal dynamics as a crucial mechanism underlying the robustness and stability of perception of heading.
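The soft winner-take-all mechanism described in the abstract can be sketched with shunting recurrent dynamics, in which each unit excites itself through a faster-than-linear signal and inhibits its competitors. The minimal Python sketch below uses hypothetical parameter values (A, B, dt, and the input schedule are illustrative choices, not those of the published model); it shows how a unit receiving a consistent input history wins out over a competitor driven by a brief transient:

```python
def f(x):
    """Faster-than-linear recurrent signal: threshold at zero, then square."""
    return max(x, 0.0) ** 2

def step(x, inputs, A=1.0, B=1.0, dt=0.05):
    """One Euler step of shunting soft winner-take-all dynamics.

    Each unit decays (-A*x), is excited toward ceiling B by its input plus
    its own recurrent signal, and is inhibited by the other units' signals.
    """
    total = sum(f(xi) for xi in x)
    new = []
    for xi, Ii in zip(x, inputs):
        dx = -A * xi + (B - xi) * (Ii + f(xi)) - xi * (total - f(xi))
        new.append(xi + dt * dx)
    return new

# Three heading units: unit 0 gets sustained input consistent with the
# time history; unit 2 gets a strong but transient pulse.
x = [0.0, 0.0, 0.0]
for t in range(400):
    inputs = [1.0, 0.2, 2.0 if 100 <= t < 120 else 0.2]
    x = step(x, inputs)
print(max(range(3), key=lambda i: x[i]))  # unit 0 wins despite the transient
```

Because the recurrent signal is faster than linear, sustained evidence compounds over time, while the transiently boosted unit decays back toward baseline once its pulse ends.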

No MeSH data available.



Overview of the competitive dynamics model. The model consists of three main stages: change sensitivity (model Retina and LGN), motion detection (model V1 and MT+), and self-motion estimation (model MSTd). Units in model LGN respond to frame-to-frame pixel changes in the video input. The model generates an initial estimate of motion speed and direction in V1 simple cells, which receive input from spatially offset LGN units with a range of conduction delays. As with the motion sensitivity of cells in primates, the direction tuning of model simple cells is broad and coarse. The motion estimate is refined through spatial on-center/off-surround grouping of the motion signals by complex cells. Consistent motion signals are grouped over a larger spatial scale by units in MT+, which send feedback to inhibit complex cells that differ in direction selectivity. The activity distribution in MT+ is matched against a set of radial expansion and contraction templates with varying FoE/FoC positions, and the result of this match serves as the input to model MSTd. Units in MSTd tuned to radial expansion and contraction with different preferred singularity positions compete with one another to resolve the self-motion direction. Pooling stages in the model are depicted by ‘Σ’ and the curve indicates thresholding and squaring operations.
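The template-matching stage described in the caption can be illustrated with a toy example. The sketch below is hypothetical (the sample grid, candidate FoE positions, and dot-product matching score are illustrative assumptions, not the model's actual implementation): it compares a sampled flow field against unit radial-expansion templates centered on candidate FoE positions and selects the best-scoring candidate as the heading estimate.

```python
import math

def radial_template(foe, points):
    """Unit vectors radiating outward from a candidate FoE at each sample point."""
    vecs = []
    for (x, y) in points:
        dx, dy = x - foe[0], y - foe[1]
        n = math.hypot(dx, dy) or 1.0  # avoid dividing by zero at the FoE itself
        vecs.append((dx / n, dy / n))
    return vecs

def match_heading(flow, points, candidates):
    """Return the candidate FoE whose radial template best matches the flow."""
    best, best_score = None, -math.inf
    for foe in candidates:
        tmpl = radial_template(foe, points)
        score = sum(u * fx + v * fy for (u, v), (fx, fy) in zip(tmpl, flow))
        if score > best_score:
            best, best_score = foe, score
    return best

# Simulated expansion flow for self-motion with the true FoE at (0, 0)
points = [(x, y) for x in range(-3, 4) for y in range(-3, 4) if (x, y) != (0, 0)]
flow = radial_template((0.0, 0.0), points)
candidates = [(-2.0, 0.0), (0.0, 0.0), (2.0, 0.0)]
print(match_heading(flow, points, candidates))  # (0.0, 0.0)
```

Each candidate's score is the sum of dot products between the observed flow and that candidate's radial template, so the template whose singularity aligns with the true FoE scores highest.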

pcbi.1004942.g011: Overview of the competitive dynamics model.

Mentions: Fig 11 provides a schematic overview of the model. The model consists of three main stages: detecting changes in luminance (model Retina and LGN), detecting motion (model V1 and MT+), and estimating self-motion (model MSTd). The details of these stages are described in the following sections. Fig 12 shows the response of each model area to simulated self-motion in a static environment toward two frontoparallel planes.

