Sensory transformations and the use of multiple reference frames for reach planning.

McGuire LM, Sabes PN - Nat. Neurosci. (2009)

Bottom Line: This model incorporates the patterns of gaze-dependent errors that we found in our human psychophysics experiment when the sensory signals available for reach planning were varied. These results challenge the widely held ideas that error patterns directly reflect the reference frame of the underlying neural representation and that it is preferable to use a single common reference frame for movement planning. Furthermore, the presence of multiple reference frames allows for optimal use of available sensory information and explains task-dependent reweighting of sensory signals.


Affiliation: W. M. Keck Center for Integrative Neuroscience, Department of Physiology, and the Neuroscience Graduate Program, University of California, San Francisco, California, USA.

ABSTRACT
The sensory signals that drive movement planning arrive in a variety of 'reference frames', and integrating or comparing them requires sensory transformations. We propose a model in which the statistical properties of sensory signals and their transformations determine how these signals are used. This model incorporates the patterns of gaze-dependent errors that we found in our human psychophysics experiment when the sensory signals available for reach planning were varied. These results challenge the widely held ideas that error patterns directly reflect the reference frame of the underlying neural representation and that it is preferable to use a single common reference frame for movement planning. We found that gaze-dependent error patterns, often cited as evidence for retinotopic reach planning, can be explained by a transformation bias and are not exclusively linked to retinotopic representations. Furthermore, the presence of multiple reference frames allows for optimal use of available sensory information and explains task-dependent reweighting of sensory signals.
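The paper's full model is specified in its Methods; as a minimal sketch of the statistical principle involved, two independent, unbiased Gaussian estimates of the same quantity are optimally combined by inverse-variance weighting (the notation below is illustrative, not the paper's own):

$$\hat{x} = w\,\hat{x}_{\mathrm{vis}} + (1-w)\,\hat{x}_{\mathrm{prop}},\qquad w = \frac{1/\sigma_{\mathrm{vis}}^{2}}{1/\sigma_{\mathrm{vis}}^{2} + 1/\sigma_{\mathrm{prop}}^{2}}.$$

A signal that must first be transformed into the planning frame incurs added noise, effectively replacing its $\sigma^{2}$ with $\sigma^{2} + \sigma_{T}^{2}$ (and, in the paper's account, a gaze-dependent bias), which lowers its weight. This is the sense in which the statistical properties of sensory transformations determine how the signals are used.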

Figure 7: Changes in sensory weighting with target modality. a) Mean angular error induced by artificial shifts in the visual feedback of the hand prior to movement onset. Data from Sober and Sabes (ref. 2). Error bars represent standard error. Model predictions use the INTEG readout, with parameters fit to our data. b) Relative weighting of visual vs. proprioceptive information about initial hand position in movement planning for reaches to VIS and PROP targets. Error bars represent standard deviation across subjects. Colored lines show model predictions for each readout scheme.

Mentions: The model also explains a very different empirical result, again without additional parameter fitting. Our lab has previously shown that visual information about initial hand location is weighted more heavily when reaching to visual targets (as in VIS/FB trials here) than when reaching to proprioceptive targets (as in PROP/FB trials here) (ref. 2). We proposed that this sensory reweighting was due to the cost (e.g., variability) incurred by sensory transformations (ref. 2). The present model makes this cost explicit, yielding quantitative predictions of the angular error that should result from artificial shifts in visual feedback. In both the empirical data and the INTEG readout predictions, visual feedback shifts had a weaker effect when reaching to proprioceptive targets than when reaching to visual targets (Fig. 7a). The effect was quantified as the overall weighting of visual versus proprioceptive feedback (Fig. 7b), which was much greater for VIS targets than for PROP targets. In the model, this reweighting was due to the tradeoff between the retinotopic and body-centered representations of the movement plan (Supplemental Fig. S8, online), as evidenced by the fact that neither the RET nor the BODY readout exhibited the effect. This result provides further support for the use of multiple representations in movement planning.
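The direction of the reweighting in Figure 7 follows from the inverse-variance sketch above. The Python snippet below is a hedged illustration only: the variance values are invented and the function names are ours, not the authors' fitted INTEG model. It simply charges the transformation cost to whichever cue is non-native to the planning frame:

```python
# Illustrative sketch only: minimum-variance (inverse-variance) cue combination
# in which transforming a signal between reference frames adds variance.
# All numbers are invented for illustration, not fit to the paper's data.

def inv_var_weight(var_a: float, var_b: float) -> float:
    """Weight given to cue A when optimally combining two independent,
    unbiased Gaussian cues with variances var_a and var_b."""
    return (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)

VAR_VIS = 1.0     # visual estimate of hand position, native (retinotopic) frame
VAR_PROP = 2.0    # proprioceptive estimate, native (body-centered) frame
VAR_XFORM = 1.5   # extra variance incurred by a between-frame transformation

# VIS target: plan in a retinotopic frame, so proprioception pays the
# transformation cost and vision is used natively.
w_vis_target = inv_var_weight(VAR_VIS, VAR_PROP + VAR_XFORM)

# PROP target: plan in a body-centered frame, so the visual hand signal
# must be transformed and pays the cost instead.
w_prop_target = inv_var_weight(VAR_VIS + VAR_XFORM, VAR_PROP)

print(f"visual weight, VIS target:  {w_vis_target:.2f}")   # ~0.78
print(f"visual weight, PROP target: {w_prop_target:.2f}")  # ~0.44
```

With these made-up numbers the visual weight falls from about 0.78 for a visual target to about 0.44 for a proprioceptive target, reproducing the qualitative pattern in Fig. 7b. Note that the paper's actual INTEG readout trades off retinotopic and body-centered plans rather than committing to a single frame, which is why neither the pure RET nor pure BODY readout captures the effect.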

