Adaptive tuning functions arise from visual observation of past movement.

Howard IS, Franklin DW - Sci Rep (2016)

Bottom Line: Both the adaptation movement and contextual movement exhibited generalization beyond the training direction, with the visual contextual motion exhibiting much broader tuning. A second experiment demonstrated that this pattern was consistent with the results of an interference experiment where opposing force fields were associated with two separate visual movements. Overall, our study shows that visual contextual motion exhibits much broader (and shallower) tuning functions than previously seen for either passive or active movements, demonstrating that the tuning characteristics of past motion are highly dependent on their sensory modality.

Affiliation: Centre for Robotics and Neural Systems, School of Computing, Electronics and Mathematics, University of Plymouth, Plymouth, United Kingdom.

ABSTRACT
Visual observation of movement plays a key role in action. For example, tennis players have little time to react to the ball, but still need to prepare the appropriate stroke. Therefore, it might be useful to use visual information about the ball trajectory to recall a specific motor memory. Past visual observation of movement (as well as passive and active arm movement) affects the learning and recall of motor memories. Moreover, when passive or active, these past contextual movements exhibit generalization (or tuning) across movement directions. Here we extend this work, examining whether visual motion also exhibits similar generalization across movement directions and whether such generalization functions can explain patterns of interference. Both the adaptation movement and contextual movement exhibited generalization beyond the training direction, with the visual contextual motion exhibiting much broader tuning. A second experiment demonstrated that this pattern was consistent with the results of an interference experiment where opposing force fields were associated with two separate visual movements. Overall, our study shows that visual contextual motion exhibits much broader (and shallower) tuning functions than previously seen for either passive or active movements, demonstrating that the tuning characteristics of past motion are highly dependent on their sensory modality.

Figure 4: Interference tuning experimental results. (a) Mean kinematic error and SE per block across participants (solid line and shaded region, respectively) for the adaptation movements using contextual movements at ±45°. The light blue shading indicates when the curl force field is applied. Although the two force fields produce errors in opposite directions, the sign of errors on trials on which the CCW field was presented has been reversed so that all errors in the direction of the force field are shown as positive. (b) Corresponding mean and SE percentage of predictive force compensation and (c) contextual movement tuning curve expressed as mean and SE percentage compensation (blue). The mean and s.d. of the best-fit von Mises functions are plotted in red. (d–f) As (a–c) but using contextual movements at ±15°.
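
To make the tuning-curve fits in panels (c) and (f) concrete, the short Python sketch below shows one plausible way to fit a von Mises function to compensation-versus-angle data. The functional form, parameter names, and example data points are illustrative assumptions, not the authors' actual fitting procedure or results.

import numpy as np
from scipy.optimize import curve_fit

def von_mises_tuning(theta_deg, amplitude, kappa, mu_deg, offset):
    # Peak-normalized von Mises shape: equals amplitude + offset at theta == mu
    # and decays towards offset away from mu; a smaller concentration kappa
    # gives broader, shallower tuning.
    theta = np.deg2rad(theta_deg)
    mu = np.deg2rad(mu_deg)
    return amplitude * np.exp(kappa * (np.cos(theta - mu) - 1.0)) + offset

# Made-up example data: contextual probe angles (deg) and mean % compensation.
angles = np.array([-180.0, -135.0, -90.0, -45.0, 0.0, 45.0, 90.0, 135.0])
compensation = np.array([5.0, 10.0, 25.0, 55.0, 80.0, 60.0, 30.0, 12.0])

# Initial guesses: peak near the training direction (0 deg), moderate width.
p0 = [75.0, 2.0, 0.0, 5.0]
params, _ = curve_fit(von_mises_tuning, angles, compensation, p0=p0)
amplitude, kappa, mu_deg, offset = params
print(f"peak at {mu_deg:.1f} deg, concentration kappa = {kappa:.2f}")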

Mentions: The second experiment investigated the directional tuning of contextual visual motion using an interference task that involved simultaneously learning two opposing curl fields2628. In this experiment the contextual effect of past visual cursor motion was essential for learning the appropriate compensation (Fig. 3). Two conditions were examined in separate groups of subjects, with visual contextual movements made at either ±45° or ±15°. Each trial again consisted of a two-part movement: an initial visual contextual movement followed by an active adaptation movement. Each adaptation movement was associated with two possible visual cursor movement start locations (Fig. 3a,b left), which were predictive of the direction (CW or CCW) of the curl force field. In the ±45° condition, significant learning was observed during field exposure, as indicated by a large reduction in kinematic error (Fig. 4a) and a significant increase in predictive force compensation, with the latter reaching a value of over 80% (Fig. 4b). The ±15° group showed slower learning. This resulted in a slightly larger final level of kinematic error (F(1,1198) = 25.8; p < 0.001), but a final level of force compensation that was not significantly different from that of the ±45° group (F(1,154) = 0.791; p = 0.375); see Fig. 4d,e. The dwell times between the prior visual contextual movements and the adaptation movements were 85.5 ± 14.6 ms and 95.8 ± 11.8 ms for the ±45° and ±15° conditions, respectively.
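
For readers unfamiliar with the manipulandum paradigm, the sketch below shows how a velocity-dependent (viscous) curl force field of the kind described above is commonly defined: a force orthogonal to the instantaneous hand velocity whose sign sets the clockwise (CW) or counter-clockwise (CCW) direction. The gain value and function names are illustrative assumptions rather than the specific parameters used in this study.

import numpy as np

def curl_field_force(velocity_xy, gain=15.0, clockwise=True):
    # The force is always orthogonal to the hand velocity, so it pushes the
    # hand sideways and curves the trajectory CW or CCW around its direction
    # of motion.
    rotate_minus_90 = np.array([[0.0, 1.0],
                                [-1.0, 0.0]])
    sign = 1.0 if clockwise else -1.0
    return sign * gain * rotate_minus_90 @ velocity_xy

# Example: a forward movement at 0.3 m/s under each field direction.
velocity = np.array([0.0, 0.3])
print(curl_field_force(velocity, clockwise=True))   # pushes to the right
print(curl_field_force(velocity, clockwise=False))  # pushes to the left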

