Adaptive tuning functions arise from visual observation of past movement.

Howard IS, Franklin DW - Sci Rep (2016)

Bottom Line: Both the adaptation movement and contextual movement exhibited generalization beyond the training direction, with the visual contextual motion exhibiting much broader tuning. A second experiment demonstrated that this pattern was consistent with the results of an interference experiment where opposing force fields were associated with two separate visual movements. Overall, our study shows that visual contextual motion exhibits much broader (and shallower) tuning functions than previously seen for either passive or active movements, demonstrating that the tuning characteristics of past motion are highly dependent on their sensory modality.

Affiliation: Centre for Robotics and Neural Systems, School of Computing, Electronics and Mathematics, University of Plymouth, Plymouth, United Kingdom.

ABSTRACT
Visual observation of movement plays a key role in action. For example, tennis players have little time to react to the ball, but still need to prepare the appropriate stroke. Therefore, it might be useful to use visual information about the ball trajectory to recall a specific motor memory. Past visual observation of movement (as well as passive and active arm movement) affects the learning and recall of motor memories. Moreover, when passive or active, these past contextual movements exhibit generalization (or tuning) across movement directions. Here we extend this work, examining whether visual motion also exhibits similar generalization across movement directions and whether such generalization functions can explain patterns of interference. Both the adaptation movement and contextual movement exhibited generalization beyond the training direction, with the visual contextual motion exhibiting much broader tuning. A second experiment demonstrated that this pattern was consistent with the results of an interference experiment where opposing force fields were associated with two separate visual movements. Overall, our study shows that visual contextual motion exhibits much broader (and shallower) tuning functions than previously seen for either passive or active movements, demonstrating that the tuning characteristics of past motion are highly dependent on their sensory modality.


Figure 2: Single field tuning experimental results. (a) Mean kinematic error and SE per block across participants (solid line and shaded region, respectively) for the adaptation movements. The light blue shading indicates when the curl force field is applied. (b) Mean and SE of predictive force compensation (percentage), measured using a channel trial towards the training target. (c) Adaptation movement tuning curve expressed as mean and SE percentage compensation (blue). The mean and s.d. of a fitted von Mises curve are plotted as a purple line and shaded purple region, respectively. (d) Contextual movement tuning curve expressed as mean and SE percentage compensation (blue). The mean and s.d. of a fitted von Mises curve are plotted as an orange line and shaded orange region, respectively.
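Panels (c) and (d) summarize directional tuning with von Mises fits. As a rough illustration of how such a fit could be obtained, the Python sketch below fits a scaled von Mises function to per-direction compensation values. The amplitude and offset terms, the initial guesses, and the example data are illustrative assumptions, not the authors' exact parameterization or results.

import numpy as np
from scipy.optimize import curve_fit

def von_mises(theta, amp, kappa, mu, offset):
    # Scaled von Mises curve: peaks at amp + offset when theta == mu,
    # with width controlled by the concentration parameter kappa.
    return offset + amp * np.exp(kappa * (np.cos(theta - mu) - 1.0))

# Hypothetical tuning data: probe directions (deg) relative to the training
# direction, and mean percentage compensation in each direction.
directions_deg = np.array([-180, -135, -90, -45, 0, 45, 90, 135])
compensation = np.array([5.0, 8.0, 20.0, 55.0, 80.0, 52.0, 18.0, 9.0])

theta = np.deg2rad(directions_deg)
p0 = [compensation.max(), 2.0, 0.0, compensation.min()]   # initial guess
params, _ = curve_fit(von_mises, theta, compensation, p0=p0)
amp, kappa, mu, offset = params

# For reasonably large kappa, the circular s.d. of the fitted tuning curve is
# approximately 1/sqrt(kappa) (reported here in degrees).
print(f"peak = {amp:.1f}%, kappa = {kappa:.2f}, "
      f"s.d. ~ {np.rad2deg(1.0 / np.sqrt(kappa)):.1f} deg")

In this form the concentration parameter kappa sets the tuning width: a smaller kappa corresponds to the broader, shallower tuning reported for the visual contextual movements.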

Mentions: The first experiment examined the directional tuning of both contextual visual motion and the active adaptation movements using a single curl-field learning task. In this experiment the past visual motion context is not required in order to learn the compensation (Fig. 1). Each trial consisted of a two-part movement. While the participant’s hand remained at the central location, participants experienced a 10 cm visual contextual movement of the cursor from a starting position to the central location. Immediately afterwards, participants were required to make a 12 cm active adaptation movement from this central location to a final target located straight ahead. This active movement could be performed in a null field, a mechanical channel, or a curl force field (Fig. 1a, right). During the initial pre-exposure phase, movements were essentially straight (Fig. 2a). After field exposure onset, the maximum perpendicular error (MPE) increased dramatically and then rapidly adapted back towards null-field MPE levels. After the curl field was removed during the post-exposure phase, strong after-effects were seen, which quickly washed out. Predictive force compensation (Fig. 2b) also indicated rapid learning during field exposure, rising to over 80% by the end of exposure and quickly decaying after field removal. The dwell time between the prior visual contextual movement and the adaptation movement was 78.7 ± 9.3 ms (mean ± SE).
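For readers unfamiliar with the paradigm, the sketch below illustrates, under assumed values, a velocity-dependent curl force field and one common way of estimating percentage predictive force compensation from channel trials (regressing the lateral force measured in the channel against the force the curl field would have produced for the recorded velocity). The field gain b, the regression-based measure, and the simulated trial are assumptions for illustration; the paper's actual field strength and analysis pipeline are not given in this excerpt.

import numpy as np

def curl_field_force(velocity_xy, b=15.0):
    # Velocity-dependent curl field: force proportional to hand speed and
    # rotated 90 degrees from the velocity vector (gain b is an assumed value).
    rotate_90 = np.array([[0.0, 1.0],
                          [-1.0, 0.0]])
    return b * rotate_90 @ velocity_xy

def percent_compensation(channel_force_x, velocity, b=15.0):
    # Regress the lateral (x) force measured in the mechanical channel against
    # the lateral force the curl field would have produced for the recorded
    # velocity; the zero-intercept slope times 100 gives % compensation.
    ideal_x = np.array([curl_field_force(v, b)[0] for v in velocity])
    slope = np.dot(channel_force_x, ideal_x) / np.dot(ideal_x, ideal_x)
    return 100.0 * slope

# Hypothetical channel trial: straight-ahead movement with a bell-shaped speed
# profile, and a measured lateral force at 80% of the ideal field force.
t = np.linspace(0.0, 1.0, 200)
speed = 0.5 * np.sin(np.pi * t)                      # forward (y) speed, m/s
velocity = np.stack([np.zeros_like(t), speed], axis=1)
measured_x = 0.8 * np.array([curl_field_force(v)[0] for v in velocity])

print(f"compensation = {percent_compensation(measured_x, velocity):.0f}%")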

