Motivation modulates visual attention: evidence from pupillometry.

Wykowska A, Anderl C, Schubö A, Hommel B - Front Psychol (2013)

Bottom Line: Increasing evidence suggests that action planning not only affects the preparation and execution of overt actions but also "works back" to tune the perceptual system toward action-relevant information. We conclude that motivation and effort might play a crucial role in how much participants prepare for an action and activate action codes. The degree of activation of action codes in turn influences the observed action-related biases on perception.

View Article: PubMed Central - PubMed

Affiliation: Ludwig Maximilian University Munich, Germany.

ABSTRACT
Increasing evidence suggests that action planning not only affects the preparation and execution of overt actions but also "works back" to tune the perceptual system toward action-relevant information. We investigated whether the magnitude of this impact of action planning on perceptual selection varies as a function of motivation for action, which was assessed online by means of pupillometry (Experiment 1) and visual analog scales (VAS; Experiment 2). Findings replicate the earlier observation that searching for size-defined targets is more efficient in the context of grasping than in the context of pointing movements (Wykowska et al., 2009). As expected, changes in tonic pupil size (reflecting changes in effort and motivation) across the sessions, as well as changes in motivation-related scores on the VAS, were found to correlate with changes in the size of the action-perception congruency effect. We conclude that motivation and effort might play a crucial role in how much participants prepare for an action and activate action codes. The degree of activation of action codes in turn influences the observed action-related biases on perception.
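The correlational logic of the abstract can be illustrated with a minimal sketch: per-session changes in tonic pupil size are compared against per-session changes in the congruency effect. All numbers below are invented for illustration; the paper's actual measures and analysis details are not reproduced here.

```python
import numpy as np

# Synthetic per-session change scores (illustrative only).
pupil_change = np.array([0.02, 0.05, -0.01, 0.08, 0.03])   # tonic pupil size change (mm)
effect_change = np.array([5.0, 12.0, -2.0, 18.0, 8.0])     # congruency-effect change (ms)

# Pearson correlation between the two change scores.
r = np.corrcoef(pupil_change, effect_change)[0, 1]
```

With these made-up values the correlation is strongly positive, mirroring the direction of the relationship the abstract describes.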

No MeSH data available.


Related in: MedlinePlus


Figure 1: Trial sequence of Experiments 1 and 2. Trials started with a fixation mark (in Experiment 1, this was the continuous valid pupil signal of 300 + 300 ms), followed by one of the cues (pointing/grasping; 800 ms), which informed participants which movement to prepare. After another fixation mark (600 ms), the search display (target/no target) appeared on the screen (100 ms) and was followed by another fixation mark. Four hundred milliseconds after the response to the search task, the movement position cue (400 ms) appeared and participants performed the prepared movement on the respective paper cup.
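Using the timings given in the caption, the experimenter-controlled portion of a trial can be sketched as follows. Phase names are illustrative labels, not terms from the paper, and the self-paced search-response interval is excluded from the sum.

```python
# Fixed-duration phases of one trial, per the Figure 1 caption (durations in ms).
TRIAL_PHASES = [
    ("fixation (valid pupil signal)", 300 + 300),   # Experiment 1 baseline
    ("movement cue (pointing/grasping)", 800),
    ("fixation", 600),
    ("search display (target/no target)", 100),
    # The search response and the following fixation are self-paced, so they
    # are omitted here; 400 ms after the response, the position cue appears.
    ("post-response delay", 400),
    ("movement position cue", 400),
]

def fixed_duration_ms(phases=TRIAL_PHASES):
    """Sum of the experimenter-controlled (non-self-paced) intervals."""
    return sum(duration for _, duration in phases)
```

Summing these fixed intervals gives 2900 ms of experimenter-controlled time per trial, before adding the participant's search-response latency.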

Mentions: In the paradigm used by Wykowska et al. (2009; see also Wykowska et al., 2011, 2012), participants had to first prepare a grasping or a pointing movement (as indicated by a cue picture representing a grasping/pointing hand), then detect and report a target in a visual search display (a size or luminance pop-out item), and only then carry out the prepared movement on an indicated object (see Figure 1, which depicts an adapted version of the task used in Wykowska et al., 2009). Importantly, the movement task and the visual search task were perceptually and motorically unrelated: the visual search display was presented on a computer screen and the response was made on a mouse key with the dominant hand, while the movement was executed with the other hand on one of the items of a movement execution device (Wykowska et al., 2009, 2011) or on one of three cups positioned below the computer screen (Wykowska et al., 2011, 2012). The design comprised two action-perception congruent pairs: grasping and size (visual search target defined by size) and pointing and luminance (visual search target defined by luminance), as it was assumed that size is a potentially relevant dimension for a grasping movement, while luminance is related to localizing, which is inherently linked to pointing. Results showed action-perception congruency effects: detection of a given dimension was facilitated when a congruent movement was being prepared, relative to the incongruent movement. Specifically, detection of size targets was faster when a grasping movement was prepared than when a pointing movement was prepared, and the reverse pattern was observed for detection of luminance targets.
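The congruency effect described above is simply the reaction-time cost of preparing the incongruent movement. A minimal sketch, with entirely made-up mean reaction times, shows how it would be computed for each target dimension:

```python
# Hypothetical mean RTs (ms) for each prepared-movement x target-dimension cell.
# The values are invented; only the design (2 movements x 2 dimensions) follows the paper.
mean_rt = {
    ("grasping", "size"): 520,       # congruent pair
    ("pointing", "size"): 545,       # incongruent pair
    ("pointing", "luminance"): 510,  # congruent pair
    ("grasping", "luminance"): 530,  # incongruent pair
}

def congruency_effect(rts, dimension, congruent_move, incongruent_move):
    """Incongruent minus congruent RT; a positive value means the
    congruent movement facilitated detection of that dimension."""
    return rts[(incongruent_move, dimension)] - rts[(congruent_move, dimension)]

size_effect = congruency_effect(mean_rt, "size", "grasping", "pointing")
lum_effect = congruency_effect(mean_rt, "luminance", "pointing", "grasping")
```

With these illustrative numbers, both effects come out positive (25 ms and 20 ms), matching the qualitative pattern reported in the excerpt: size detection benefits from grasping preparation and luminance detection from pointing preparation.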
The authors concluded that visual selection is biased by a so-called intentional weighting mechanism (Wykowska et al., 2009, 2012; Hommel, 2010; Memelink and Hommel, in press), which prioritizes perceptual processing in order to deliver potentially action-relevant perceptual dimensions for open parameters of online action control, such as hand aperture (Hommel, 2010). Given that in the paradigm of Wykowska and colleagues the movement object was indicated only after the search task, not all parameters of the prepared action could be specified before the search task. Therefore, the intentional weighting mechanism prioritized processing of those perceptual dimensions that might have been necessary for efficient online action control.
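The intentional weighting idea can be caricatured as a gain applied to perceptual dimensions that are relevant to the prepared action. This is a toy sketch, not the authors' model; the dimension mapping follows the paradigm, but the gain values are arbitrary.

```python
# Which perceptual dimension each prepared action makes relevant (per the paradigm).
ACTION_RELEVANT = {"grasping": "size", "pointing": "luminance"}

def weight_dimensions(prepared_action, dimensions=("size", "luminance"), boost=1.5):
    """Return a per-dimension processing gain: the dimension relevant to the
    prepared action is amplified, the others stay at baseline (1.0)."""
    relevant = ACTION_RELEVANT[prepared_action]
    return {dim: (boost if dim == relevant else 1.0) for dim in dimensions}
```

On this caricature, preparing a grasp boosts size processing while leaving luminance at baseline, which would yield faster detection of size-defined targets, the congruency effect described above.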

