Using time-to-contact information to assess potential collision modulates both visual and temporal prediction networks.

Coull JT, Vidal F, Goulon C, Nazarian B, Craig C - Front Hum Neurosci (2008)

Bottom Line: We also demonstrated that the temporal derivative of the perceptual index tau (tau-dot) held predictive value for making collision judgements and varied inversely with activity in primary visual cortex (V1). Finally, egocentric viewpoints provoked a response bias for reporting collisions, rather than no-collisions, reflecting increased caution for head-on approaches. Associated increases in SMA activity suggest motor preparation mechanisms were engaged, despite the perceptual nature of the task.

View Article: PubMed Central - PubMed

Affiliation: Laboratoire de Neurobiologie de la Cognition, Université Aix-Marseille & CNRS, Marseille, France. jennifer.coull@univ-provence.fr

ABSTRACT
Accurate estimates of the time-to-contact (TTC) of approaching objects are crucial for survival. We used an ecologically valid driving simulation to compare and contrast the neural substrates of egocentric (head-on approach) and allocentric (lateral approach) TTC tasks in a fully factorial, event-related fMRI design. Compared to colour control tasks, both egocentric and allocentric TTC tasks activated left ventral premotor cortex/frontal operculum and inferior parietal cortex, the same areas that have previously been implicated in temporal attentional orienting. Despite differences in visual and cognitive demands, both TTC and temporal orienting paradigms encourage the use of temporally predictive information to guide behaviour, suggesting these areas may form a core network for temporal prediction. We also demonstrated that the temporal derivative of the perceptual index tau (tau-dot) held predictive value for making collision judgements and varied inversely with activity in primary visual cortex (V1). Specifically, V1 activity increased with the increasing likelihood of reporting a collision, suggesting top-down attentional modulation of early visual processing areas as a function of subjective collision. Finally, egocentric viewpoints provoked a response bias for reporting collisions, rather than no-collisions, reflecting increased caution for head-on approaches. Associated increases in SMA activity suggest motor preparation mechanisms were engaged, despite the perceptual nature of the task.
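The perceptual index tau referred to in the abstract is, in Lee's (1976) analysis, the ratio of distance to closing speed, and its derivative tau-dot carries collision information: under constant deceleration, a tau-dot below -0.5 means the observer will reach the obstacle before stopping, while a tau-dot above -0.5 means they will stop short. The following sketch (not the authors' code; parameter values are illustrative) simulates a decelerating approach and shows how tau-dot separates collision from no-collision outcomes:

```python
# Illustrative sketch of Lee's (1976) tau / tau-dot analysis for a car
# approaching a wall under constant deceleration. Under constant
# deceleration, tau-dot < -0.5 implies contact before stopping;
# tau-dot > -0.5 implies the car stops short of the wall.

def tau_series(x0, v0, a, dt=0.001):
    """Simulate an approach and return (taus, tau_dots, collided).
    x0: initial distance (m), v0: initial closing speed (m/s),
    a: constant deceleration (m/s^2)."""
    taus, x, v = [], x0, v0
    while x > 0 and v > 0:
        taus.append(x / v)                 # tau = distance / closing speed
        x -= v * dt                        # simple Euler integration
        v -= a * dt
    tau_dots = [(taus[i + 1] - taus[i]) / dt for i in range(len(taus) - 1)]
    collided = x <= 0                      # reached the wall before stopping
    return taus, tau_dots, collided

# Stopping from 20 m/s in 30 m needs a >= v0^2/(2*x0) = 6.67 m/s^2.
# With a = 5 the car collides, and initial tau-dot sits below -0.5:
_, tds_hit, hit = tau_series(x0=30.0, v0=20.0, a=5.0)
# With a = 8 the car stops short, and initial tau-dot sits above -0.5:
_, tds_safe, safe_hit = tau_series(x0=30.0, v0=20.0, a=8.0)
print(hit, tds_hit[0] < -0.5, safe_hit, tds_safe[0] > -0.5)
```

Analytically, the initial tau-dot equals -1 + x0*a/v0^2, so the a = 5 run starts at about -0.625 (collision) and the a = 8 run at about -0.4 (safe stop), consistent with the -0.5 boundary.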

No MeSH data available.




Figure 1: Task structure and timing. (A) Contact egocentric trial; (B) colour allocentric trial. A briefly-presented cue (“contact” or “colour”) instructed subjects to make time-to-contact (TTC) or colour judgements for a forthcoming animation. During the animation subjects saw a car (the dark green foreground object in panel (A); the blue lower field object in panel (B)) approaching a wall (the light green object in panels (A) and (B)) either from the driver's point of view (egocentric condition (A)) or from a bird's eye view (allocentric condition (B)). The TTC task was to estimate potential contact between the car and wall, while the colour task was to detect a possible colour-match between the car and wall. Subjects responded to “yes” or “no” response options presented on the screen, whose positions varied from trial to trial. Subjects made an index- or middle-finger right-handed button-press corresponding to whether their contact or colour-match judgement (yes/no) appeared on the left or right of the screen, respectively. The colour of the car changed gradually throughout its trajectory, while the colour of the wall remained constant (top panel). Exactly the same animations were used for the TTC and colour tasks. ISI = inter-stimulus interval; ITI = inter-trial interval.

Mentions: One of our aims was to use an ecologically valid stimulus display. To this end, subjects viewed a short (2–3.5 s) animated simulation (Supplementary Material 1) of a car driving towards a wall (Figure 1). Virtual reality software was used to create animated simulations of the car's trajectory in a three-dimensional space, using the distance and movement parameters defined in Supplementary Material 2. Throughout the animation, the car decelerated at a constant rate, but the animation ended before the car came to a complete stop. The colour of the car changed gradually throughout its trajectory, while the colour of the wall remained constant. The animation was shown either from the driver's point of view (egocentric, Figure 1A) or from a bird's eye view (allocentric, Figure 1B), and the same animations were used for both experimental (TTC) and control (COL) tasks. This resulted in a 2 × 2 factorial design yielding four conditions: TTCego, TTCallo, COLego, COLallo.