A goal direction signal in the human entorhinal/subicular region.

Chadwick MJ, Jolly AE, Amos DP, Hassabis D, Spiers HJ - Curr. Biol. (2014)

Bottom Line: Navigating to a safe place, such as a home or nest, is a fundamental behavior for all complex animals. We applied multivoxel pattern analysis to these data and found that the human entorhinal/subicular region contains a neural representation of intended goal direction. Our data further revealed that the strength of direction information predicts performance.


Affiliation: Institute of Behavioural Neuroscience, Department of Experimental Psychology, Division of Psychology and Language Sciences, University College London, 26 Bedford Way, London WC1H 0AP, UK.


Figure 1: The Experimental Design

(A) The layout of the virtual environment from an elevated view. The four key objects are visible, as are two of the distal scenes. Note that the participants never viewed the environment from this view but instead could only explore from ground level.

(B) The same environment from an overhead, schematic view (not to scale). The four distal walls have been tilted so that they are visible from above. For clarity, we arbitrarily refer to the four cardinal directions as NSEW, but note that they were never referred to as such during the actual experiment.

(C) The goal direction task on two consecutive trials. The task was to judge the direction of the goal from the start location, and this could be required in one of two directional coordinate systems: environment-centered (geocentric) or body-centered (egocentric). For the geocentric question, participants were asked to decide which of the four distal scenes the goal location was toward from their start location (i.e., if they were to draw an arrow between the start and goal locations, which scene would it be pointing toward?). Although the focus of this study was on geocentric direction coding, we also included an egocentric question, in which the participant was asked to decide whether the goal location was located to the left, right, forward, or backward from the start location. Both the geocentric and egocentric questions were asked in every trial, with the order randomized. The four letters underneath each scene represent the four possible responses: in the geocentric task, these were desert (D), sea (S), mountain (M), or forest (F), which acted as semantic labels for the four cardinal directions. In the egocentric task, these were forward (F), backward (B), left (L), or right (R). The mapping between the four responses and the four buttons was partially randomized.
© Copyright Policy - CC BY



Mentions: We applied multivoxel pattern analysis (MVPA) to fMRI data collected while participants (n = 16) made a series of goal direction judgments. All subjects gave informed written consent in accordance with the local research ethics committee. MVPA has been shown to be sensitive to specific neural representations in various domains [3–5], including place coding [16], scene coding [17–19], and facing direction [20–22]. It is therefore plausible that this approach may be able to detect neural representations related to simulation of future goal heading. Prior to scanning, participants learned the spatial layout of a simple virtual environment (Figures 1A and 1B) by freely moving around within it. The environment consisted of four objects placed at the corners of four paths arranged in a square. Each of the four distant edges of the environment consisted of a distinct scene in order to clearly differentiate the four cardinal directions. During scanning, participants were required to make goal direction decisions based on their memory of this environment (Figure 1C). Very high performance levels (mean 97% accuracy) indicated that participants were successfully able to engage goal direction systems (for more details on experimental design and methods, see Supplemental Information available online).
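The decoding analysis described above can be illustrated with a minimal sketch. This is not the authors' pipeline; it simulates voxel patterns for the four goal directions and decodes them with a leave-one-run-out, correlation-based nearest-centroid classifier, a standard MVPA approach. All names, sizes, and noise levels below are hypothetical.

```python
# Illustrative MVPA sketch (hypothetical data, not the study's pipeline):
# simulate multivoxel patterns for four goal directions, then decode the
# direction with leave-one-run-out nearest-centroid classification using
# Pearson correlation as the pattern-similarity measure.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_runs = 50, 8
directions = ["N", "S", "E", "W"]  # arbitrary labels, as in Figure 1B

# Assumed ground truth: each direction evokes a distinct voxel pattern.
templates = {d: rng.normal(size=n_voxels) for d in directions}

# One noisy trial per direction per scanning run.
X, y, run = [], [], []
for r in range(n_runs):
    for d in directions:
        X.append(templates[d] + rng.normal(scale=1.0, size=n_voxels))
        y.append(d)
        run.append(r)
X, y, run = np.array(X), np.array(y), np.array(run)

def decode_accuracy(X, y, run):
    """Leave-one-run-out nearest-centroid decoding by correlation."""
    correct = 0
    for test_run in np.unique(run):
        train, test = run != test_run, run == test_run
        # Average training trials per direction into a template pattern.
        centroids = {d: X[train & (y == d)].mean(axis=0)
                     for d in np.unique(y)}
        for pattern, label in zip(X[test], y[test]):
            # Predict the direction whose centroid correlates best.
            pred = max(centroids,
                       key=lambda d: np.corrcoef(pattern, centroids[d])[0, 1])
            correct += (pred == label)
    return correct / len(y)

acc = decode_accuracy(X, y, run)
print(f"decoding accuracy: {acc:.2f} (chance = 0.25)")
```

With four balanced classes, chance accuracy is 25%; above-chance decoding is the evidence that the region carries direction information. Cross-validating across runs (rather than across arbitrary trial splits) respects the temporal structure of fMRI data.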

