Transition from Target to Gaze Coding in Primate Frontal Eye Field during Memory Delay and Memory-Motor Transformation.

Sajad A, Sadeh M, Yan X, Wang H, Crawford JD - eNeuro (2016)

Bottom Line: We treated neural population codes as a continuous spatiotemporal variable by dividing the space spanning T and G into intermediate T-G models and dividing the task into discrete steps through time. We found that FEF delay activity, especially in visuomovement cells, progressively transitions from T through intermediate T-G codes that approach, but do not reach, G. This was followed by a final discrete transition from these intermediate T-G delay codes to a "pure" G code in movement cells without delay activity.


Affiliation: Centre for Vision Research, York University, Toronto, Ontario M3J 1P3, Canada; Neuroscience Graduate Diploma Program, York University, Toronto, Ontario M3J 1P3, Canada; Department of Biology, York University, Toronto, Ontario M3J 1P3, Canada.

ABSTRACT
The frontal eye fields (FEFs) participate in both working memory and sensorimotor transformations for saccades, but their role in integrating these functions through time remains unclear. Here, we tracked FEF spatial codes through time using a novel analytic method applied to the classic memory-delay saccade task. Three-dimensional recordings of head-unrestrained gaze shifts were made in two monkeys trained to make gaze shifts toward briefly flashed targets after a variable delay (450-1500 ms). A preliminary analysis of visual and motor response fields in 74 FEF neurons eliminated most potential models for spatial coding at the neuron population level, as in our previous study (Sajad et al., 2015). We then focused on the spatiotemporal transition from an eye-centered target code (T; preferred in the visual response) to an eye-centered intended gaze position code (G; preferred in the movement response) during the memory delay interval. We treated neural population codes as a continuous spatiotemporal variable by dividing the space spanning T and G into intermediate T-G models and dividing the task into discrete steps through time. We found that FEF delay activity, especially in visuomovement cells, progressively transitions from T through intermediate T-G codes that approach, but do not reach, G. This was followed by a final discrete transition from these intermediate T-G delay codes to a "pure" G code in movement cells without delay activity. These results demonstrate that FEF activity undergoes a series of sensory-memory-motor transformations, including a dynamically evolving spatial memory signal and an imperfect memory-to-motor transformation.
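
The core of this method is a continuum of intermediate spatial models between the target position (T) and the final gaze position (G), with each neuron assigned the model that best predicts its trial-by-trial firing at a given time step. The following is a minimal Python sketch of that idea, not the authors' code: the arrays T, G, and rates, the Gaussian bandwidth, and the alpha grid are illustrative assumptions, and the response-field fit is reduced to a simple leave-one-out kernel regression rather than the full nonparametric fitting procedure of Sajad et al. (2015).

```python
import numpy as np

def press_residual(positions, rates, bandwidth=5.0):
    """Leave-one-out (PRESS) residual of a Gaussian-kernel response-field fit.

    positions : (n_trials, 2) candidate spatial coordinates for each trial (deg)
    rates     : (n_trials,)   firing rates in the analysis window (spikes/s)
    """
    n = len(rates)
    total = 0.0
    for i in range(n):
        d2 = np.sum((positions - positions[i]) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))
        w[i] = 0.0  # leave trial i out of its own prediction
        total += (rates[i] - np.sum(w * rates) / np.sum(w)) ** 2
    return total / n

def best_tg_model(T, G, rates, alphas=np.linspace(-0.5, 1.5, 21)):
    """Best-fit position along the T-G continuum for one neuron at one time step.

    Each candidate model places the coded position at P(alpha) = T + alpha * (G - T),
    so alpha = 0 is a pure target code and alpha = 1 a pure gaze code; a few values
    beyond the T-G interval are also tested.  The alpha whose positions give the
    lowest cross-validated residual is taken as the neuron's spatial code.
    """
    residuals = [press_residual(T + a * (G - T), rates) for a in alphas]
    return alphas[int(np.argmin(residuals))]
```

Repeating such a fit at successive time steps of a time-normalized activity profile yields, for each neuron, a trajectory of spatial codes that can drift from T toward G; that trajectory is the quantity summarized in Figure 7 below.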





Figure 7: Distribution of best-fit models across the T–G continuum for the VM population at five time steps spanning the visual, delay, and movement responses. A shows the distribution of best fits for VM neurons in the early-visual (1st time step of the time-normalized activity profile), early-delay (4th time step), mid-delay (9th time step), late-delay (13th time step), and perimovement (15th time step) intervals. Only neurons with significant spatial tuning are included. The number of neurons contributing to each distribution is indicated on each panel (the number in brackets also counts best fits falling outside of the presented range). B plots the spatial code (i.e., the value of the best fit along the T–G continuum) at each of the delay intervals (y-axis) against the spatial code at the perimovement period (red dots). Only the 21 neurons that contributed to all five panels in A are plotted. Note the trend (from the early- to mid- to late-delay periods) for the data points to migrate toward the line of unity (i.e., toward their movement fits).

Mentions: Since most theoretical studies suggest that it is neural populations, not individual neurons, that matter most for behavior (Pouget and Snyder, 2000; Blohm et al., 2009), the results presented here focus mainly on the T–G analysis of our entire population of neurons, as well as of several subpopulations (V, VM, DM, M). The overall population coding preference across the T–G continuum (Figs. 4E, 5B, 6B, 7, 8B, 9B, continuous trend lines) at any given time step was defined as the mean of the fits made to individual neuron data. Because the distributions of spatial codes within the different neuronal subpopulations were not normal, we used nonparametric statistical tests both for comparisons across the population and for the regression analyses presented in Results for the VM and DM neurons.
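
As a rough illustration of these population-level statistics (not the authors' code; the variable names are hypothetical), the population trend line is simply the mean of the per-neuron best-fit values at a given time step, a paired comparison between two time steps can be made with a Wilcoxon signed-rank test, and a monotonic shift toward G across the delay can be tested with a Spearman rank correlation:

```python
import numpy as np
from scipy.stats import wilcoxon, spearmanr

def population_preference(alphas):
    """Population coding preference at one time step: mean of per-neuron best fits."""
    return float(np.mean(alphas))

def compare_time_steps(alphas_a, alphas_b):
    """Paired nonparametric comparison of the same neurons at two time steps."""
    return wilcoxon(alphas_a, alphas_b)      # (statistic, p-value)

def delay_trend(step_indices, alphas):
    """Nonparametric test for a monotonic progression of spatial codes over time."""
    return spearmanr(step_indices, alphas)   # (rho, p-value)
```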

