Learning to control a brain-machine interface for reaching and grasping by primates.

Carmena JM, Lebedev MA, Crist RE, O'Doherty JE, Santucci DM, Dimitrov DF, Patil PG, Henriquez CS, Nicolelis MA - PLoS Biol. (2003)

Bottom Line: Continuous BMIc operation by monkeys led to significant improvements in both model predictions and behavioral performance. Using visual feedback, monkeys succeeded in producing robot reach-and-grasp movements even when their arms did not move. Learning to operate the BMIc was paralleled by functional reorganization in multiple cortical areas, suggesting that the dynamic properties of the BMIc were incorporated into motor and sensory cortical representations.


Affiliation: Department of Neurobiology, Duke University, Durham, North Carolina, USA.

ABSTRACT
Reaching and grasping in primates depend on the coordination of neural activity in large frontoparietal ensembles. Here we demonstrate that primates can learn to reach and grasp virtual objects by controlling a robot arm through a closed-loop brain-machine interface (BMIc) that uses multiple mathematical models to extract several motor parameters (i.e., hand position, velocity, gripping force, and the EMGs of multiple arm muscles) from the electrical activity of frontoparietal neuronal ensembles. As single neurons typically contribute to the encoding of several motor parameters, we observed that high BMIc accuracy required recording from large neuronal ensembles. Continuous BMIc operation by monkeys led to significant improvements in both model predictions and behavioral performance. Using visual feedback, monkeys succeeded in producing robot reach-and-grasp movements even when their arms did not move. Learning to operate the BMIc was paralleled by functional reorganization in multiple cortical areas, suggesting that the dynamic properties of the BMIc were incorporated into motor and sensory cortical representations.


pbio.0000042-g002: Performance of Linear Models in Predicting Multiple Parameters of Arm Movements, Gripping Force, and EMG from the Activity of Frontoparietal Neuronal Ensembles Recorded in Pole Control.
(A) Motor parameters (blue) and their prediction using linear models (red). From top to bottom: hand position (HPx, HPy) and velocity (HVx, HVy) during execution of task 1, and gripping force (GF) during execution of tasks 2 and 1.
(B) EMGs (blue) recorded in task 1 and their prediction (red).
(C) Contribution of neurons from the same ensemble to predictions of hand position (top), velocity (middle), and gripping force (bottom). Contributions were measured as correlation coefficients (R) between the recorded motor parameters and their values predicted by the linear model. The color bar at the bottom indicates the cortical areas where the neurons were located. Each neuron contributed to the prediction of multiple movement parameters, and each area contained information about all parameters.
(D–F) Contribution of different cortical areas to model predictions of hand position, velocity (task 1), and gripping force (task 2). For each area, neuron-dropping (ND) curves represent the average prediction accuracy (R²) as a function of the number of neurons needed to attain it. The contribution of each cortical area varies across parameters. Typically, more than 30 randomly sampled neurons were required for an acceptable level of prediction.
(G–I) Comparison of the contribution of single units (blue) and multiunits (red) to predictions of hand position, velocity, and gripping force. Single units and multiunits were taken from all cortical areas. The contribution of single units exceeded that of multiunits by approximately 20%.
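To give a concrete sense of how a neuron-dropping (ND) curve like those in panels D–F can be built, the sketch below is a minimal, hypothetical reconstruction (not the authors' code): for each ensemble size it repeatedly samples a random subset of neurons, fits a linear decoder, and averages the held-out R² scores. The synthetic firing rates, the ordinary-least-squares decoder, and the subset sizes are all illustrative assumptions.

```python
# Minimal sketch of a neuron-dropping (ND) curve: average held-out R^2 as a
# function of how many randomly sampled neurons feed a linear decoder.
# All data here are synthetic placeholders, not recordings from the study.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_bins, n_neurons = 2000, 120                      # hypothetical session size
rates = rng.poisson(3.0, size=(n_bins, n_neurons)).astype(float)
hand_x = (rates @ rng.normal(size=(n_neurons, 1))
          + rng.normal(scale=5.0, size=(n_bins, 1)))   # toy target signal

train, test = slice(0, 1500), slice(1500, None)

def mean_r2(k, n_repeats=20):
    """Average held-out R^2 over random k-neuron subsets."""
    scores = []
    for _ in range(n_repeats):
        idx = rng.choice(n_neurons, size=k, replace=False)
        model = LinearRegression().fit(rates[train][:, idx], hand_x[train])
        scores.append(r2_score(hand_x[test], model.predict(rates[test][:, idx])))
    return float(np.mean(scores))

nd_curve = {k: mean_r2(k) for k in (5, 10, 30, 60, 120)}
print(nd_curve)   # R^2 rises with ensemble size, as in Figure 2D-F
```

Restricting the sampled indices to neurons from one cortical area, or to single units versus multiunits, would yield the per-area curves of panels D–F or the comparison shown in panels G–I.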

Mentions: Throughout learning of all three behavioral tasks, populations of neurons distributed across multiple frontal and parietal cortical areas exhibited task-related modulations of their firing rates. Several motor signals were extracted from those modulations using multiple linear models running in parallel. To evaluate how well the models extracted different motor parameters, they were first trained on 15 min of pole control data and then used to predict the data that followed. Figure 2A shows representative 1-min records of such predictions of hand position (HPx, HPy), hand velocity (HVx, HVy), and gripping force (GF). Figure 2B shows the model prediction of EMG activity. In well-trained animals, the linear models accounted for up to 85% of the variance of hand position, 80% of hand velocity, 95% of gripping force, and 61% of multiple EMG activity. These results show that elaborate hand movements, such as those required to solve task 3, could be predicted from brain activity using a BMIc with the simultaneous application of multiple linear models.
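As a rough illustration of the decoding scheme described above, the sketch below fits one linear model per motor parameter to lagged, binned firing rates, trains on an initial block of data, predicts the data that follow, and reports the variance explained (R²) per parameter. It is a minimal sketch under stated assumptions: the 100 ms bin width, the 10-lag history, and the synthetic data are illustrative choices, not values taken from the paper.

```python
# Minimal sketch of multi-output linear (Wiener-filter-style) decoding:
# predict hand position, velocity, and gripping force from lagged firing rates.
# Bin width (100 ms), lag count (10), and all data are illustrative assumptions.
import numpy as np

def lagged_design(rates, n_lags):
    """Stack the current bin and the previous n_lags-1 bins of every neuron."""
    n_bins, n_neurons = rates.shape
    X = np.zeros((n_bins, n_neurons * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * n_neurons:(lag + 1) * n_neurons] = rates[:n_bins - lag]
    return X

rng = np.random.default_rng(1)
rates = rng.poisson(2.0, size=(12000, 100)).astype(float)   # ~20 min of 100 ms bins
kin = (rates @ rng.normal(scale=0.1, size=(100, 5))
       + rng.normal(size=(12000, 5)))                       # HPx, HPy, HVx, HVy, GF (toy)

X = np.hstack([lagged_design(rates, n_lags=10), np.ones((12000, 1))])  # add bias column
train, test = slice(0, 9000), slice(9000, None)             # first ~15 min for fitting

W, *_ = np.linalg.lstsq(X[train], kin[train], rcond=None)   # one weight column per parameter
pred = X[test] @ W
ss_res = ((kin[test] - pred) ** 2).sum(axis=0)
ss_tot = ((kin[test] - kin[test].mean(axis=0)) ** 2).sum(axis=0)
print("R^2 per parameter:", 1 - ss_res / ss_tot)
```

In the BMIc, models of this kind ran in parallel on the same neuronal ensemble, each producing one of the motor signals (position, velocity, gripping force, EMG) described in the text.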

