A Generalizable Brain-Computer Interface (BCI) Using Machine Learning for Feature Discovery.

Nurse ES, Karoly PJ, Grayden DB, Freestone DR - PLoS ONE (2015)

Bottom Line: The classifier does not use extensive a-priori information, resulting in reduced reliance on highly specific domain knowledge. Instead of pre-defining features, the time-domain signal is input to a population of multi-layer perceptrons (MLPs) in order to perform a stochastic search for the best structure. Our new approach has been shown to give accurate results across different motor tasks and signal types as well as between subjects.


Affiliation: NeuroEngineering Laboratory, Department of Electrical & Electronic Engineering, The University of Melbourne, Melbourne, VIC, Australia, 3010; Centre for Neural Engineering, The University of Melbourne, Melbourne, VIC, Australia, 3010.

ABSTRACT
This work describes a generalized method for classifying motor-related neural signals for a brain-computer interface (BCI), based on a stochastic machine learning method. The method differs from the various feature extraction and selection techniques employed in many other BCI systems. The classifier does not use extensive a-priori information, resulting in reduced reliance on highly specific domain knowledge. Instead of pre-defining features, the time-domain signal is input to a population of multi-layer perceptrons (MLPs) in order to perform a stochastic search for the best structure. The results showed that the average performance of the new algorithm outperformed other published methods using the Berlin BCI IV (2008) competition dataset and was comparable to the best results in the Berlin BCI II (2002-3) competition dataset. The new method was also applied to electroencephalography (EEG) data recorded from five subjects undertaking a hand squeeze task and demonstrated high levels of accuracy with a mean classification accuracy of 78.9% after five-fold cross-validation. Our new approach has been shown to give accurate results across different motor tasks and signal types as well as between subjects.
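
As a concrete illustration of the evaluation protocol described above, the Python sketch below computes mean five-fold cross-validated accuracy for a generic backpropagation-trained MLP on flattened EEG windows. The data shapes, network size, and hyperparameter values are placeholder assumptions, not the settings used in the paper.

# Sketch: five-fold cross-validated accuracy for an MLP on windowed EEG.
# All shapes and hyperparameters below are illustrative assumptions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 200, 8, 128                 # assumed dimensions
X = rng.standard_normal((n_trials, n_channels * n_samples))   # one flattened window per trial
y = rng.integers(0, 2, size=n_trials)                         # two-class motor-task labels

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"mean five-fold accuracy: {scores.mean():.3f}")

With real recordings, X and y would be replaced by the windowed EEG and task labels; the 78.9% figure refers to the authors' data and method, not to this placeholder.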


pone.0131328.g001: Illustration of method. Signals acquired from the brain-computer interface (BCI) user are initially used to train an artificial neural network (ANN). The number of hidden layers and neurons is determined using a genetic algorithm (GA). At the termination of the GA, the network found to have the fittest structure is used in the BCI. The ANN is a fully-interconnected multi-layer perceptron. The input layer consists of every time-point of each channel. Hence, each neuron in the first hidden layer is able to generate features based on both spatial and temporal inferences. The hidden layers then feed into the output neurons, which determine the classifier output.
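
Read literally, the caption describes an MLP whose input vector is the flattened window of all channels and time points, followed by fully connected hidden layers and an output layer. The forward-pass sketch below illustrates that structure; the channel count, window length, layer widths, and tanh activation are assumptions chosen for illustration, not details taken from the paper.

# Sketch of the fully interconnected MLP implied by the figure caption.
# Window size, channel count, layer widths, and activation are assumptions.
import numpy as np

def mlp_forward(window, weights, biases):
    """window: (n_channels, n_samples) EEG segment; returns class scores."""
    a = window.reshape(-1)                       # every time point of every channel
    for W, b in zip(weights[:-1], biases[:-1]):
        a = np.tanh(W @ a + b)                   # hidden layers mix spatial and temporal information
    return weights[-1] @ a + biases[-1]          # output neurons give the classifier output

rng = np.random.default_rng(0)
n_channels, n_samples = 8, 128                          # assumed window dimensions
layer_sizes = [n_channels * n_samples, 64, 16, 2]       # assumed structure (the GA searches this)
weights = [rng.standard_normal((m, n)) * 0.01
           for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(m) for m in layer_sizes[1:]]

scores = mlp_forward(rng.standard_normal((n_channels, n_samples)), weights, biases)
print("class scores:", scores)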

Mentions: This section describes the three key components of our method: artificial neural networks (ANNs), the genetic algorithm (GA), and neural data acquisition. Fig 1 is a schematic overview of the algorithm used to identify features and perform classification of neural data. Table 1 lists the parameters used to initialize and run the algorithm. Fig 1 shows that, within the GA, a population of ANNs performed feature extraction and classification on a window of time-series neural data. That is, the input to the first layer is the raw time-series EEG. The features that are extracted by the ANNs are dictated by the weights of the neurons in each network, which are updated via backpropagation. The weights for every time point of the input can be thought of as a filter that is tuned during network training to find frequency bands and electrodes containing task-related information. The possible feature space is governed by the number of samples in the window, the connectivity structure of the network, the number of layers, and the number of neurons in each layer.
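
To make the stochastic structure search concrete, the sketch below evolves the hidden-layer sizes of an MLP with a small genetic algorithm, using held-out classification accuracy as the fitness. It uses scikit-learn's backpropagation-trained MLPClassifier as a stand-in for the networks in the paper; the population size, mutation scheme, generation count, and synthetic data are assumptions for illustration, whereas the authors' actual settings are those listed in their Table 1.

# Sketch: genetic algorithm over MLP hidden-layer structures.
# Population size, mutation scheme, and data are illustrative assumptions only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 8 * 128))                  # flattened EEG windows (assumed shape)
y = rng.integers(0, 2, size=300)                         # two-class task labels
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

def fitness(hidden_layers):
    """Train an MLP with backpropagation and score it on held-out windows."""
    clf = MLPClassifier(hidden_layer_sizes=tuple(hidden_layers), max_iter=200, random_state=0)
    clf.fit(X_tr, y_tr)
    return clf.score(X_va, y_va)

def mutate(structure):
    """Randomly resize one hidden layer, or add/remove a layer."""
    s = list(structure)
    op = int(rng.integers(3))
    if op == 0 and len(s) < 4:
        s.append(int(rng.integers(4, 65)))               # add a hidden layer
    elif op == 1 and len(s) > 1:
        s.pop(int(rng.integers(len(s))))                 # remove a hidden layer
    else:
        i = int(rng.integers(len(s)))
        s[i] = max(2, s[i] + int(rng.integers(-8, 9)))   # resize a hidden layer
    return s

population = [[int(rng.integers(4, 65))] for _ in range(6)]   # start from single-hidden-layer nets
for _ in range(5):                                            # a handful of generations
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:3]                                      # keep the fittest structures
    population = parents + [mutate(p) for p in parents]       # refill the population by mutation
best = max(population, key=fitness)
print("fittest structure found:", best)

At termination, the variable best plays the role of the fittest network structure that would be deployed in the BCI; a faithful reimplementation would also need crossover, the authors' fitness definition, and their population and training parameters.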

