A Generalizable Brain-Computer Interface (BCI) Using Machine Learning for Feature Discovery.

Nurse ES, Karoly PJ, Grayden DB, Freestone DR - PLoS ONE (2015)

Bottom Line: The classifier does not use extensive a-priori information, resulting in reduced reliance on highly specific domain knowledge. Instead of pre-defining features, the time-domain signal is input to a population of multi-layer perceptrons (MLPs) in order to perform a stochastic search for the best structure. Our new approach has been shown to give accurate results across different motor tasks and signal types as well as between subjects.


Affiliation: NeuroEngineering Laboratory, Department of Electrical & Electronic Engineering, The University of Melbourne, Melbourne, VIC, Australia, 3010; Centre for Neural Engineering, The University of Melbourne, Melbourne, VIC, Australia, 3010.

ABSTRACT
This work describes a generalized method for classifying motor-related neural signals for a brain-computer interface (BCI), based on a stochastic machine learning method. The method differs from the various feature extraction and selection techniques employed in many other BCI systems. The classifier does not use extensive a-priori information, resulting in reduced reliance on highly specific domain knowledge. Instead of pre-defining features, the time-domain signal is input to a population of multi-layer perceptrons (MLPs) in order to perform a stochastic search for the best structure. The results showed that the average performance of the new algorithm outperformed other published methods using the Berlin BCI IV (2008) competition dataset and was comparable to the best results in the Berlin BCI II (2002-3) competition dataset. The new method was also applied to electroencephalography (EEG) data recorded from five subjects undertaking a hand squeeze task and demonstrated high levels of accuracy with a mean classification accuracy of 78.9% after five-fold cross-validation. Our new approach has been shown to give accurate results across different motor tasks and signal types as well as between subjects.
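The abstract's core idea — feeding candidate network structures into a stochastic search rather than hand-picking features — can be sketched as a toy random search over hidden-layer configurations. This is an illustrative sketch only, not the authors' implementation: the `evaluate` function is a placeholder standing in for training an MLP with the given hidden-layer sizes and scoring it by cross-validated classification accuracy, and all names and bounds here are assumptions.

```python
import random

def evaluate(config):
    # Placeholder fitness: in the real method this would train an MLP with
    # hidden layers sized per `config` on the EEG data and return its
    # cross-validated accuracy. Here we deterministically favour a moderate
    # total number of units so the search has something to optimize.
    return 1.0 / (1.0 + abs(sum(config) - 60))

def stochastic_search(n_candidates=50, max_hidden_layers=3, max_units=100, seed=0):
    # Randomly sample hidden-layer configurations (depth and width),
    # score each one, and keep the best-scoring structure.
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(n_candidates):
        depth = rng.randint(1, max_hidden_layers)
        config = tuple(rng.randint(1, max_units) for _ in range(depth))
        score = evaluate(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score
```

In the paper's setting, the evaluated population would be trained on the raw time-domain signal, so the network itself discovers discriminative features instead of relying on pre-defined ones.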


pone.0131328.g008: Number of layers of the artificial neural networks (ANNs). The total number of layers in the final ANN classifier after each fold of cross-validation for every participant. A dot is placed in the relevant row for each classifier, where the number of hidden layers is two less than the total number of layers, since there is always an input and an output layer. (A) Results for the two-class dataset. (B) Results for the three-class dataset. The networks trained on the three-class dataset have a higher median number of layers than the networks trained on the two-class data (Wilcoxon rank sum test, p = 0.0094).

Mentions: Fig 8 shows the number of layers in the final ANN for each participant over the five-fold cross-validation. There was no clear pattern to the number of hidden layers that were selected; however, there was a tendency for more hidden layers to be used in the three-class problem compared with the two-class problem (Wilcoxon rank sum test, p = 0.0094). The number of neurons in each hidden layer is presented in S1 Appendix. The results do not demonstrate a consistent pattern in the number of neurons chosen for a given participant.
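The group comparison above uses a Wilcoxon rank sum test. Its core statistic (the equivalent Mann-Whitney U) can be computed directly by counting pairwise wins; the sketch below is illustrative, with hypothetical data — a real analysis would also convert U to a p-value, e.g. via a normal approximation or `scipy.stats.ranksums`.

```python
def rank_sum_u(a, b):
    # Mann-Whitney U statistic, equivalent to the Wilcoxon rank-sum test:
    # over all pairs, count how often a value from `a` exceeds one from `b`,
    # with ties counted as one half. U far from len(a)*len(b)/2 indicates
    # the two samples (e.g. layer counts of two- vs three-class networks)
    # differ in location.
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u
```

For identical samples U equals half the number of pairs, reflecting no tendency in either direction.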

