Mixed-norm regularization for brain decoding.

Flamary R, Jrad N, Phlypo R, Congedo M, Rakotomamonjy A - Comput Math Methods Med (2014)

Bottom Line: For this purpose, we have introduced a regularizer that induces both sensor selection and classifier similarities. The different regularization approaches are compared on three ERP datasets, showing the benefit of mixed-norm regularization for sensor selection. The multitask approaches are evaluated when only a small number of learning examples is available, yielding significant performance improvements, especially for subjects who perform poorly.


Affiliation: Laboratoire Lagrange, UMR7293, Université de Nice, 00006 Nice, France.

ABSTRACT
This work investigates the use of mixed-norm regularization for sensor selection in event-related potential (ERP) based brain-computer interfaces (BCI). The classification problem is cast as a discriminative optimization framework in which sensor selection is induced through the use of mixed norms. This framework is extended to the multitask learning setting, where several similar classification tasks related to different subjects are learned simultaneously. In this case, multitask learning helps alleviate the data scarcity issue, yielding more robust classifiers. For this purpose, we have introduced a regularizer that induces both sensor selection and classifier similarities. The different regularization approaches are compared on three ERP datasets, showing the benefit of mixed-norm regularization in terms of sensor selection. The multitask approaches are evaluated when a small number of learning examples is available, yielding significant performance improvements, especially for subjects performing poorly.
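The sensor-selection mechanism described in the abstract relies on a mixed norm (typically an l2,1 norm) whose proximal operator zeroes out entire sensor groups at once. The sketch below illustrates this idea with a proximal-gradient step on a linear classifier; the function names, parameters, and loss choice (squared hinge) are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def group_soft_threshold(W, lam):
    # Proximal operator of the l2,1 mixed norm: each row of W corresponds to
    # one sensor. Rows whose l2 norm falls below lam are set to zero entirely,
    # which is what induces sensor selection; surviving rows are shrunk.
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
    return W * scale

def fit_mixed_norm_classifier(X, y, n_sensors, n_times,
                              lam=0.1, lr=0.01, n_iter=500):
    """Proximal-gradient sketch of a linear classifier with an l2,1 penalty.

    Assumptions (hypothetical interface): X is (n_samples, n_sensors * n_times),
    with each ERP epoch flattened sensor by sensor; y holds labels in {-1, +1}.
    Uses a squared hinge loss for the data-fitting term.
    """
    n = X.shape[0]
    W = np.zeros((n_sensors, n_times))
    for _ in range(n_iter):
        w = W.ravel()
        margin = y * (X @ w)
        active = margin < 1  # samples violating the margin
        # gradient of the mean squared hinge loss over violating samples
        grad = -(2.0 / n) * (
            X[active] * (y[active] * (1 - margin[active]))[:, None]
        ).sum(axis=0)
        # gradient step on the loss, then the mixed-norm proximal step
        W = group_soft_threshold(W - lr * grad.reshape(n_sensors, n_times),
                                 lr * lam)
    return W
```

Rows of the returned weight matrix that are exactly zero correspond to discarded sensors, mirroring the selection effect the paper attributes to mixed-norm regularization.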

fig8: AUC performances comparison with EPFL (a) and UAM (b) for 500 training examples per subject.

Mentions: For the UAM dataset, the results are quite different: SVM-Full and MGSVM-2s show a significant improvement over single-task learning. We also note that when only the joint channel-selection regularizer is in play (MGSVM-2), multitask learning performs worse than SVM-Full when fewer than 500 trials are available. We attribute this to the difficulty of achieving appropriate channel selection from only a few training examples, as confirmed by the performance of GSVM-2. From Figure 8, we can see that the good performance of MGSVM-2s stems from an improvement of about 10% in AUC over SVM, achieved on some subjects who perform poorly. More importantly, while the performance of these subjects is significantly increased, those who perform well still achieve good AUC scores. In addition, we emphasize that these improvements are essentially due to the similarity-inducing regularizer.

