Reinforcement learning for adaptive threshold control of restorative brain-computer interfaces: a Bayesian simulation.

Bauer R, Gharabaghi A - Front Neurosci (2015)

Bottom Line: For each feedback iteration, we first determined the thresholds that result in minimal action entropy and maximal instructional efficiency. We then used the resulting vector for the simulation of continuous threshold adaptation. Finally, on the basis of information theory, we provided an explanation for the achieved benefits of adaptive threshold setting.


Affiliation: Division of Functional and Restorative Neurosurgery and Division of Translational Neurosurgery, Department of Neurosurgery, Eberhard Karls University Tuebingen, Tuebingen, Germany; Neuroprosthetics Research Group, Werner Reichardt Centre for Integrative Neuroscience, Eberhard Karls University Tuebingen, Tuebingen, Germany.

ABSTRACT
Restorative brain-computer interfaces (BCI) are increasingly used to provide feedback of neuronal states in a bid to normalize pathological brain activity and achieve behavioral gains. However, patients and healthy subjects alike often show a large variability, or even inability, of brain self-regulation for BCI control, known as BCI illiteracy. Although current co-adaptive algorithms are powerful for assistive BCIs, their inherent class switching clashes with the operant conditioning goal of restorative BCIs. Moreover, due to the treatment rationale, the classifier of restorative BCIs usually has a constrained feature space, thus limiting the possibility of classifier adaptation. In this context, we applied a Bayesian model of neurofeedback and reinforcement learning for different threshold selection strategies to study the impact of threshold adaptation of a linear classifier on optimizing restorative BCIs. For each feedback iteration, we first determined the thresholds that result in minimal action entropy and maximal instructional efficiency. We then used the resulting vector for the simulation of continuous threshold adaptation. We could thus show that threshold adaptation can improve reinforcement learning, particularly in cases of BCI illiteracy. Finally, on the basis of information theory, we provided an explanation for the achieved benefits of adaptive threshold setting.
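The per-iteration threshold search described in the abstract can be sketched in a few lines. This is a drastically simplified stand-in, not the authors' Bayesian learner model: the control signal is assumed Gaussian, the binary feedback distribution stands in for the learner's action distribution, and all function names and parameters are hypothetical.

```python
import numpy as np
from math import erf, sqrt

def action_entropy(p):
    """Shannon entropy (bits) of a discrete action distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # 0 * log(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))

def feedback_probability(threshold, mu, sigma):
    """P(control signal > threshold), assuming a Gaussian signal
    (a toy assumption, not the paper's Bayesian model)."""
    return 0.5 * (1.0 - erf((threshold - mu) / (sigma * sqrt(2.0))))

def min_entropy_threshold(candidates, mu, sigma):
    """One step of the adaptation vector: scan candidate thresholds and
    pick the one whose induced binary feedback distribution has minimal
    entropy.  With this toy Gaussian stand-in the minimum lands at the
    extreme candidates, which is precisely why the paper evaluates
    entropy over the learner's action distribution instead."""
    entropies = [action_entropy([p, 1.0 - p])
                 for p in (feedback_probability(t, mu, sigma)
                           for t in candidates)]
    return candidates[int(np.argmin(entropies))]
```

Repeating the scan at every feedback iteration, as the signal distribution shifts with learning, would yield the threshold vector used for the simulated continuous adaptation.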

No MeSH data available.



Figure 5: Time course of relative action entropy (y-axis, in decibels) over iterations (x-axis, logarithmic scale): action entropy during threshold adaptation based on minimum action entropy (blue trace) or maximum instructional efficiency (red trace), divided by the action entropy during training with a fixed threshold at maximum classification accuracy (black trace). Subplots show the illiterate (A), moderate (B), and expert (C) environments.

Mentions: Threshold adaptation was performed either along the vector of thresholds that resulted in maximum instructional efficiency (red trace in Figure 4) or minimum action entropy (blue trace in Figure 4), and compared to a threshold fixed at maximum classification accuracy. The comparison showed that adaptation based on instructional efficiency resulted in a phase of comparatively higher action entropy early in training. Subsequently, however, the entropy decreased more rapidly and steeply, as indicated by the crossing of the adaptation trace (instructional efficiency) with the fixed-threshold trace (see Figure 5). This pattern was most pronounced in the illiterate environment (see Figure 5A) and similar in shape, but of lower magnitude, in the other environments (see Figures 5B,C). Notably, the final relative entropy was also smaller in the illiterate environment (see Figure 5A).
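The figure caption describes the y-axis as a decibel-scaled ratio of the adaptive-threshold entropy trace to the fixed-threshold trace. Under that reading, which is our assumption about the plotted quantity rather than a stated formula from the paper, the relative entropy could be computed as:

```python
import numpy as np

def relative_entropy_db(h_adaptive, h_fixed):
    """Ratio of two entropy traces expressed in decibels.
    Negative values mean the adaptive strategy has lower action
    entropy than the fixed-threshold baseline at that iteration."""
    h_adaptive = np.asarray(h_adaptive, dtype=float)
    h_fixed = np.asarray(h_fixed, dtype=float)
    return 10.0 * np.log10(h_adaptive / h_fixed)
```

The crossing described in the text then corresponds to the trace passing through 0 dB: positive early in training, negative once the adaptive strategy overtakes the fixed threshold.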

