The self-paced Graz brain-computer interface: methods and applications.

Scherer R, Schloegl A, Lee F, Bischof H, Jansa J, Pfurtscheller G - Comput Intell Neurosci (2007)

Bottom Line: The presented system is able to automatically reduce electrooculogram (EOG) artifacts, to detect electromyographic (EMG) activity, and uses only three bipolar EEG channels. Two applications are presented: the freeSpace virtual environment (VE) and the Brainloop interface. The Brainloop interface provides an interface between the Graz-BCI and Google Earth.


Affiliation: Laboratory of Brain-Computer Interfaces, Institute for Knowledge Discovery, Graz University of Technology, Krenngasse 37, 8010 Graz, Austria. reinhold.scherer@tugraz.at

ABSTRACT
We present the self-paced 3-class Graz brain-computer interface (BCI) which is based on the detection of sensorimotor electroencephalogram (EEG) rhythms induced by motor imagery. Self-paced operation means that the BCI is able to determine whether the ongoing brain activity is intended as control signal (intentional control) or not (non-control state). The presented system is able to automatically reduce electrooculogram (EOG) artifacts, to detect electromyographic (EMG) activity, and uses only three bipolar EEG channels. Two applications are presented: the freeSpace virtual environment (VE) and the Brainloop interface. The freeSpace is a computer-game-like application where subjects have to navigate through the environment and collect coins by autonomously selecting navigation commands. Three subjects participated in these feedback experiments and each learned to navigate through the VE and collect coins. Two out of the three succeeded in collecting all three coins. The Brainloop interface provides an interface between the Graz-BCI and Google Earth.



Figure 4: (a) Screenshot of the “Brainloop” interface. The upper part of the screen was used to select the command. The available options were presented in a scroll bar in the lower part of the screen. (b) Available commands for operating Google Earth and the motor imagery tasks used. (c) The four levels of selection. (d) Photographs of the “Brainloop” performance.

Mentions: The Brainloop interface for Google Earth was implemented in Java (Sun Microsystems Inc., Santa Clara, CA, USA). Communication with the BCI was realized by means of the UDP protocol; communication with Google Earth used the TCP/IP protocol. A multilevel selection procedure was created to access the whole functional range of Google Earth. Figure 4(a) shows a screenshot of the interface. The user was represented by an icon positioned in the center of the display. The commands at the user's disposal were placed around this icon and could be selected by moving the feedback cursor in the desired direction. The three main commands “scroll,” “select,” and “back” were selected by moving the cursor to the left, right, or down, respectively. After each command, Google Earth's virtual camera moved to the corresponding position. By combining the downward cursor movement with left or right, the commands “show borders” and “show cities” were activated (Figure 4(b)). During the transition time tT, the feedback cursor moved towards the desired control command (NC to IC) or back to the user icon presented in the middle of the screen (IC to NC). Once the feedback cursor was close to a command, that command was highlighted and accepted. Figure 4(c) summarizes the four hierarchically arranged selection levels. Levels 1 to 3 were used to select the continent, the continental area, and the country. The scroll bar at level 4 contained commands for the virtual camera (“scan,” “move,” “pan,” “tilt,” and “zoom”). For this level, the command assignment was also changed (see Figure 4(b)). Every selection was made by scrolling through the available options and picking the highlighted one. While the “scroll” command was selected, the options scrolled at a speed of approximately 2 items/s from right to left. For more details on the interface, see [22].
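
The paragraph above outlines the Brainloop data flow: classifier output from the BCI arrives over UDP, and the resulting commands are forwarded to Google Earth over TCP/IP. A minimal Java sketch of such a bridge follows. The port numbers, the one-label-per-datagram packet format, and the line-based command strings sent to Google Earth are illustrative assumptions only; the paper does not specify these details.

    import java.io.OutputStream;
    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    /**
     * Minimal sketch of a BCI-to-Google-Earth bridge, loosely following the
     * architecture described above: classifier output arrives via UDP and
     * commands are forwarded over a TCP socket. Ports, the packet format
     * (one class label per datagram) and the outgoing command strings are
     * assumptions for illustration, not the interface described in [22].
     */
    public class BrainloopBridgeSketch {

        public static void main(String[] args) throws Exception {
            // UDP socket on which the BCI delivers classifier output (assumed port).
            DatagramSocket bciSocket = new DatagramSocket(5555);

            // TCP connection to the process controlling Google Earth (assumed host/port).
            try (Socket earthSocket = new Socket("localhost", 6666)) {
                OutputStream earthOut = earthSocket.getOutputStream();
                byte[] buffer = new byte[64];

                while (true) {
                    // Block until the BCI sends the next detected motor imagery class.
                    DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                    bciSocket.receive(packet);
                    String label = new String(packet.getData(), 0, packet.getLength(),
                            StandardCharsets.UTF_8).trim();

                    // Map cursor direction (class label) to the three main commands.
                    String command;
                    switch (label) {
                        case "LEFT":  command = "scroll"; break;
                        case "RIGHT": command = "select"; break;
                        case "DOWN":  command = "back";   break;
                        default:      continue; // non-control state: ignore
                    }

                    // Forward the chosen command as one text line (assumed protocol).
                    earthOut.write((command + "\n").getBytes(StandardCharsets.UTF_8));
                    earthOut.flush();
                }
            }
        }
    }

In practice the bridge would also have to implement the transition time tT and the multilevel selection state machine described above; the sketch only illustrates the UDP-in/TCP-out structure.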

