Brain-computer interfaces (BCIs) are communication systems that allow people to send messages or commands without movement. BCIs rely on different types of signals in the electroencephalogram (EEG), typically P300 potentials, steady-state visually evoked potentials (SSVEPs), or event-related desynchronization. Early BCI systems were often evaluated with a selected group of subjects, and many articles do not report data from subjects who performed poorly. These and other factors have made it difficult to estimate how many people could use different BCIs. The present study explored how many subjects could use an SSVEP BCI. We recorded data from 53 subjects while they participated in one to four runs that were each 4 min long. During these runs, the subjects focused on one of four LEDs, each flickering at a different frequency. The eight-channel EEG data were analyzed with a minimum-energy parameter estimation algorithm and classified with linear discriminant analysis (LDA) into one of the four classes. Online results showed that the SSVEP BCI provided effective communication for all 53 subjects, with a grand average accuracy of 95.5%; 96.2% of the subjects reached an accuracy above 80%, and none fell below 60%. This study showed that SSVEP-based BCI systems can reach very high accuracies after only a very short training period. The SSVEP approach worked for all participating subjects, who attained accuracies well above chance level. This is important because it shows that SSVEP BCIs could provide communication for some users when other approaches might not work for them.
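The frequency-tagged design described above lends itself to a simple baseline detector: compare EEG spectral power at each LED's flicker frequency and pick the strongest. The sketch below is a plain-NumPy illustration of that idea; the function name, harmonic count, and channel-averaging scheme are assumptions for illustration, not the paper's minimum-energy plus LDA pipeline.

```python
import numpy as np

def ssvep_power_classify(eeg, fs, stim_freqs, harmonics=2):
    """Pick the stimulation frequency with the highest spectral power.

    eeg: (n_channels, n_samples) array; fs: sampling rate in Hz.
    Simplified stand-in for the minimum-energy + LDA pipeline.
    """
    n = eeg.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(eeg, axis=1)) ** 2
    mean_power = spectrum.mean(axis=0)        # average over channels
    scores = []
    for f in stim_freqs:
        s = 0.0
        for h in range(1, harmonics + 1):     # sum fundamental + harmonics
            idx = np.argmin(np.abs(freqs - h * f))
            s += mean_power[idx]
        scores.append(s)
    return int(np.argmax(scores))             # index into stim_freqs
```

With a four-class setup such as the one above, the returned index identifies which of the four flicker frequencies the subject attended to.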
A brain-computer interface (BCI) provides a completely new output pathway and can offer an additional means of expression to people with disorders such as amyotrophic lateral sclerosis (ALS), brainstem stroke, brain or spinal cord injury, or other diseases that impair the common output pathways responsible for muscle control. In a P300-based BCI, a matrix of randomly flashing characters is presented to the participant. To spell a character, the person attends to it and counts how many times it flashes. Although most BCIs are designed to help people with disabilities, they are mainly tested on healthy, young subjects, who may achieve better results than people with impairments. In this study we compare measurements performed on people with motor impairments, such as stroke or ALS, to measurements performed on healthy people. The overall accuracy of the persons with motor impairments reached 70.1%, compared with 91% for the group of healthy subjects. Looking at single subjects, one interesting example shows that when it is difficult for a patient to concentrate on one character for a longer period of time, accuracy can be higher when fewer flashes (i.e., stimuli) are presented. Furthermore, the influence of several tuning parameters is discussed, as some participants require adaptations to achieve useful spelling results. Finally, exclusion criteria for people who are not able to use the device are defined.
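The counting paradigm above can be illustrated with a minimal averaging detector: epochs time-locked to each item's flashes are averaged, and the item whose average shows the largest deflection in the typical P300 latency window is selected. This is a didactic sketch under assumed names and window bounds; real spellers train a classifier (e.g., stepwise LDA) rather than thresholding raw amplitude.

```python
import numpy as np

def p300_select(epochs_by_item, fs, window=(0.25, 0.45)):
    """Return the item whose averaged epoch has the largest mean
    amplitude in the P300 window (seconds after flash onset).

    epochs_by_item: dict mapping item -> (n_flashes, n_samples) array
    of EEG epochs aligned to that item's flash onsets.
    """
    lo, hi = int(window[0] * fs), int(window[1] * fs)
    best_item, best_score = None, -np.inf
    for item, epochs in epochs_by_item.items():
        avg = epochs.mean(axis=0)      # average over repeated flashes
        score = avg[lo:hi].mean()      # mean amplitude in P300 window
        if score > best_score:
            best_item, best_score = item, score
    return best_item
```

Averaging over repeated flashes is what makes the small P300 visible against background EEG, which is also why reducing the number of flashes, as discussed above, trades signal quality for the patient's ability to stay focused.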
A brain-computer interface (BCI) translates brain activity into commands to control devices or software. Common approaches are based on visual evoked potentials (VEPs), extracted from the electroencephalogram (EEG) during visual stimulation. High information transfer rates (ITR) can be achieved using (i) steady-state VEPs (SSVEP) or (ii) code-modulated VEPs (c-VEP). This study investigates how applicable such systems are to continuous control of robotic devices and which method performs best. Eleven healthy subjects steered a robot along a track using four BCI controls on a computer screen in combination with video feedback of the movement. The average time to complete the tasks was (i) 573.43 s and (ii) 222.57 s. In a second, non-continuous trial-based validation run, the maximum achievable online classification accuracy over all subjects was (i) 91.36% and (ii) 98.18%. These results show that the c-VEP approach fits the needs of a continuous system better than the SSVEP implementation.
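The information transfer rates mentioned above are conventionally computed with Wolpaw's formula, which combines the number of classes and the classification accuracy; multiplying by selections per minute gives bits/min. A minimal sketch, assuming equiprobable classes and uniformly distributed errors:

```python
import math

def itr_bits_per_selection(n_classes, accuracy):
    """Wolpaw ITR in bits per selection for an n-class BCI.

    Assumes all classes are equally likely and misclassifications are
    uniformly distributed over the wrong classes.
    """
    n, p = n_classes, accuracy
    if p >= 1.0:
        return math.log2(n)           # perfect accuracy: log2(N) bits
    if p <= 1.0 / n:
        return 0.0                    # at or below chance level
    return (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))
```

For a four-class system, perfect accuracy yields 2 bits per selection, while chance-level accuracy (25%) yields zero, which is why accuracy and selection speed must be reported together.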
Decoding brain activity associated with high-level tasks may lead to an independent and intuitively controlled brain-computer interface (BCI). Most of today's BCI research focuses on analyzing the electroencephalogram (EEG), which provides only limited spatial and temporal resolution. In contrast, electrocorticographic (ECoG) signals allow the investigation of spatially highly focused task-related activation within the high-gamma frequency band, making the discrimination of individual finger movements or complex grasping tasks possible. Common spatial patterns (CSP) are widely used in BCI systems and provide a powerful tool for feature optimization and dimensionality reduction. This work focused on the discrimination of (i) three complex hand movements and (ii) hand movement versus idle state. Two subjects, S1 and S2, performed single 'open', 'peace' and 'fist' hand poses in multiple trials. Signals in the high-gamma frequency range between 100 and 500 Hz were spatially filtered with a CSP algorithm for (i) and (ii); additionally, a manual feature selection approach was tested for (i). For (i), a multi-class linear discriminant analysis (LDA) showed error rates of 13.89%/7.22% for S1 and 18.42%/1.17% for S2 using manually/CSP-selected features; for (ii), a two-class LDA led to classification errors of 13.39% and 2.33% for S1 and S2, respectively.
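CSP, as used above, finds spatial filters that maximize the variance of band-filtered signals for one class while minimizing it for the other, via a generalized eigendecomposition of the two class covariance matrices. The NumPy sketch below is a simplified textbook version (function name and whitening-based solution are illustrative choices, not the authors' exact pipeline):

```python
import numpy as np

def csp_filters(X1, X2, n_filters=2):
    """Common spatial patterns for two classes of EEG/ECoG trials.

    X1, X2: (n_trials, n_channels, n_samples) arrays of band-filtered
    trials. Returns (n_channels, 2*n_filters) spatial filters taken
    from both ends of the eigenvalue spectrum.
    """
    def mean_cov(X):
        return np.mean([np.cov(trial) for trial in X], axis=0)

    C1, C2 = mean_cov(X1), mean_cov(X2)
    # Whiten the composite covariance C1 + C2 ...
    evals, evecs = np.linalg.eigh(C1 + C2)
    W = evecs @ np.diag(evals ** -0.5)
    # ... then diagonalize the whitened class-1 covariance
    d, B = np.linalg.eigh(W.T @ C1 @ W)       # eigenvalues ascending
    filters = W @ B
    # Extreme eigenvalues give the most discriminative filters
    idx = np.concatenate([np.arange(n_filters),
                          np.arange(len(d) - n_filters, len(d))])
    return filters[:, idx]
```

Log-variances of the trials projected through these filters are the low-dimensional features typically fed into the LDA classifier mentioned above.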
Intention recognition through decoding brain activity could lead to a powerful and independent brain-computer interface (BCI) allowing for intuitive control of devices such as robots. A common strategy for realizing such a system is the motor imagery (MI) BCI using electroencephalography (EEG). Switching to invasive recordings such as electrocorticography (ECoG) allows very robust features to be extracted and an idle state to be introduced easily, which might simplify the mental task and allow the subject to focus on the environment. Especially for multi-channel recordings like ECoG, common spatial patterns (CSP) provide a powerful tool for feature optimization and dimensionality reduction. This work focuses on an invasive and independent MI BCI that allows triggering from an idle state and therefore facilitates tele-operation of a humanoid robot. The task was to lift a can with the robot's hand. One subject participated and reached 95.4% mean online accuracy after six runs of 40 trials. To our knowledge, this is the first online experiment with an MI BCI using CSPs from ECoG signals.