This work presents the design, implementation, and evaluation of a P300-based brain-machine interface (BMI) developed to control a robotic hand-orthosis. The system is intended to assist patients with amyotrophic lateral sclerosis (ALS) who cannot open and close their hands by themselves. The user of the interface can select one of six targets, which represent either the flexion-extension of one finger independently or the movement of the five fingers simultaneously. We tested our BMI offline and online on eighteen healthy subjects (HS) and eight ALS patients. In the offline test, we used the calibration data of each participant recorded in the experimental sessions to estimate the accuracy of the BMI in correctly classifying single epochs as target or non-target trials. On average, the system accuracy was 78.7% for target epochs and 85.7% for non-target trials. Additionally, we observed significant P300 responses in the calibration recordings of all participants, including the ALS patients. In the online test, each subject performed from 6 to 36 target-selection attempts using the interface. Around 46% of the participants achieved 100% accuracy, and the average online accuracy was 89.83%. The maximum information transfer rate (ITR) observed in the experiments was 52.83 bit/min, whereas the average ITR was 18.13 bit/min. The contributions of this work are as follows. First, we report the development and evaluation of a mind-controlled robotic hand-orthosis for patients with ALS. To our knowledge, this BMI is one of the first P300-based assistive robotic devices with multiple targets evaluated on people with ALS. Second, we provide a database with calibration data and online EEG recordings obtained during the evaluation of our BMI. These data are useful for developing and comparing other BMI systems and for testing the processing pipelines of similar applications.
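The ITR figures above can be related to the accuracy and number of targets through the standard Wolpaw formula for BCI information transfer rate. The sketch below computes it for a 6-target interface; the selection rate (selections per minute) is an assumption for illustration, since the abstract does not report selection times.

```python
import math

def wolpaw_itr(n_targets: int, accuracy: float, selections_per_min: float) -> float:
    """Wolpaw ITR in bit/min for an N-target BCI at a given selection accuracy."""
    p = accuracy
    if p >= 1.0:
        bits = math.log2(n_targets)          # perfect accuracy: full log2(N) bits per selection
    elif p <= 1.0 / n_targets:
        bits = 0.0                           # at or below chance: no information transferred
    else:
        bits = (math.log2(n_targets)
                + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n_targets - 1)))
    return bits * selections_per_min

# Illustration only: 6 targets at the reported 89.83% average online accuracy,
# with an assumed rate of 10 selections per minute.
print(wolpaw_itr(6, 0.8983, 10.0))
```

Note that at chance accuracy (1/6 for six targets) the formula yields zero bits, and at 100% accuracy it yields log2(6) ≈ 2.58 bits per selection.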
This work involved human subjects in its research. Approval of all ethical and experimental procedures and protocols was granted by the Research and Ethical Committees of the National Institute of Rehabilitation "LGII" under Application No. 08/19, and the study was performed in line with the Declaration of Helsinki.
Currently, one of the challenges in EEG-based brain-computer interfaces (BCI) for neurorehabilitation is recognizing the intention to perform different movements of the same limb. This would allow end-users finer control of neurorehabilitation and motor-recovery devices. To address this issue, we assess the feasibility of recognizing two rehabilitative right upper-limb movements from premovement EEG signals. These rehabilitative movements were self-selected and self-initiated by the users on a motor-rehabilitation robotic device. This work proposes anticipatory detection scenarios that discriminate EEG signals corresponding to a non-movement state and the movement intentions of two same-limb movements. The studied movements were discriminated above the empirical chance levels in all proposed detection scenarios. Percentages of correctly anticipated trials ranged from 64.3% to 77.0%, and the detection times ranged from 620 to 300 ms prior to movement initiation. These results indicate that it is possible to detect the intention to perform two different movements of the same upper limb, as well as the non-movement state. Based on these results, the decoding of movement intention could potentially be used to develop more natural and intuitive robot-assisted neurorehabilitation therapies.
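Claims such as "discriminated above the empirical chance levels" depend on the number of trials: with few trials, an accuracy well above the nominal chance rate can still be statistically insignificant. The abstract does not specify how the empirical chance levels were obtained (e.g., via permutation tests); the sketch below uses a binomial-based threshold as one common approximation, with trial counts chosen purely for illustration.

```python
import math

def chance_level(n_trials: int, n_classes: int = 2, alpha: float = 0.05) -> float:
    """Smallest decoding accuracy significantly above chance at level alpha,
    assuming correct classifications follow a binomial distribution."""
    p = 1.0 / n_classes

    def upper_tail(k: int) -> float:
        # P(X >= k) for X ~ Binomial(n_trials, p)
        return sum(math.comb(n_trials, i) * p**i * (1 - p)**(n_trials - i)
                   for i in range(k, n_trials + 1))

    # Smallest k whose upper-tail probability drops to alpha or below.
    for k in range(n_trials + 1):
        if upper_tail(k) <= alpha:
            return k / n_trials
    return 1.0

# Illustration only: a 2-class (movement vs. non-movement) test with 40 trials.
print(chance_level(40, n_classes=2))
```

As expected, the significance threshold shrinks toward the nominal chance rate (here 50%) as the number of trials grows, which is why reported accuracies should always be judged against the trial count.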