Accurate binary classification of electroencephalography (EEG) signals is a challenging task in the development of motor imagery (MI) brain-computer interface (BCI) systems. In this study, two sliding-window techniques are proposed to enhance binary MI classification. The first, named SW-LCR, computes the longest consecutive repetition (LCR) in the sequence of predictions across all sliding windows. The second, named SW-Mode, computes the mode of that prediction sequence. Common spatial patterns (CSP) are used for feature extraction and linear discriminant analysis (LDA) for classification of each time window. Both SW-LCR and SW-Mode are applied to the publicly available BCI Competition IV-2a dataset of healthy individuals and to a dataset of stroke patients. Compared with the existing state of the art, SW-LCR performed better on healthy individuals and SW-Mode performed better on the stroke-patient dataset for left- vs. right-hand MI, with lower standard deviation. For both datasets, the classification accuracy (CA) was approximately 80% and kappa (κ) was 0.6. The results show that sliding-window-based prediction of MI using SW-LCR and SW-Mode is robust against inter-trial and inter-session inconsistencies in the time of activation within a trial and can thus lead to reliable performance in a neurorehabilitative BCI setting.
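As a rough illustration of the two decision-aggregation rules described above, the sketch below (a hypothetical Python/NumPy implementation, not the authors' code) takes the sequence of per-window class labels produced by a CSP+LDA classifier for one trial and returns the trial-level label under SW-LCR (the class with the longest consecutive run of identical predictions) and under SW-Mode (the most frequent class).

```python
import numpy as np
from itertools import groupby

def sw_lcr(window_preds):
    """SW-LCR: return the class whose prediction has the longest
    consecutive repetition across the sliding windows of one trial."""
    best_label, best_run = None, 0
    for label, run in groupby(window_preds):
        run_len = sum(1 for _ in run)
        if run_len > best_run:
            best_label, best_run = label, run_len
    return best_label

def sw_mode(window_preds):
    """SW-Mode: return the most frequent class across the sliding windows."""
    labels, counts = np.unique(window_preds, return_counts=True)
    return labels[np.argmax(counts)]

# Example: per-window LDA predictions for one trial (0 = left hand, 1 = right hand)
preds = [0, 1, 1, 1, 0, 1, 1, 0, 0]
print(sw_lcr(preds))   # 1 (longest consecutive run: 1, 1, 1)
print(sw_mode(preds))  # 1 (five of the nine windows predict class 1)
```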
Imagination of movement can be used as a control method for a brain-computer interface (BCI), allowing communication for the physically impaired. Visual feedback within such a closed-loop system excludes those with visual problems, and hence there is a need for alternative sensory feedback pathways. In the context of replacing the visual channel with the auditory channel, this study aims to add to the limited evidence that visual feedback can be replaced by its auditory equivalent and to assess the impact this has on BCI performance. Secondly, the study aims to determine for the first time whether the type of auditory feedback significantly influences motor imagery performance. Auditory feedback is presented using a stepped approach of single (mono), double (stereo), and multiple (vector base amplitude panning as an audio game) loudspeaker arrangements. Visual feedback involves a ball-basket paradigm and a spaceship game. Each session consists of either auditory or visual feedback only, with runs of each presentation method of that feedback type applied within the session. Results from seven subjects across five sessions of each feedback type (visual, auditory; 10 sessions in total) show that auditory feedback is a suitable substitute for the visual equivalent and that there are no statistically significant differences between the types of auditory feedback presented across the five sessions.
The performance of a brain–computer interface (BCI) generally improves as the volume of training data increases. However, a classifier's generalization ability is often negatively affected when highly non-stationary data are collected across both sessions and subjects. The aim of this work is to reduce the long calibration time in BCI systems by proposing a transfer learning model that can evaluate unseen single trials of a subject without requiring training-session data from that subject. The proposed method combines two components: a generalization of the previously proposed subject-specific "multivariate empirical-mode decomposition" preprocessing technique, obtained by taking a fixed 8–30 Hz band for all four motor imagery tasks, and a novel classification model that exploits the structure of tangent-space features drawn from the Riemannian geometry framework and shared among the training data of multiple sessions and subjects. Results demonstrate comparable performance improvement across multiple subjects without subject-specific calibration, when compared with other state-of-the-art techniques.
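For readers unfamiliar with the tangent-space features mentioned above, the following sketch (an illustrative NumPy/SciPy implementation, not the paper's pipeline) shows the standard Riemannian tangent-space mapping: each trial is summarized by its spatial covariance matrix, every covariance is whitened by a common reference matrix, and the matrix logarithm of the whitened covariance is vectorized as a feature that can be pooled across sessions and subjects. For simplicity the reference is taken as the arithmetic mean of the covariances, whereas the Riemannian geometric mean is typically used in practice.

```python
import numpy as np
from scipy.linalg import sqrtm, logm, inv

def trial_covariance(trial):
    """Spatial covariance of one trial with shape (channels, samples)."""
    x = trial - trial.mean(axis=1, keepdims=True)
    return x @ x.T / (x.shape[1] - 1)

def tangent_space_features(covs, c_ref):
    """Project SPD covariance matrices onto the tangent space at c_ref.

    Each matrix C is mapped to logm(c_ref^{-1/2} C c_ref^{-1/2}); the upper
    triangle is vectorized, with off-diagonal entries scaled by sqrt(2) so that
    Euclidean distances between feature vectors approximate Riemannian ones.
    """
    ref_inv_sqrt = np.real(inv(sqrtm(c_ref)))
    n = c_ref.shape[0]
    iu = np.triu_indices(n)
    weights = np.where(iu[0] == iu[1], 1.0, np.sqrt(2.0))
    feats = []
    for c in covs:
        s = logm(ref_inv_sqrt @ c @ ref_inv_sqrt)
        feats.append(weights * s[iu].real)
    return np.array(feats)

# Toy example: 20 trials of 8-channel, 250-sample band-passed (8-30 Hz) EEG
rng = np.random.default_rng(0)
trials = rng.standard_normal((20, 8, 250))
covs = np.array([trial_covariance(t) for t in trials])
c_ref = covs.mean(axis=0)  # arithmetic-mean reference (assumption for this sketch)
features = tangent_space_features(covs, c_ref)
print(features.shape)      # (20, 36): 8*(8+1)/2 tangent-space features per trial
```

These feature vectors live in a Euclidean space, so they can be fed directly to a standard classifier (e.g. LDA) trained on data pooled from multiple sessions and subjects.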