Every year, millions of patients regain consciousness during surgery and may subsequently suffer from post-traumatic disorders. We recently showed that detecting motor activity from electroencephalographic (EEG) signals during median nerve stimulation could be used to alert medical staff that a patient is waking up and trying to move under general anesthesia [1], [2]. In this work, we measure the accuracy and false positive rate of several deep learning models (EEGNet, a deep convolutional network, and a shallow convolutional network) trained directly on filtered EEG data to detect motor imagery. We compare them with efficient non-deep approaches, namely a linear discriminant analysis based on common spatial patterns (CSP+LDA), the minimum distance to Riemannian mean algorithm applied to covariance matrices (MDM), and a logistic regression based on a tangent space projection of covariance matrices (TS+LR). EEGNet significantly improves classification performance compared with the other classifiers (p-value < 0.01); moreover, it outperforms the best non-deep classifier (TS+LR) by 7.2% in accuracy. This approach promises to improve intraoperative awareness detection during general anesthesia.
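As an illustration of the non-deep baselines only, the sketch below assembles CSP+LDA, MDM, and TS+LR pipelines with MNE and pyriemann; the deep models (EEGNet and the convolutional networks) would typically be trained with a dedicated EEG deep learning library and are not shown. The epoch array X, the labels y, and the cross-validation setup are placeholders, not the data or protocol of the original study.

```python
# Hypothetical sketch of the non-deep baselines (CSP+LDA, MDM, TS+LR).
# X stands in for band-pass filtered EEG epochs of shape
# (n_epochs, n_channels, n_times); y for the motor-imagery labels.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from mne.decoding import CSP
from pyriemann.estimation import Covariances
from pyriemann.classification import MDM
from pyriemann.tangentspace import TangentSpace

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 32, 500))   # placeholder: 120 trials, 32 channels, 500 samples
y = rng.integers(0, 2, size=120)          # placeholder labels: motor imagery vs. rest

pipelines = {
    "CSP+LDA": make_pipeline(CSP(n_components=6, log=True),
                             LinearDiscriminantAnalysis()),
    "MDM":     make_pipeline(Covariances(estimator="oas"), MDM()),
    "TS+LR":   make_pipeline(Covariances(estimator="oas"),
                             TangentSpace(),
                             LogisticRegression(max_iter=1000)),
}

for name, pipe in pipelines.items():
    scores = cross_val_score(pipe, X, y, cv=5)  # fold-wise accuracy
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```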
The permutation entropy (PE) measure is investigated for cases where different time lags are used to build the embedded vectors and the signal under analysis is available at different sampling rates. PE of orders 2 to 5 was calculated for electroencephalogram signals from healthy subjects, both for the raw signal and for a sequence of downsampled signals, using different time lags. For various combinations of downsampling factor and time lag, the averaged PE plots indicate that the PE of a signal is the same for certain combinations. Specifically, PE values are equal for combinations of integer lag and downsampling factor whose product is the same. This emphasizes the need to specify both the time lag and the sampling rate (or the time interval between samples in the ordinal patterns) when reporting permutation entropy, and the observation extends to other techniques based on the analysis of embedded patterns.
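As a worked illustration of this equivalence (not the authors' code), the sketch below computes PE of order m with lag tau and checks that, on a long surrogate signal, lag 2 on the raw signal gives approximately the same PE as lag 1 on the signal downsampled by a factor of 2, i.e. combinations with the same lag-times-downsampling-factor product agree.

```python
# Minimal permutation entropy sketch: order m, embedding lag tau.
import numpy as np
from math import factorial, log

def permutation_entropy(x, m=3, tau=1, normalize=True):
    """Permutation entropy of order m with embedding lag tau."""
    x = np.asarray(x)
    n_vectors = len(x) - (m - 1) * tau
    # Ordinal pattern of each embedded vector (x[t], x[t+tau], ..., x[t+(m-1)*tau])
    patterns = np.array([np.argsort(x[t:t + m * tau:tau]) for t in range(n_vectors)])
    # Relative frequency of each observed ordinal pattern
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    pe = -np.sum(p * np.log(p))
    return pe / log(factorial(m)) if normalize else pe

rng = np.random.default_rng(42)
signal = np.cumsum(rng.standard_normal(20000))  # surrogate signal standing in for an EEG trace

for m in (3, 4, 5):
    pe_lag2 = permutation_entropy(signal, m=m, tau=2)       # lag 2, no downsampling
    pe_ds2 = permutation_entropy(signal[::2], m=m, tau=1)   # lag 1, downsampled by 2
    print(f"order {m}: lag-2 PE = {pe_lag2:.4f}, downsampled-by-2 PE = {pe_ds2:.4f}")
```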
In this article, we study how combined motor imageries can be detected to deliver more commands in a Brain-Computer Interface (BCI) for controlling a robotic arm. Motor imageries are a major way of delivering commands in a BCI, yet few systems use more than three of them: right hand, left hand, and feet. Combining these three provides four additional commands. We present an electrophysiological study showing that (i) simple motor imageries mainly modulate the electrical activity over the cortical area related to the body part involved in the imagined movement, and that (ii) combined motor imageries reflect a superposition of the electrical activity of the corresponding simple motor imageries. As a first step, a shrinkage linear discriminant analysis was used to test how a resting state and seven motor imageries can be detected. Eleven healthy subjects participated in the experiment, in which motor imageries were intuitively assigned to movements of a robotic arm with 7 degrees of freedom.
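To illustrate the classification step only, the sketch below fits a shrinkage LDA (scikit-learn's lsqr solver with automatic Ledoit-Wolf shrinkage) to placeholder per-trial feature vectors for the resting state plus the seven simple and combined motor-imagery classes; the feature extraction and the data are placeholders, not those of the study.

```python
# Hypothetical sketch: shrinkage LDA over 8 classes (rest + 7 motor imageries).
# X_feat stands in for per-trial EEG features (e.g. band power or CSP features);
# it is random data here, not the study's recordings.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

classes = ["rest", "left", "right", "feet",
           "left+right", "left+feet", "right+feet", "left+right+feet"]

rng = np.random.default_rng(0)
n_trials_per_class, n_features = 40, 24
X_feat = rng.standard_normal((n_trials_per_class * len(classes), n_features))
y = np.repeat(np.arange(len(classes)), n_trials_per_class)

# 'lsqr' solver with automatic (Ledoit-Wolf) shrinkage of the covariance estimate
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
scores = cross_val_score(clf, X_feat, y, cv=5)
print(f"8-class accuracy on placeholder features: {scores.mean():.3f}")
```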