Recently, the study of emotion recognition has received increasing attention, driven by rapid advances in noninvasive sensor technologies, machine learning algorithms, and computing power. Compared with single-modal emotion recognition, the multimodal paradigm introduces complementary information. Hence, in this work, we present a decision-level fusion framework for detecting emotions continuously by fusing electroencephalography (EEG) and facial expressions. Three types of movie clips (positive, negative, and neutral) were used to elicit specific emotions in subjects while the EEG and facial expression signals were recorded simultaneously. Power spectral density (PSD) features of the EEG were extracted by time-frequency analysis, and a subset of these features was then selected for regression. For the facial expressions, facial geometric features were computed from facial landmark localization. Long short-term memory (LSTM) networks were used to perform the decision-level fusion and capture the temporal dynamics of emotions. The results show that the proposed method achieves outstanding performance for continuous emotion recognition, yielding a concordance correlation coefficient (CCC) of 0.625 ± 0.029. The fusion of the two modalities outperformed EEG and facial expressions taken separately. Furthermore, LSTMs with different numbers of time steps were applied to analyze how the temporal dynamics are captured.

INDEX TERMS Continuous emotion recognition, EEG, facial expressions, signal processing, decision level fusion, temporal dynamics.
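The evaluation metric reported above, the concordance correlation coefficient (CCC), measures both the correlation and the agreement in scale and location between predicted and ground-truth emotion traces. A minimal sketch of how it can be computed (using population statistics; the function name is illustrative, not from the paper):

```python
import numpy as np

def concordance_ccc(y_true, y_pred):
    """Lin's concordance correlation coefficient.

    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)

    Uses population (ddof=0) statistics; returns 1.0 for perfect agreement.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mt, mp = y_true.mean(), y_pred.mean()
    vt, vp = y_true.var(), y_pred.var()
    cov = np.mean((y_true - mt) * (y_pred - mp))
    return 2.0 * cov / (vt + vp + (mt - mp) ** 2)
```

Unlike plain Pearson correlation, CCC penalizes systematic bias and scale mismatch, which is why it is the standard choice for continuous (valence/arousal-style) emotion regression.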
Combined with a smart home system, BCI and PoE technology can overcome the shortcomings of traditional systems and enable home appliance management driven by EEG signals. In this paper, we propose an online steady-state visual evoked potential (SSVEP)-based BCI system for controlling several smart home devices.
The brain-computer interface (BCI) plays an important role in assisting patients with amyotrophic lateral sclerosis (ALS), enabling them to participate in communication and entertainment. In this study, a novel channel projection-based canonical correlation analysis (CP-CCA) target identification method for steady-state visual evoked potential (SSVEP)-based BCI systems was proposed. Single-channel electroencephalography (EEG) signals from multiple trials were recorded while the subject was presented with the same stimulus frequency. The CCAs between the single-channel EEG signals of the multiple trials and sine-cosine reference signals were obtained. Then, the optimal reference signal of each channel was used to estimate the test EEG signal. To validate the proposed method, we acquired a training dataset under two testing conditions: the optimal time-window length and the number of training trials. The offline experiments compared the proposed method with the traditional canonical correlation analysis (CCA) and power spectral density analysis (PSDA) methods on a 5-class SSVEP dataset recorded from 10 subjects. Based on the training dataset, an online 3-DOF helicopter control experiment was carried out. The offline experimental results showed that the proposed method outperformed the CCA and PSDA methods in terms of classification accuracy and information transfer rate (ITR). Furthermore, the online 3-DOF helicopter control experiments achieved an average accuracy of 87.94 ± 5.93% with an ITR of 21.07 ± 4.42 bit/min.
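The sine-cosine reference step described above follows the standard CCA-based SSVEP paradigm: for each candidate stimulus frequency, a reference matrix of sines and cosines (fundamental plus harmonics) is built, and the frequency whose references correlate most strongly with the EEG is taken as the target. A minimal numpy sketch of that baseline CCA method (not the proposed CP-CCA; all function names, the sampling rate, and the candidate frequencies are illustrative assumptions):

```python
import numpy as np

def max_canonical_correlation(X, Y):
    """Largest canonical correlation between the column spaces of X and Y.

    X: (n_samples, n_channels) EEG segment; Y: (n_samples, 2*n_harmonics)
    sine-cosine reference. Computed via QR + SVD after mean-centering.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def sine_cosine_reference(freq, fs, n_samples, n_harmonics=2):
    """Reference matrix: sin/cos pairs at freq and its harmonics."""
    t = np.arange(n_samples) / fs
    cols = []
    for h in range(1, n_harmonics + 1):
        cols.append(np.sin(2 * np.pi * h * freq * t))
        cols.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(cols)

def identify_target(eeg, fs, candidate_freqs):
    """Pick the stimulus frequency whose reference best matches the EEG."""
    scores = [
        max_canonical_correlation(
            eeg, sine_cosine_reference(f, fs, eeg.shape[0]))
        for f in candidate_freqs
    ]
    return candidate_freqs[int(np.argmax(scores))]
```

The CP-CCA method of the paper differs in that it replaces the artificial sine-cosine references with optimal references learned per channel from training trials; the sketch above only shows the conventional baseline it is compared against.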