Automatic emotion recognition is a challenging task in affective computing. To detect emotion from nonstationary EEG signals, a sophisticated learning algorithm that can represent high-level abstraction is required. This study proposes the utilization of a deep learning network (DLN) to discover unknown feature correlations between input signals that are crucial for the learning task. The DLN is implemented with a stacked autoencoder (SAE) using a hierarchical feature-learning approach. Input features of the network are power spectral densities of 32-channel EEG signals from 32 subjects. To alleviate the overfitting problem, principal component analysis (PCA) is applied to extract the most important components of the initial input features. Furthermore, covariate shift adaptation of the principal components is implemented to minimize the nonstationary effect of EEG signals. Experimental results show that the DLN is capable of classifying three different levels of valence and arousal with accuracies of 49.52% and 46.03%, respectively. Principal-component-based covariate shift adaptation enhances the respective classification accuracies by 5.55% and 6.53%. Moreover, the DLN outperforms SVM and naive Bayes classifiers.
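The pipeline described above (PSD features, PCA dimensionality reduction, then a deep network) can be sketched roughly as follows. This is a minimal stand-in, not the authors' implementation: it substitutes an ordinary multi-layer perceptron for the pretrained stacked autoencoder, and the data shapes, component count, and layer sizes are illustrative assumptions.

```python
# Sketch of a PSD -> PCA -> deep-network classification pipeline.
# Synthetic data stands in for real 32-channel EEG PSD features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 160))   # e.g. 32 channels x 5 frequency-band PSDs
y = rng.integers(0, 3, size=200)  # three valence (or arousal) levels

clf = make_pipeline(
    PCA(n_components=50),                        # keep the dominant components
    MLPClassifier(hidden_layer_sizes=(100, 50),  # two stacked hidden layers
                  max_iter=300, random_state=0),
)
clf.fit(X, y)
pred = clf.predict(X)
```

In the abstract's approach the hidden layers would first be pretrained as autoencoders before supervised fine-tuning, and the PCA outputs would additionally be shifted toward each test session's statistics (covariate shift adaptation) before classification.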
We propose to use real-time EEG signals to classify happy and unhappy emotions elicited by pictures and classical music. We use PSD as a feature and SVM as a classifier. The average accuracies of the subject-dependent model and subject-independent model are approximately 75.62% and 65.12%, respectively. Considering each pair of channels, the temporal pair (T7 and T8) gives better results than pairs in other areas. Considering different frequency bands, high-frequency bands (Beta and Gamma) give better results than low-frequency bands. Considering different time durations for emotion elicitation, the result from 30 seconds does not differ significantly from the result from 60 seconds. Based on all of these results, we implement a real-time EEG-based happiness detection system using only one pair of channels. Furthermore, we develop games based on the happiness detection system to help users recognize and control their happiness.
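The feature/classifier combination described above (Welch-style band power from a T7/T8 channel pair, fed to an SVM) can be sketched as follows. The sampling rate, band edges, trial lengths, and data are illustrative assumptions, not values from the paper.

```python
# Sketch: PSD band power from a two-channel (T7/T8) EEG pair -> SVM.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC

FS = 128  # Hz, assumed sampling rate

def band_power(signal, lo, hi, fs=FS):
    """Mean Welch PSD of `signal` within the [lo, hi) Hz band."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

rng = np.random.default_rng(1)
# 40 trials x 2 channels (T7, T8) x 30 s of synthetic EEG
trials = rng.normal(size=(40, 2, FS * 30))
bands = [(13, 30), (30, 45)]  # Beta and Gamma, the better-performing bands
X = np.array([[band_power(ch, lo, hi) for ch in tr for lo, hi in bands]
              for tr in trials])
y = rng.integers(0, 2, size=40)  # happy vs. unhappy labels

clf = SVC(kernel="rbf").fit(X, y)
```

With only one channel pair and two bands the feature vector stays very small (four values per trial here), which is what makes a real-time implementation on consumer hardware practical.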
Introduction This study examines the clinical efficacy of a game-based neurofeedback training (NFT) system to enhance cognitive performance in patients with amnestic mild cognitive impairment (aMCI) and healthy elderly subjects. The NFT system includes five games designed to improve attention span and cognitive performance. The system estimates attention levels from the power spectra of the Beta and Alpha bands. Methods We recruited 65 women with aMCI and 54 healthy elderly women. All participants were treated with care as usual (CAU); 58 were treated with CAU + NFT (20 sessions of 30 minutes each, 2–3 sessions per week), 36 with CAU + exergame-based training, while 25 patients had only CAU. Cognitive functions were assessed using the Cambridge Neuropsychological Test Automated Battery both before and after treatment. Results NFT significantly improved rapid visual processing and spatial working memory (SWM), including strategy, when compared with exergame training and no active treatment. aMCI was characterized by impairments in SWM (including strategy), pattern recognition memory, and delayed matching to sample. Conclusion Treatment with NFT improves sustained attention and SWM. Nevertheless, NFT had no significant effect on pattern recognition memory or short-term visual memory, which are the other hallmarks of aMCI. The NFT system used here may selectively improve sustained attention, strategy, and executive functions, but not the other cognitive impairments that characterize aMCI in women.
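A common way to estimate attention from Beta- and Alpha-band power spectra, as the NFT system above does, is a Beta/Alpha power ratio: Beta power tends to rise and Alpha power to fall with focused attention. The sketch below assumes this ratio formulation along with an illustrative sampling rate and band edges; the paper's exact index may differ.

```python
# Sketch: attention index as the Beta/Alpha band-power ratio of an EEG epoch.
import numpy as np
from scipy.signal import welch

FS = 256  # Hz, assumed sampling rate

def attention_index(eeg, fs=FS):
    """Ratio of mean Beta (13-30 Hz) to mean Alpha (8-13 Hz) Welch PSD."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs)
    alpha = psd[(freqs >= 8) & (freqs < 13)].mean()
    beta = psd[(freqs >= 13) & (freqs < 30)].mean()
    return beta / alpha
```

A game loop would map this index (smoothed over recent epochs and calibrated per user) onto in-game feedback, rewarding the player when the index stays above a personal threshold.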
For future healthcare applications, which are increasingly moving towards out-of-hospital or home-based caring models, the ability to remotely and continuously monitor patients' conditions effectively is imperative. Among others, emotional state is one of the conditions that could be of interest to doctors or caregivers. This paper discusses a preliminary study to develop a wearable device: a low-cost, single-channel, dry-contact, in-ear EEG suitable for non-intrusive monitoring. All aspects of the design, engineering, and experimentation, including the application of machine learning for emotion classification, are covered. Based on the valence and arousal emotion model, the device is able to classify basic emotions with 71.07% accuracy (valence), 72.89% accuracy (arousal), and 53.72% accuracy (all four emotions). The results are comparable to those measured from more conventional EEG headsets at the T7 and T8 scalp positions. These results, together with its earphone-like wearability, suggest its potential, especially for future healthcare applications such as home-based or tele-monitoring systems, as intended.