Emotion is a complex set of interactions among subjective and objective factors, mediated by neural and hormonal systems, that gives rise to affective experiences, generates cognitive processes, and activates physiological changes and behavior. Emotions can be recognized reliably from EEG signals. The electroencephalogram (EEG) directly reflects the activity of the millions of neurons residing within the brain. Different emotional states produce distinct EEG patterns in different brain regions, so EEG provides a reliable means of identifying the underlying emotional information. This paper proposes a novel approach to recognizing users' emotions from EEG signals. Audio signals are used as stimuli to elicit positive and negative emotions in the subjects. EEG signals were acquired from eight healthy subjects using seven channels of an EEG amplifier. The results reveal that the frontal, temporal, and parietal regions of the brain are relevant to positive-emotion recognition, while the frontal and parietal regions are activated during negative-emotion identification. After appropriate preprocessing of the raw EEG, features are extracted from each channel over the whole frequency band using the Multifractal Detrended Fluctuation Analysis (MFDFA) method. A Support Vector Machine (SVM) classifier is then employed to assign the EEG feature vectors associated with the various emotional states to their respective classes. SVM is compared with Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), and K-Nearest Neighbor (KNN). For positive emotions, the average classification accuracy over the whole frequency band is 84.50% for SVM, versus 76.50% for QDA, 75.25% for LDA, and only 69.625% for KNN; for negative emotions, it is 82.50% for SVM, versus 72.375% for QDA, 65.125% for LDA, and 70.50% for KNN.
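As an illustration of the feature-extraction idea named in the abstract, the sketch below implements ordinary (monofractal) Detrended Fluctuation Analysis with NumPy. This is a simplified stand-in, not the paper's method: MFDFA generalizes this procedure by computing q-th order moments of the segment fluctuations; here only the q = 2 case is shown, and all data are synthetic.

```python
import numpy as np

def dfa(signal, scales):
    """Detrended Fluctuation Analysis: return the scaling exponent alpha.

    Monofractal special case (q = 2) of the MFDFA procedure the paper
    uses; MFDFA would additionally vary the moment order q.
    """
    x = np.asarray(signal, dtype=float)
    profile = np.cumsum(x - x.mean())        # integrated, mean-centered profile
    flucts = []
    for s in scales:
        n_seg = len(profile) // s
        segments = profile[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        rms = []
        for seg in segments:
            coeffs = np.polyfit(t, seg, 1)   # local linear detrending
            rms.append(np.sqrt(np.mean((seg - np.polyval(coeffs, t)) ** 2)))
        flucts.append(np.mean(rms))
    # slope of log F(s) versus log s gives the scaling exponent alpha
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha

# White noise should yield alpha near 0.5 (uncorrelated signal)
rng = np.random.default_rng(0)
alpha_white = dfa(rng.normal(size=4096), scales=[16, 32, 64, 128, 256])
```

In an emotion-recognition pipeline such as the one described, an exponent like this (or the full multifractal spectrum) would be computed per EEG channel and concatenated into the feature vector fed to the SVM.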
Abstract: In this study, subjects were presented with audio-visual stimuli to elicit positive, neutral, and negative emotions, which were then recognized from EOG signals. Hjorth parameters and the Discrete Wavelet Transform (DWT, with the Haar mother wavelet) were employed as feature extractors. Support Vector Machine (SVM) and Naïve Bayes (NB) classifiers were used to classify the emotions. The multiclass classification results show that, in terms of accuracy, the combinations DWT+SVM and Hjorth+NB perform best for each of the emotions. With DWT, the average SVM accuracies for the three emotions are 81%, 76.33%, and 78.61% for horizontal eye movement and 79.85%, 75.63%, and 77.67% for vertical eye movement. With Hjorth parameters, the Naïve Bayes classifier achieves average recognition rates of 78.43%, 74.61%, and 76.34% for horizontal and 77.11%, 74.03%, and 75.84% for vertical eye movement. These results indicate the potential of a real-time EOG-based emotion assessment system.
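The Hjorth parameters mentioned above have a standard closed-form definition: activity is the signal variance, mobility is the square root of the ratio of the first derivative's variance to the signal's variance, and complexity is the ratio of the derivative's mobility to the signal's mobility. A minimal NumPy sketch (not the study's code; the test signal is a synthetic sinusoid standing in for an EOG trace):

```python
import numpy as np

def hjorth_parameters(x):
    """Compute Hjorth activity, mobility, and complexity of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)                  # first difference approximates the derivative
    ddx = np.diff(dx)                # second difference
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / activity)
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

# Example: a 5 Hz sinusoid sampled at 100 Hz; a pure sinusoid has
# complexity close to 1, since its derivative has the same waveform shape.
t = np.arange(0, 1, 0.01)
act, mob, comp = hjorth_parameters(np.sin(2 * np.pi * 5 * t))
```

In practice these three values would be computed per channel (horizontal and vertical EOG) and per window, then passed to the Naïve Bayes classifier as features.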