2017
DOI: 10.14569/ijacsa.2017.080955
Classification of Human Emotions from Electroencephalogram (EEG) Signal using Deep Neural Network

Abstract: Estimation of human emotions from Electroencephalogram (EEG) signals plays a vital role in developing robust Brain-Computer Interface (BCI) systems. In our research, we used a Deep Neural Network (DNN) to address EEG-based emotion recognition. This was motivated by the recent advances in accuracy and efficiency from applying deep learning techniques in pattern recognition and classification applications. We adapted a DNN to identify human emotions of a given EEG signal (DEAP dataset) from power spectral de…
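The abstract describes classifying DEAP EEG trials with a DNN trained on power spectral features, but the exact architecture and feature layout are not given here. The following is a minimal sketch only: the layer sizes, the placeholder feature matrix X, and the labels y are assumptions for illustration, not the authors' configuration.

```python
# Minimal sketch: feed-forward network on per-trial PSD features (assumed shapes).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(1280, 32 * 5))   # placeholder PSD features (DEAP: 1280 trials, 32 channels, 5 bands)
y = rng.integers(0, 2, size=1280)     # placeholder high/low valence labels

# Hidden-layer sizes are illustrative only; the paper's exact topology is not reproduced here.
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0))
scores = cross_val_score(clf, X, y, cv=10)   # 10-fold cross-validation, as used in the cited comparisons
print(f"mean accuracy: {scores.mean():.3f}")
```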

Cited by 85 publications (68 citation statements)
References 22 publications
“…The comparison of our model with previous studies is shown below.

Research | Method | Average accuracy (valence) | Average accuracy (arousal) | Cross validation
Li et al [16] | C-RNN | 72.06% | 74.12% | 5-fold
Al-Nafjan et al [17] | PSD+DNN | 82.00% | 82.00% | 10-fold
Liu et al [18] | Multimodal Deep Learning | 85.20% | 80.50% | 10-fold
Our model | ResNet+LFCC+KNN | 90.39% | 89.06% | 10-fold

For the four-class case (low valence/high arousal, low valence/low arousal, high valence/high arousal, high valence/low arousal), the best accuracy of the proposed approach is 90.21%, and the performance with different classifiers is shown in Fig.…”
Section: Results for 10-fold Cross Validation
confidence: 99%
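The excerpt above reports binary valence/arousal accuracy as well as a four-quadrant setting. As a point of reference only, a common way to derive the four quadrant labels from DEAP-style 1-9 self-assessment ratings is to threshold at the scale midpoint; the threshold of 5 and the example ratings below are assumptions, not taken from the cited papers.

```python
# Hedged sketch: mapping valence/arousal ratings (1-9) to the four quadrants.
import numpy as np

def quadrant_labels(valence, arousal, threshold=5.0):
    """Return 0..3 codes for LV/LA, LV/HA, HV/LA, HV/HA (midpoint threshold is an assumed convention)."""
    high_v = np.asarray(valence) > threshold
    high_a = np.asarray(arousal) > threshold
    return (high_v.astype(int) << 1) | high_a.astype(int)

# Example with made-up ratings.
print(quadrant_labels([2.0, 3.5, 7.0, 8.5], [8.0, 2.0, 1.5, 9.0]))  # -> [1 0 2 3]
```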
“…This is because, technically, including channels in the model that are not correlated with the emotion changes does not help and, on the contrary, can adversely affect the accuracy. It is also known that the electrical relations between asymmetrical channels determine the arousal and valence, and hence the emotion [73,74]. Therefore, we chose four asymmetrical pairs of electrodes: AF1, F3, F4, F7, T7, AF2, F5, F8 and T8 from the frontal and temporal lobes, which are equally spread on the skull.…”
Section: Channel Selection
confidence: 99%
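A minimal illustration of this kind of channel selection, picking a named subset of channels out of a (channels x samples) array. The channel list, the fake data, and the trial length are placeholders; AF1/AF2 follow the excerpt's naming rather than any particular montage.

```python
# Hedged sketch: keeping only a named subset of EEG channels from a (channels x samples) array.
import numpy as np

# Placeholder channel list (truncated) and fake samples with a DEAP-like trial length.
all_channels = ["Fp1", "AF1", "F3", "F7", "T7", "Fp2", "AF2", "F4", "F8", "T8"]
data = np.random.randn(len(all_channels), 8064)

selected = ["AF1", "F3", "F4", "F7", "T7", "AF2", "F5", "F8", "T8"]
idx = [all_channels.index(ch) for ch in selected if ch in all_channels]
subset = data[idx, :]   # only the asymmetrical frontal/temporal channels that exist in the recording
print(subset.shape)
```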
“…We outline two different EEG preprocessing approaches (Section 2.1) and, in this context, we evaluate (Section 3) the discriminative capacity of various EEG features (Section 2.2), which were reported to be successful in previous related studies [23][24][25][28][29][30][31][32][33][34]. These EEG features are based on the following: Next, in Section 2.3 we outline the DEAP database, and in Section 2.4 the common experimental protocol used in all experiments.…”
Section: Methods
confidence: 99%
“…In studies that use PSD as an EEG feature, traditionally all five frequency bands (alpha, beta, gamma, delta and theta) are considered [8,23,24]. In some cases, low frequency bands were omitted [25], or only specific bands (such as alpha and beta) were used [26,27]. The typical way of calculating PSD-based features is through a short-time Discrete Fourier Transform (stDFT) (in practice, the Fast Fourier Transform (FFT)) applied to non-overlapping frames of the segmented EEG signal [24,26].…”
Section: Introduction
confidence: 99%
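As a hedged illustration of the band-power computation described in that excerpt (not the cited papers' exact implementation): the 128 Hz sampling rate, the 1 s non-overlapping frames, and the band edges below are common choices for DEAP-style data, assumed here rather than taken from the sources.

```python
# Hedged sketch: per-band PSD power from non-overlapping frames via FFT (assumed parameters).
import numpy as np

FS = 128   # assumed DEAP-style sampling rate (Hz)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(signal, frame_len=FS):
    """Average per-band power over non-overlapping 1 s frames of a single channel."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    spectra = np.abs(np.fft.rfft(frames, axis=1)) ** 2        # periodogram of each frame
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / FS)
    return {name: spectra[:, (freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

print(band_powers(np.random.randn(FS * 60)))                  # fake 60 s single-channel signal
```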