2019
DOI: 10.3390/s19235218
EEG-Based Multi-Modal Emotion Recognition using Bag of Deep Features: An Optimal Feature Selection Approach

Abstract: Much attention has been paid to the recognition of human emotions with the help of electroencephalogram (EEG) signals based on machine learning technology. Recognizing emotions is a challenging task due to the non-linear property of the EEG signal. This paper presents an advanced signal processing method using the deep neural network (DNN) for emotion recognition based on EEG signals. The spectral and temporal components of the raw EEG signal are first retained in the 2D spectrogram before the extraction of fe…
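The abstract describes a pipeline that preserves the spectral and temporal structure of the raw EEG signal in a 2D spectrogram before deep feature extraction. The following Python sketch illustrates that general idea only; the sampling rate, spectrogram parameters, and the pretrained ResNet-18 backbone are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch: raw EEG channel -> 2D log-power spectrogram -> deep features
# from a pretrained CNN. Parameters and the CNN choice are assumptions.
import numpy as np
import torch
from scipy.signal import spectrogram
from torchvision.models import resnet18

def eeg_to_spectrogram(signal, fs=128, nperseg=256, noverlap=128):
    """Compute a log-power time-frequency image of one EEG channel."""
    f, t, Sxx = spectrogram(signal, fs=fs, nperseg=nperseg, noverlap=noverlap)
    return np.log(Sxx + 1e-10)  # log scaling keeps the dynamic range manageable

def deep_features(spec, model):
    """Pass the spectrogram through a CNN and return a flat feature vector."""
    x = torch.tensor(spec, dtype=torch.float32)
    x = x.unsqueeze(0).repeat(3, 1, 1).unsqueeze(0)           # shape (1, 3, F, T)
    x = torch.nn.functional.interpolate(x, size=(224, 224))   # CNN's expected input size
    backbone = torch.nn.Sequential(*list(model.children())[:-1])  # drop classifier head
    with torch.no_grad():
        feats = backbone(x)
    return feats.flatten().numpy()

if __name__ == "__main__":
    eeg = np.random.randn(128 * 60)            # 60 s of synthetic EEG at 128 Hz (placeholder)
    spec = eeg_to_spectrogram(eeg)
    cnn = resnet18(weights="IMAGENET1K_V1").eval()
    features = deep_features(spec, cnn)
    print(features.shape)                      # 512-dimensional deep feature vector
```

In a bag-of-deep-features setting such vectors would typically be pooled or clustered across trials before classification; that step is omitted here because the abstract is truncated at that point.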

Cited by 84 publications (40 citation statements). References 48 publications (59 reference statements).
“…We remark that recently published classification techniques from the literature have reported higher classification results than the WINkNN classifier on the SEED (EEG) dataset but at the expense of considerably more time for computation, using all 64 channels as well as all frequency bands; moreover, ad hoc features had to be defined [14].…”
Section: Discussion
Confidence: 99%
“…Table 3 in [12] presents some of the most popular features extracted from time-series including temporal, statistical, spectral, linear, and/or non-linear features. Classifiers used to recognize emotional states typically regard supervised learning including k nearest neighbor (kNN) [14][15][16][17], support vector machine (SVM) [17][18][19], Naive-Bayes (NB) [17], quadratic discriminant analysis (QDA) [20], artificial neural networks [21,22]. Furthermore, unsupervised and semi-supervised learning algorithms are also used [12].…”
Section: Introduction
Confidence: 99%
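The citation above names the supervised classifiers commonly compared on EEG emotion features (kNN, SVM, Naive Bayes, QDA). The scikit-learn sketch below shows how such a comparison is typically set up; the synthetic feature matrix, labels, and hyperparameters are assumptions for illustration, not values from any of the cited studies.

```python
# Illustrative comparison of the classifiers listed above on a synthetic
# EEG feature matrix. All data and hyperparameters are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 64))        # 600 trials x 64 features (placeholder)
y = rng.integers(0, 3, size=600)      # 3 emotion classes (placeholder labels)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

classifiers = {
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf", C=1.0),
    "Naive Bayes": GaussianNB(),
    "QDA": QuadraticDiscriminantAnalysis(),
}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    print(f"{name}: test accuracy = {clf.score(X_te, y_te):.3f}")
```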
“…Naturally, the emotion perception of humans is not just determined by one type of information; it is triggered by a multitude of factors or signals emitted from others. Many studies have utilized multimodality (i.e., visual, audio, and text) to improve the performance of emotion recognition [19,21,22,23,51]. Zhou et al. [51] and Tripathi et al. [19] modeled the relationships among text, visual, and audio modalities by deep learning methods to improve performance.…”
Section: Related Work
Confidence: 99%
“…Majumder et al. [23] used the contextual multi-modality information in a dialogue to detect human social emotions. In addition to the modalities that can be extracted from video, Asghar et al. [21] combined the electroencephalography (EEG) modality to facilitate the model’s performance. In our study, we combined text, visual, and audio modalities to construct a multi-modality emotion recognition model.…”
Section: Related Work
Confidence: 99%