Proceedings of the 4th International Conference on Machine Learning and Soft Computing 2020
DOI: 10.1145/3380688.3380694
Valence-Arousal Model based Emotion Recognition using EEG, peripheral physiological signals and Facial Expression

Cited by 14 publications (6 citation statements) | References 19 publications
“…Lew et al. [14] proposed a regionally operated domain adversarial network (RODAN) to learn spatial-temporal relations, which reported 62.93% and 63.97% in the two dimensions, respectively. Zhu et al. [15] obtained 78.47% and 72.2% accuracy scores by fusing multimodal decision-level features through a 3D convolutional network. Alazrai et al. [16] achieved the best results of 86.6% and 85.8% in the valence and arousal dimensions and 72.5% on four classes by constructing a high-resolution representation in the time-frequency domain with a quadratic time-frequency distribution (QTFD).…”
Section: Results on DEAP Dataset (mentioning)
confidence: 99%
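Alazrai et al.'s approach, as summarized above, classifies EEG from a quadratic time-frequency image of the signal. As a rough illustration of what a quadratic TFD computes, the sketch below implements the discrete Wigner-Ville distribution, the basic member of that family; the specific distribution and smoothing kernel used in [16] are not stated in the excerpt, so this is a hedged stand-in rather than their method.

```python
import numpy as np
from scipy.signal import hilbert  # to form the analytic signal

def wigner_ville(x):
    """Discrete Wigner-Ville distribution, the basic quadratic TFD.

    x : 1-D complex (analytic) signal of length N.
    Returns an (N, N) real array; rows index time, columns frequency bins
    (frequencies span twice the usual FFT grid in this convention).
    """
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.zeros((N, N), dtype=complex)
    for n in range(N):
        tau_max = min(n, N - 1 - n)           # lags limited by signal edges
        taus = np.arange(-tau_max, tau_max + 1)
        acf = np.zeros(N, dtype=complex)      # instantaneous autocorrelation
        acf[taus % N] = x[n + taus] * np.conj(x[n - taus])
        W[n] = np.fft.fft(acf)                # FFT over lag -> frequency
    return W.real

# Toy usage: a 4 Hz sine sampled at 128 Hz (a typical EEG rate).
t = np.arange(0, 1, 1 / 128)
tfd = wigner_ville(hilbert(np.sin(2 * np.pi * 4 * t)))
```

Practical QTFDs additionally smooth this distribution with a kernel to suppress cross-terms before the image is passed to a classifier.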
“…Zhu et al. [15] obtained 78.47% and 72.2% accuracy scores by fusing multimodal decision-level features through a 3D convolutional network. Alazrai et al.…”
Section: Results on DEAP Dataset (mentioning)
confidence: 99%
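Both excerpts attribute decision-level fusion to Zhu et al. [15]: each modality is classified on its own, and only the resulting class probabilities are combined. A minimal sketch of weighted decision-level fusion, with purely illustrative weights:

```python
import numpy as np

def fuse_decisions(prob_list, weights):
    """Weighted decision-level fusion of per-modality class probabilities.

    prob_list : one (n_classes,) probability vector per modality.
    weights   : one weight per modality, e.g. tuned on a validation set.
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()   # normalize so the output stays a distribution
    fused = sum(w * p for w, p in zip(weights, prob_list))
    return int(np.argmax(fused)), fused

# Hypothetical single-trial outputs for EEG, peripheral signals, and face;
# the weights are invented for illustration, not those of [15].
probs = [np.array([0.3, 0.7]), np.array([0.45, 0.55]), np.array([0.2, 0.8])]
label, fused = fuse_decisions(probs, weights=[0.5, 0.2, 0.3])
```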
“…They achieved 80% accuracy for valence and 74% for arousal on the DEAP dataset using a subject-dependent strategy in a multimodal approach. In a similar study, Zhu et al. (2020) used a weighted decision-level fusion strategy to combine EEG, peripheral physiological signals, and facial expressions for recognizing the arousal-valence state. They used a 3D convolutional neural network (CNN) to extract and classify facial features, and a 1D CNN to extract and classify EEG features.…”
Section: Related Work (mentioning)
confidence: 99%
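The excerpt above outlines the paper's two-branch design: a 3D CNN over face clips and a 1D CNN over EEG, fused at the decision level. The sketch below shows one plausible shape for such branches in PyTorch; all layer sizes and input dimensions are invented for illustration, since the excerpt gives no hyperparameters.

```python
import torch
import torch.nn as nn

class EEGBranch(nn.Module):
    """1D CNN over EEG: input (batch, eeg_channels, samples)."""
    def __init__(self, eeg_channels=32, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(eeg_channels, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(128, n_classes),
        )
    def forward(self, x):
        return self.net(x)

class FaceBranch(nn.Module):
    """3D CNN over face clips: input (batch, 3, frames, height, width)."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )
    def forward(self, x):
        return self.net(x)

# Each branch is trained separately; their softmax outputs would then be
# combined with the weighted decision-level fusion sketched earlier.
eeg_logits = EEGBranch()(torch.randn(8, 32, 512))       # 8 trials, 32 ch, 512 samples
face_logits = FaceBranch()(torch.randn(8, 3, 16, 64, 64))
```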
“…Combining different physiological signals for emotion recognition (Yazdani et al., 2012; Shu et al., 2018) or fusing only behavioral modalities has been widely explored (Busso et al., 2008; McKeown et al., 2011). Recently, some studies have tried to improve emotion recognition methods by exploiting both physiological and behavioral techniques (Zheng et al., 2018; Huang et al., 2019; Zhu et al., 2020). Many studies used a combination of facial expressions and EEG signals to achieve this improvement (Koelstra and Patras, 2013; Huang et al., 2017; Zhu et al., 2020).…”
Section: Introduction (mentioning)
confidence: 99%