2019
DOI: 10.1109/access.2019.2949707

The Fusion of Electroencephalography and Facial Expression for Continuous Emotion Recognition

Abstract: Recently, the study of emotion recognition has received increasing attention, driven by the rapid development of noninvasive sensor technologies, machine learning algorithms, and the computing capability of modern hardware. Compared with single-modal emotion recognition, the multimodal paradigm introduces complementary information. Hence, in this work, we present a decision-level fusion framework for detecting emotions continuously by fusing electroencephalography (EEG) and facial expressions. Three …
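The abstract describes a decision-level fusion framework but the truncated text does not state the fusion rule. A minimal sketch, assuming a simple weighted average of per-modality class probabilities; the weight `w_eeg` and the two-class example are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def decision_level_fusion(p_eeg, p_face, w_eeg=0.5):
    """Fuse per-modality emotion predictions by weighted averaging.

    p_eeg, p_face: class-probability vectors from the EEG and the
    facial-expression classifiers; w_eeg: hypothetical weight given
    to the EEG branch (1 - w_eeg goes to the facial branch).
    """
    p_eeg = np.asarray(p_eeg, dtype=float)
    p_face = np.asarray(p_face, dtype=float)
    fused = w_eeg * p_eeg + (1.0 - w_eeg) * p_face
    return fused / fused.sum()  # renormalize to a probability vector

# Example: the two modalities disagree; fusion tempers both votes.
fused = decision_level_fusion([0.8, 0.2], [0.4, 0.6], w_eeg=0.6)
```

Decision-level fusion of this kind keeps each modality's classifier independent, so one branch can be retrained or dropped without touching the other.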

Cited by 52 publications (27 citation statements)
References 42 publications
“…In Figure 4, we depict the process of EEG-based human emotion recognition discussed in [92]. Several approaches have used EEG for emotion recognition [93, 94]. For instance, the proposal [95] uses EEG to collect peripheral signals.…”
Section: Types of Activity Monitoring and Methodologies
confidence: 99%
“…The representation ability of the dimensional model is more reliable and accurate than in the discrete model. Some researchers treat emotion recognition as a continuous problem (Soleymani et al 2015; Sen and Sert 2018; Li et al 2019). To simplify the problem, other researchers usually treat the dimension prediction as a classification task, such as two-class (low/high), three-class (lower/middle/higher), and four-class (quadrants of arousal/valence space) (Gunes and Schuller 2013).…”
Section: Emotion Model
confidence: 99%
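The four-class quadrant scheme mentioned in the excerpt above can be made concrete. A minimal sketch, assuming a 1–9 self-assessment scale with its midpoint at 5; the threshold and label names are assumptions, not taken from the cited works:

```python
def valence_arousal_quadrant(valence, arousal, midpoint=5.0):
    """Map continuous valence/arousal ratings to the four quadrant
    classes of the valence-arousal space. The 1-9 scale and midpoint
    of 5 are illustrative assumptions."""
    high_v = valence >= midpoint
    high_a = arousal >= midpoint
    if high_v and high_a:
        return "HVHA"  # high valence, high arousal (e.g. excitement)
    if high_v:
        return "HVLA"  # high valence, low arousal (e.g. contentment)
    if high_a:
        return "LVHA"  # low valence, high arousal (e.g. anger)
    return "LVLA"      # low valence, low arousal (e.g. sadness)
```

The two-class and three-class variants described in the excerpt follow the same pattern with one axis and more thresholds.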
“…Yohanes et al (2012) proposed to extract the EEG feature using discrete wavelet transform coefficients to utilize the temporal information. Li et al (2019) employed power spectrum density to extract features. Peng et al (2019) also explored using phase lag index to construct function connectivity to diagnose depression.…”
Section: Physiological Signal Emotion Recognition
confidence: 99%
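Power-spectral-density features of the kind the excerpt attributes to Li et al. (2019) are commonly computed with Welch's method. A minimal sketch for one EEG channel; the sampling rate, band edges, and rectangular integration are illustrative assumptions, not the cited paper's exact pipeline:

```python
import numpy as np
from scipy.signal import welch

def band_power_features(eeg, fs=128, bands=None):
    """Compute band-power features from one EEG channel via Welch's
    power spectral density estimate. Band edges are common defaults."""
    if bands is None:
        bands = {"theta": (4, 8), "alpha": (8, 13),
                 "beta": (13, 30), "gamma": (30, 45)}
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    df = freqs[1] - freqs[0]
    feats = {}
    for name, (lo, hi) in bands.items():
        mask = (freqs >= lo) & (freqs < hi)
        feats[name] = np.sum(psd[mask]) * df  # integrate PSD over the band
    return feats

# Synthetic check: a signal dominated by a 10 Hz (alpha) rhythm.
np.random.seed(0)
t = np.arange(0, 4, 1 / 128)
sig = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)
feats = band_power_features(sig)
```

On the synthetic signal, the alpha band power dominates the other bands, as expected for a 10 Hz oscillation.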
“…Happy and Routray [8] used image processing to detect emotional states in facial expression. Li et al [9] merged facial image processing with electroencephalography (EEG) for improved emotional state detection, indicating that affective systems benefit from being multimodal. Yang et al [10] demonstrated emotion detection through speech for AI-based home assistants.…”
Section: Introduction
confidence: 99%