2023
DOI: 10.36227/techrxiv.15184302.v2
Deep Emotion Recognition using Facial, Speech and Textual Cues: A Survey

Abstract: With the development of social media and human-computer interaction, it is essential to serve people by perceiving their emotional states in videos. In recent years, a large number of studies have tackled emotion recognition based on the three most common modalities in videos: face, speech, and text. This paper sorts out the relevant studies on emotion recognition using facial, speech, and textual cues based on deep learning techniques, owing to the lack of review papers concentratin…

Cited by 3 publications (1 citation statement)
References 93 publications (117 reference statements)
“…Koromilas et al [22] exhaustively introduced an intermediate fusion method based on a deep multimodal learning architecture, but failed to mention early fusion and late fusion techniques. Zhang et al [23] thoroughly depicted single-modal emotion recognition technologies, such as facial expression recognition, speech emotion recognition, and text emotion recognition. For MER, only feature-level and decision-level fusion methods were summarized.…”
Section: Introduction
confidence: 99%