2022
DOI: 10.3390/mti6060047
A Survey on Databases for Multimodal Emotion Recognition and an Introduction to the VIRI (Visible and InfraRed Image) Database

Abstract: Multimodal human–computer interaction (HCI) systems pledge a more human–human-like interaction between machines and humans. Their prowess in emanating an unambiguous information exchange between the two makes these systems more reliable, efficient, less error prone, and capable of solving complex tasks. Emotion recognition is a realm of HCI that follows multimodality to achieve accurate and natural results. The prodigious use of affective identification in e-learning, marketing, security, health sciences, etc.…

Cited by 21 publications (10 citation statements)
References 203 publications
“…There are several main methods for sentiment analysis of multimodal data [13]: 1) Fusion feature methods: the features of different modalities are fused, and then sentiment analysis is performed using traditional machine learning algorithms or deep learning models. Common fusion feature methods include feature-level fusion and decision-level fusion [13]. 2) Multimodal feature learning methods: mapping features of different modalities into a shared feature space by learning the correlation between multimodal data.…”
Section: Related Work 2.1 Sentiment Analysis of Multimodal Data
confidence: 99%
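The two fusion strategies named in the excerpt above can be illustrated with a short sketch. The snippet below is a minimal, self-contained illustration, not the cited authors' implementation: the feature names, dimensions, and random projection matrices are hypothetical stand-ins. It shows feature-level fusion as concatenation of modality features, and multimodal feature learning as projection of each modality into a shared space. Decision-level fusion, the other strategy mentioned, is sketched after the third citation statement below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-extracted unimodal features for one sample
# (names and dimensions are illustrative assumptions, not from the survey).
text_feat = rng.normal(size=128)   # e.g. a sentence embedding
audio_feat = rng.normal(size=64)   # e.g. prosodic/acoustic features
image_feat = rng.normal(size=256)  # e.g. a CNN face embedding

# 1) Feature-level fusion: concatenate the modality features and pass the
#    joint vector to a single downstream classifier.
fused_features = np.concatenate([text_feat, audio_feat, image_feat])

# 2) Multimodal feature learning: map each modality into a shared space
#    (random matrices here stand in for learned projections), so that
#    correlations between modalities are captured in one representation.
d_shared = 32
W_text = rng.normal(size=(d_shared, text_feat.size))
W_audio = rng.normal(size=(d_shared, audio_feat.size))
W_image = rng.normal(size=(d_shared, image_feat.size))

shared = np.stack([W_text @ text_feat, W_audio @ audio_feat, W_image @ image_feat])
joint_representation = shared.mean(axis=0)  # simple pooling over the shared space
```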
“…Multimodal data usually contains different types of information, such as text, images, audio, etc. The attention mechanism can help the model automatically learn the level of attention between different modalities and determine how much each modality contributes to the sentiment analysis [13]. This can better utilize the information of different modalities and improve the accuracy and performance of sentiment analysis.…”
Section: Related Work 2.1 Sentiment Analysis of Multimodal Data
confidence: 99%
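The modality-attention idea in this excerpt can be sketched in a few lines. The following is a hedged illustration under stated assumptions (modality embeddings already projected to a common size, a random vector standing in for a trained attention parameter); it is not the cited paper's model. Each modality embedding is scored, the scores are softmax-normalised, and the resulting weights indicate how much each modality contributes to the fused representation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical modality embeddings projected to a common size d
# (all names and values are illustrative stand-ins).
d = 32
rng = np.random.default_rng(1)
modalities = {
    "text":  rng.normal(size=d),
    "audio": rng.normal(size=d),
    "image": rng.normal(size=d),
}

# A random vector standing in for a learned attention parameter.
w_att = rng.normal(size=d)

# Score each modality, normalise, and form the attention-weighted sum.
names = list(modalities)
scores = np.array([modalities[n] @ w_att for n in names])
weights = softmax(scores)
fused = sum(w * modalities[n] for w, n in zip(weights, names))

for n, w in zip(names, weights):
    print(f"{n}: attention weight {w:.2f}")
```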
“…The video data is divided into speech and image data, and emotional features are extracted from each stream to obtain a single-modality expression recognition model per stream. The two models are then fused to obtain a multimodal integrated expression recognition model (Siddiqui et al., 2022). Figure 1 shows the specific process.…”
Section: Multimodal Emotion Recognition
confidence: 99%
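A common way to combine two such unimodal recognisers is decision-level (late) fusion of their class-probability outputs. The sketch below is an illustrative assumption, not the pipeline from Figure 1 of the citing paper: the emotion classes, probabilities, and modality weights are made-up stand-ins.

```python
import numpy as np

# Illustrative emotion classes; the real label set depends on the dataset.
EMOTIONS = ["anger", "happiness", "sadness", "surprise"]

def fuse_predictions(speech_probs, image_probs, w_speech=0.4, w_image=0.6):
    """Weighted decision-level fusion of two unimodal classifier outputs."""
    fused = w_speech * np.asarray(speech_probs) + w_image * np.asarray(image_probs)
    return EMOTIONS[int(np.argmax(fused))], fused

# Stand-in outputs of a speech-based and an image-based expression model.
label, probs = fuse_predictions(
    speech_probs=[0.10, 0.55, 0.25, 0.10],
    image_probs=[0.05, 0.70, 0.15, 0.10],
)
print(label, probs)  # e.g. "happiness"
```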
“…These unimodal datasets have significantly contributed to advancing emotion recognition models within their respective modalities. However, it is important to acknowledge that unimodal datasets may not cover the complete spectrum of emotions expressed through multiple modalities [10]. To overcome this limitation, using multimodal datasets has become crucial, as they provide a comprehensive understanding of emotions.…”
Section: Introduction
confidence: 99%