2020
DOI: 10.1109/access.2020.2981760
Outlier Processing in Multimodal Emotion Recognition

Abstract: Automatic emotion recognition plays a key role in human-computer interaction. Multimodal emotion recognition has attracted much attention in recent years. When multiple modalities are used, they interact with each other, and the results obtained tend to be more accurate in general. However, unimodal anomalies also occur. Most existing studies do not account for outliers in the multimodal data, which lowers the accuracy of the prediction results. This paper prop…


Cited by 13 publications (5 citation statements)
References 49 publications
“…For example, RECOLA hired 6 assistants to annotate each frame, and our proposed system can achieve UAR (arousal) = 61.9% and UAR (valence) = 59.7%. LIRIS-ACCEDE, on which Yi et al. [56] achieved UAR = 45.63/38.20 (valence/arousal) and Zhang et al. [54] achieved CCC = 91.8/94.6 (valence/arousal), uses pairwise comparisons rather than rating approaches. From each pair of video excerpts, three annotators have to identify the one that conveys the given emotion most strongly in terms of valence or arousal.…”
Section: Discussion
confidence: 99%
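The discussion above compares systems by UAR (unweighted average recall) and CCC (concordance correlation coefficient). As a minimal sketch of how these two metrics are conventionally computed (the function names here are illustrative, not from the cited papers):

```python
import numpy as np

def uar(y_true, y_pred):
    """Unweighted average recall: the mean of per-class recall,
    so each emotion class counts equally regardless of its size."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return float(np.mean(recalls))

def ccc(x, y):
    """Concordance correlation coefficient between two continuous
    series (e.g. predicted vs. annotated valence), penalizing both
    low correlation and mean/scale mismatch."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    cov = np.mean((x - mx) * (y - my))
    return float(2 * cov / (x.var() + y.var() + (mx - my) ** 2))
```

UAR is often preferred over plain accuracy on emotion corpora because class distributions are highly imbalanced; CCC is the standard measure for continuous valence/arousal prediction.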
“…To date, few works in the literature investigate the intersection between smart manufacturing and emotion recognition, and these are mainly related to improving human-machine interaction [14,74]. On the other hand, no work can be identified that exploits emotion modelling and recognition as a design technique for customizable and innovative objects.…”
Section: Related Work
confidence: 99%
“…The introduction of affective factors to HCIs resulted in the development of an interdisciplinary research field, often called affective computing, which attempts to develop human-aware AI that can perceive, understand, and regulate emotions [112]. Once computers understand humans' emotions, AI will rise to a new level [113]. In the HCSII research field, there is an increasing focus on developing emotional AI in HCI, since emotion recognition using AI is a fundamental prerequisite to improving HCI [106].…”
Section: Backgrounds and Related Work
confidence: 99%