Proceedings of the 2018 Audio/Visual Emotion Challenge and Workshop
DOI: 10.1145/3266302.3266307
Towards a Better Gold Standard

Abstract: Emotions are often perceived by humans through a series of multimodal cues, such as verbal expressions, facial expressions and gestures. In order to recognise emotions automatically, reliable emotional labels are required to learn a mapping from human expressions to corresponding emotions. Dimensional emotion models have become popular and have been widely applied for annotating emotions continuously in the time domain. However, the statistical relationship between emotional dimensions is rarely studied. This …

Cited by 9 publications (2 citation statements)
References 40 publications

Citation statements:
“…Understanding the target distribution can also help improve the reliability of annotations. Wang et al. [58] explore the distribution of emotion annotations using outlier detection methods and use these insights to correct outliers toward the learned distribution, reducing labeling noise and outperforming previous SOTA results. Escalante et al.…”
Section: EDA: Understanding Multimodal Affective Datasets
confidence: 99%
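The correction idea described in this citation can be sketched in Python. The z-score threshold, shrinkage factor, and function name below are assumptions for illustration only, not the actual method of Wang et al. [58]:

```python
from statistics import mean, pstdev

def correct_outliers(trace, z_thresh=3.0, shrink=0.5):
    # Hypothetical sketch of distribution-based outlier correction:
    # samples whose z-score against the trace's own distribution
    # exceeds the threshold are pulled a fraction of the way back
    # toward the trace mean instead of being discarded.
    mu = mean(trace)
    sigma = pstdev(trace) or 1e-12  # guard against a constant trace
    return [mu + shrink * (x - mu) if abs(x - mu) / sigma > z_thresh else x
            for x in trace]
```

Shrinking flagged samples toward the learned distribution, rather than deleting them, keeps the annotation trace continuous in time.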
“…Ringeval et al. used median filtering with a window width of three samples before creating a single gold standard using EWE [21]. In a 2018 emotion challenge aimed at improving the gold standard [23], Wang et al. argued that slight, secondary errors in annotations can be removed by a moving-average filter [24]. Testing three filtering techniques for smoothing annotation data (Savitzky-Golay, moving-average, and median filters), Thammasan et al. found the moving-average filter the most practical for enhancing emotion recognition performance [25].…”
Section: Introduction
confidence: 99%
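The filtering and fusion steps mentioned in this passage can be sketched with stdlib Python. The three-sample window follows [21]; the function names, the edge handling, and the exact EWE weighting (correlation of each rater with the plain mean trace) are illustrative assumptions and may differ in detail from the cited implementations:

```python
from statistics import mean, median

def moving_average(trace, window=3):
    # Centered moving-average filter; edge samples use a shrunken
    # window so the output keeps the trace length.
    half = window // 2
    return [mean(trace[max(0, i - half):i + half + 1])
            for i in range(len(trace))]

def median_filter(trace, window=3):
    # Median filter with the three-sample window reported in [21].
    half = window // 2
    return [median(trace[max(0, i - half):i + half + 1])
            for i in range(len(trace))]

def pearson(a, b):
    # Plain Pearson correlation between two equal-length traces.
    ma, mb = mean(a), mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a)
           * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den if den else 0.0

def ewe(ratings):
    # Evaluator Weighted Estimator sketch: average the annotator
    # traces, weighting each by its (non-negative) correlation
    # with the plain mean trace.
    mean_trace = [mean(col) for col in zip(*ratings)]
    weights = [max(pearson(r, mean_trace), 0.0) for r in ratings]
    total = sum(weights) or 1.0
    return [sum(w * r[i] for w, r in zip(weights, ratings)) / total
            for i in range(len(mean_trace))]
```

A median filter removes isolated spikes entirely, while the moving average spreads them over the window, which is why the two behave differently on annotator glitches.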