Proceedings of the 2020 International Conference on Multimodal Interaction
DOI: 10.1145/3382507.3417973
EmotiW 2020: Driver Gaze, Group Emotion, Student Engagement and Physiological Signal based Challenges

Cited by 86 publications (48 citation statements); references 14 publications.
“…All the experiments were conducted on an adapted version of the "Video-level Group AFfect" (VGAF) dataset [19] for the EmotiW 2020 AV Group-level sub-challenge [4]. The VGAF is a video-based database that contains labels for emotion and cohesion.…”
Section: A. Data
confidence: 99%
“…The authors wish to acknowledge the EmotiW 2020 Grand Challenge [4] organizers and the authors of the MMIT dataset and pretrained models used in this work.…”
Section: Acknowledgments
confidence: 99%
“…With this in mind, research into the fusion of physiological signals with perceived emotional signals is limited, and within this contribution we suggest that there are potentially two benefits to this: (1) where agreement between raters is lower, replacing less reliable raters with a physiological signal may improve agreement; (2) where only a small number of raters are available, adding a physiological signal to the gold standard may also be fruitful. Physiological signals have been utilised as features [12], or extracted during particular tasks, to better target arousal [13]; however, there has been minimal research on a combined physiological and perceived arousal gold standard. Recently, in the 2021 edition of the Multimodal Sentiment in-the-wild (MuSe) challenge, the arousal signal was fused with EDA and used as a prediction target for the MuSe-Physio sub-challenge [14].…”
Section: Introduction
confidence: 99%
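The citation above describes combining perceived arousal annotations with an EDA signal into a single gold standard, as done in MuSe-Physio [14]. The following is a minimal illustrative sketch of that idea only; the z-scoring, the averaging of rater traces, the blending weight, and the assumption that EDA has already been resampled to the annotation rate are all choices made here for illustration, not the challenge's actual pipeline.

# Hypothetical sketch: blend averaged perceived-arousal ratings with an
# EDA trace to form a fused gold standard. Names, weights, and the
# upstream resampling of EDA are assumptions.
import numpy as np

def zscore(x: np.ndarray) -> np.ndarray:
    # Standardise a 1-D signal so rater traces and EDA share a scale.
    return (x - x.mean()) / (x.std() + 1e-8)

def fused_gold_standard(rater_traces: np.ndarray,
                        eda: np.ndarray,
                        eda_weight: float = 0.5) -> np.ndarray:
    # rater_traces: (n_raters, n_frames) perceived arousal annotations.
    # eda:          (n_frames,) electrodermal activity, assumed already
    #               resampled to the annotation rate.
    perceived = zscore(rater_traces.mean(axis=0))
    physio = zscore(eda)
    return (1.0 - eda_weight) * perceived + eda_weight * physio

# Toy usage: three raters, 100 frames.
rng = np.random.default_rng(0)
traces = rng.normal(size=(3, 100))
eda = rng.normal(size=100)
gold = fused_gold_standard(traces, eda, eda_weight=0.3)
print(gold.shape)  # (100,)

A larger eda_weight leans the target toward the physiological signal, which is one way to act on the paper's suggestion of substituting for less reliable raters.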
“…In this paper, we focus on driver gaze estimation over nine coarse regions in an 'in the wild' setting. We work with the Driver Gaze in the Wild (DGW) dataset [11], released as part of the eighth Emotion Recognition in the Wild Challenge (EmotiW) [1]. It exhibits several modelling challenges, such as diverse ethnic backgrounds among the subjects, varying illumination, and the potential presence of reflections on the face and in the environment.…”
Section: Introduction
confidence: 99%
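The statement above frames driver gaze estimation on DGW as classification into nine coarse zones. The sketch below shows one conventional way such a nine-way classifier could be set up; the ResNet-18 backbone, 224x224 input size, and ImageNet pretraining are assumptions for illustration, not the cited paper's actual model.

# Illustrative sketch only: a 9-way gaze-zone classifier in the style of
# coarse-region estimation on DGW. Backbone and input size are assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_GAZE_ZONES = 9  # nine coarse regions, per the DGW setup

class GazeZoneClassifier(nn.Module):
    def __init__(self, num_zones: int = NUM_GAZE_ZONES):
        super().__init__()
        # ImageNet-pretrained backbone; replace the final layer with a
        # num_zones-way head for gaze-zone prediction.
        self.backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_zones)

    def forward(self, face_crop: torch.Tensor) -> torch.Tensor:
        # face_crop: (batch, 3, 224, 224) normalised face images.
        return self.backbone(face_crop)

model = GazeZoneClassifier()
logits = model(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 9])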