2023
DOI: 10.1007/978-3-031-25075-0_12

ABAW: Learning from Synthetic Data & Multi-task Learning Challenges

Cited by 56 publications (25 citation statements)
References 44 publications
“…We report results by CCC and F1 score for the three sub-challenges in Table 1 on the validation set. For facial expression classification, the baseline authors [10] use a VGG16 network pre-trained on the VGGFACE dataset and obtain softmax probabilities for the 8 expression predictions. In our proposed model, various effective data augmentation strategies are employed to alleviate the problem of sample imbalance during model training.…”
Section: Results
confidence: 99%
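For context on the metrics named in the statement above: CCC (Concordance Correlation Coefficient) is the standard score for the valence-arousal regression task, and F1 for the expression classification task. Below is a minimal sketch of computing both; the array names and dummy data are illustrative, not the cited authors' code.

```python
import numpy as np
from sklearn.metrics import f1_score

def ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Concordance Correlation Coefficient between predictions x and labels y."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()  # population covariance
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Illustrative evaluation: valence/arousal scored by CCC, expressions by macro F1.
valence_pred = np.random.uniform(-1, 1, 500)
valence_true = np.random.uniform(-1, 1, 500)
expr_pred = np.random.randint(0, 8, 500)   # 8 expression classes, as in the quote
expr_true = np.random.randint(0, 8, 500)

print("CCC (valence):", ccc(valence_pred, valence_true))
print("F1 (8-class expression):", f1_score(expr_true, expr_pred, average="macro"))
```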
“…al. [10–19, 28] proposed Aff-Wild2, which contains the above three representations in the wild. The dataset presents various challenges, such as variation in head pose, age, and sex.…”
Section: Introduction
confidence: 99%
“…Multimodal features, including visual, audio, and text features, have been extensively employed in previous ABAW competitions (Zafeiriou et al. 2017; Kollias, Sharmanska, and Zafeiriou 2019; Kollias and Zafeiriou 2021a,b; Kollias, Sharmanska, and Zafeiriou 2021; Kollias 2022, 2023; Kollias et al. 2023). We can improve performance on affective behavior analysis tasks by extracting and analyzing these multimodal features.…”
Section: Related Work, Multimodal Features
confidence: 99%
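A minimal sketch of the kind of multimodal fusion the statement describes: per-modality feature vectors are concatenated and passed to a shared classification head. All dimensions and module names here are hypothetical, not taken from the cited papers.

```python
import torch
import torch.nn as nn

class LateFusionHead(nn.Module):
    """Concatenate per-modality features and classify (dimensions are hypothetical)."""
    def __init__(self, dims=(512, 128, 768), num_classes=8):
        super().__init__()
        self.fusion = nn.Sequential(
            nn.Linear(sum(dims), 256),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(256, num_classes),
        )

    def forward(self, visual, audio, text):
        # Each input: (batch, dim) features from a modality-specific encoder.
        return self.fusion(torch.cat([visual, audio, text], dim=-1))

# Usage with dummy feature batches
head = LateFusionHead()
logits = head(torch.randn(4, 512), torch.randn(4, 128), torch.randn(4, 768))
print(logits.shape)  # torch.Size([4, 8])
```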
“…From a machine learning perspective, AU detection in the wild presents many technical challenges. Most notably, in-the-wild datasets such as Aff-Wild2 [12–21, 32] collect data with huge variation in cameras (resulting in blurred video frames), environments (illumination conditions), and subjects (large variance in expressions, scale, and head poses). Ertugrul et al. [4, 5] demonstrate that deep-learning-based AU detectors have limited generalization ability due to the aforementioned variations.…”
Section: Introduction
confidence: 99%
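The nuisance factors listed above (blur, illumination, scale, and pose) are commonly approximated at training time with standard image augmentations to improve robustness. A hypothetical torchvision pipeline illustrating the idea; the parameters are illustrative, not the cited authors' settings.

```python
from torchvision import transforms

# Hypothetical training-time augmentations mimicking in-the-wild variation:
# blurred frames (GaussianBlur), illumination shifts (ColorJitter),
# and scale/pose variation (RandomResizedCrop, RandomRotation).
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(112, scale=(0.7, 1.0)),
    transforms.RandomRotation(degrees=15),
    transforms.ColorJitter(brightness=0.4, contrast=0.4),
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),  # applied to PIL face crops before batching
])
```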