2020
DOI: 10.48550/arxiv.2002.03238
Preprint

Multi-Label Class Balancing Algorithm for Action Unit Detection

Abstract: Isolated facial movements, so-called Action Units, can describe combined emotions or physical states such as pain. As datasets are limited and mostly imbalanced, we present an approach incorporating a multi-label class balancing algorithm. This submission is subject to the Action Unit detection task of the Affective Behavior Analysis in-the-wild (ABAW) challenge at the IEEE Conference on Face and Gesture Recognition.
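The abstract mentions a multi-label class balancing algorithm but this page does not reproduce it. As a rough illustration only, here is a generic inverse-frequency weighting sketch for multi-label action unit data; the toy label matrix and the weighting scheme are illustrative assumptions, not the authors' method:

```python
import numpy as np

# Toy multi-label AU annotation matrix: rows = frames, columns = action units.
# 1 = AU active, 0 = inactive. Real AU datasets are heavily imbalanced.
labels = np.array([
    [1, 0, 0],
    [1, 0, 0],
    [1, 1, 0],
    [0, 0, 1],
])

# Per-label positive frequency: fraction of frames in which each AU is active.
pos_freq = labels.mean(axis=0)

# Inverse-frequency positive weights, as commonly passed to a weighted
# binary cross-entropy (e.g. the pos_weight argument of a BCE-with-logits
# loss): rare AUs get larger weights so their positives are not drowned out.
pos_weight = (1.0 - pos_freq) / pos_freq

print(pos_weight)  # AU active in 3/4 frames -> 1/3; AUs active in 1/4 -> 3.0
```

Other balancing strategies (e.g. oversampling frames containing rare AUs) follow the same idea of equalizing each label's effective contribution to training.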

Cited by 3 publications (3 citation statements) | References 17 publications
“…The third ABAW Competition, held in conjunction with the IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2022 is a continuation of the first 1 [29] and second 2 [37] ABAW Competitions held in conjunction with the IEEE Conference on Face and Gesture Recognition (IEEE FG) 2021 and with the International Conference on Computer Vision (ICCV) 2022, respectively, which targeted dimensional (in terms of valence and arousal) [1-3, 8, 9, 11, 24, 40, 45, 53, 54, 58, 64, 65, 67], categorical (in terms of the basic expressions) [12,15,16,38,41,42,60] and facial action unit analysis and recognition [7,18,22,30,31,46,50,53]. The third ABAW Competition contains four Challenges, which are based on the same in-the-wild database, (i) the uni-task Valence-Arousal Estimation Challenge; (ii) the uni-task Expression Classification Challenge (for the 6 basic expressions plus the neutral state plus the 'other' category that denotes expressions/affective states other than the 6 basic ones); (iii) the uni-task Action Unit Detection Challenge (for 12 action units); (iv) the Multi-Task Learning Challenge (for joint learning and predicting of valence-arousal, 8 expressions -6 basic plus neutral plus 'other'-and 12 action units).…”
Section: Introduction
confidence: 99%
“…The third ABAW Competition, to be held in conjunction with the IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2022 is a continuation of the first [24] and second [32] ABAW Competitions held in conjunction with the IEEE Conference on Face and Gesture Recognition (IEEE FG) 2021 and with the International Conference on Computer Vision (ICCV) 2022, respectively, which targeted dimensional (in terms of valence and arousal) [2][3][4]8,9,11,21,35,39,47,48,50,[54][55][56], categorical (in terms of the basic expressions) [12,15,16,33,36,37,51] and facial action unit analysis and recognition [7,19,20,25,26,40,44,47]. The third ABAW Competition contains four Challenges, which are based on the same in-the-wild database, (i) the uni-task Valence-Arousal Estimation Challenge; (ii) the uni-task Expression Classification Challenge (for the 6 basic expressions plus the neutral state plus the 'other' category that denotes expressions/affective states other than the 6 basic ones); (iii) the uni-task Action Unit Detection Challenge (for 12 action units); (iv) the Multi-Task Learning Challenge (for joint learning and predicting of valence-arousal, 8 expressions -6 basic plus neutral plus 'other'-and 12 action units).…”
Section: Introduction
confidence: 99%
“…The ABAW2 Competition contains three Challenges, which are based on the same database; these target (i) dimensional affect recognition (in terms of valence and arousal) [16,35,6,22,5,41,65,39,23,3], (ii) categor-Figure 1. The 2D Valence-Arousal Space ical affect classification (in terms of the seven basic expressions) [49,9,24,66,29,15,33,45,37] and (iii) 12 facial action unit detection [50,44,28,13,27,8], in-thewild. These Challenges produce a significant step forward when compared to previous events.…”
Section: Introduction
confidence: 99%