Proceedings of the 30th ACM International Conference on Multimedia 2022
DOI: 10.1145/3503161.3547960

Self-Paced Label Distribution Learning for In-The-Wild Facial Expression Recognition

Cited by 14 publications (5 citation statements) · References 36 publications

Citation statements, ordered by relevance:
“…The RAF-DB dataset is one of the most widely used large-scale real-world FER datasets because it facilitates fair comparisons: all images are pre-cropped and require no additional preprocessing. Results show that our FER-former achieves state-of-the-art performance compared to all other methods, including FER with unconstrained variations (RAN [11], MA-Net [13], IPD-FER [58]) and FER with annotation ambiguity (SCN [30], DMUE [28], KTN [59], EfficientFace [26], SPLDL [29], EASE [32]). In particular, compared to TransFER [23], the previous best result achieved by combining CNN and ViT, FER-former lowers the error rate from 9.09% to 8.70%, a 4.3% relative improvement.…”
Section: A Comparison With State-of-the-Art Methods
confidence: 98%
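Note that the "4.3% improvement" in the quote is a relative reduction in the error rate, not an absolute one. A quick check, using only the accuracies quoted above and repeated in the table below:

    # Relative error-rate reduction on RAF-DB, from the quoted accuracies.
    prev_err = 100.0 - 90.91  # TransFER error rate (%)
    new_err = 100.0 - 91.30   # FER-former error rate (%)
    rel_reduction = (prev_err - new_err) / prev_err
    print(f"{prev_err:.2f}% -> {new_err:.2f}%: {rel_reduction:.1%} relative reduction")
    # 9.09% -> 8.70%: 4.3% relative reduction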
“…
Method               Year   Acc (%)
RAN [11]             2020   86.90
SCN [30]             2020   88.14
DLN [60]             2021   86.40
KTN [59]             2021   88.07
MA-Net [13]          2021   88.40
DMUE [28]            2021   89.42
EfficientFace [26]   2021   88.36
TransFER [23]        2021   90.91
IPD-FER [58]         2022   88.89
CRS-CONT [61]        2022   88.07
SPLDL [29]           2022   89.08
EASE [32]            2022   89.56
FER-former (Ours)    2023   91.30
…”
Section: Methods, Years, Acc (%)
confidence: 99%
“…Therefore, data-driven methods will inevitably suffer from such disturbances and get stuck in bad local minima during training, markedly degrading performance. To tackle this problem, self-paced learning (SPL) (Kumar, Packer, and Koller 2010; Shao et al. 2022; Huang et al. 2021; Pan et al. 2020) was proposed: inspired by human cognitive learning, it trains the model on 'easy' samples before 'hard' ones, which has been shown to alleviate the noise/outlier problem (Li et al. 2021). However, almost all CMH methods treat all instances and features equally while learning hash codes, ignoring the difficulty differences caused by noise or outliers.…”
Section: Introduction
confidence: 99%
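In the classic formulation cited above, SPL augments the training objective with per-sample weights v_i and an "age" parameter λ: it minimizes Σ_i v_i·L_i(w) − λ Σ_i v_i with v_i ∈ {0, 1}, whose closed-form solution admits a sample only when its loss is below λ; growing λ moves the curriculum from easy to hard samples. Below is a minimal sketch of this hard-weighting scheme from Kumar, Packer, and Koller (2010) on hypothetical toy data with injected outliers; the variable names (lam, growth) and the linear model are illustrative assumptions, not details from any of the cited papers:

    # Minimal self-paced learning (SPL) sketch: hard sample weights
    # v_i = 1 if per-sample loss < lam else 0, with lam grown each round
    # so the curriculum proceeds from "easy" to "hard" samples.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy regression data with a block of gross outliers (the "noisy" samples).
    X = rng.normal(size=(200, 5))
    true_w = rng.normal(size=5)
    y = X @ true_w + 0.1 * rng.normal(size=200)
    y[:20] += rng.normal(scale=5.0, size=20)  # inject label noise/outliers

    w = np.zeros(5)
    lam = 0.5     # age parameter: a sample is admitted if its loss < lam
    growth = 1.2  # lam grows each round, admitting harder samples

    for epoch in range(20):
        losses = (X @ w - y) ** 2                # per-sample squared loss
        v = (losses < lam).astype(float)         # closed-form SPL weights
        # Weighted least-squares update on the currently admitted subset:
        # solve (X^T V X) w = X^T V y, with a tiny ridge for stability.
        Xv = X * v[:, None]
        w = np.linalg.solve(Xv.T @ X + 1e-6 * np.eye(5), Xv.T @ y)
        lam *= growth                            # curriculum: easy -> hard

    print(f"admitted {int(v.sum())}/{len(y)} samples; "
          f"weight error {np.linalg.norm(w - true_w):.3f}")

Because the outliers keep large losses even after the model fits the inliers, they stay down-weighted for most of the schedule, which is exactly the noise-alleviation effect the quoted passage attributes to SPL.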