2020 25th International Conference on Pattern Recognition (ICPR), 2021
DOI: 10.1109/icpr48806.2021.9412121
DAIL: Dataset-Aware and Invariant Learning for Face Recognition

Cited by 5 publications (2 citation statements)
References 40 publications
“…Since the class distribution of the training set is usually imbalanced, existing algorithms in long-tailed learning attempt to improve the generalization ability of DNNs towards the medium and tail classes. Generally, existing methods can be summarized into four categories: resampling (Chawla et al. 2002; Buda, Maki, and Mazurowski 2018) under-samples head classes or over-samples tail classes to balance the training data; reweighting (Khan et al. 2017; Cui et al. 2019) adjusts class or sample weights to maintain a balanced training process; ensemble learning (Wang et al. 2020; Cai, Wang, and Hwang 2021; Zhang et al. 2022) uses multiple experts (classifier heads) to enhance representation learning; loss modification (Cao et al. 2019; Hong et al. 2021) modifies logit values by margins in either the training or inference stage, with the representative work "logit adjustment" (Menon et al. 2020) proven to be the Bayes-optimal solution to the long-tailed problem. Nonetheless, current approaches are predominantly tailored to the fully-supervised setting, rendering them unsuitable for semi-supervised learning where the class distribution is unspecified.…”
Section: Long-tailed Learning
confidence: 99%
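To make the "logit adjustment" idea (Menon et al. 2020) referenced in the excerpt above concrete, here is a minimal PyTorch sketch: each class logit is shifted by a scaled log-prior before the cross-entropy, which approximates the Bayes-optimal balanced classifier. The function name and the `tau` parameter are illustrative, not the authors' code.

```python
import torch
import torch.nn.functional as F

def logit_adjusted_loss(logits, targets, class_priors, tau=1.0):
    # class_priors: 1-D tensor of empirical class frequencies (sums to 1).
    # Shifting each logit by tau * log(pi_y) re-weights the softmax so that
    # training optimizes toward the balanced-error (Bayes-optimal) classifier.
    adjusted = logits + tau * class_priors.log().unsqueeze(0)
    return F.cross_entropy(adjusted, targets)

# Toy usage: 3 classes with a long-tailed prior.
priors = torch.tensor([0.8, 0.15, 0.05])
logits = torch.randn(4, 3)           # batch of 4 samples
targets = torch.tensor([0, 1, 2, 2])
loss = logit_adjusted_loss(logits, targets, priors)
```

At inference time the same adjustment can instead be subtracted from the trained model's logits, which is the post-hoc variant of the technique.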
“…Prevalent domain adaptation techniques for face recognition (FR) mainly fall under unsupervised domain adaptation (UDA) [16][17][18][19]. UDA assumes that global alignment between domains will induce per-class alignment, but it requires massive amounts of training data to be effective.…”
Section: Introduction
confidence: 99%
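As an illustration of the "global alignment" objective the excerpt describes, here is a minimal sketch of one common UDA alignment loss, maximum mean discrepancy (MMD), in PyTorch. It represents the family of methods being characterized, not the specific approach of any cited work; the kernel bandwidth `sigma` and feature shapes are assumptions.

```python
import torch

def rbf_kernel(a, b, sigma=1.0):
    # Gaussian (RBF) kernel on pairwise squared Euclidean distances.
    sq_dists = torch.cdist(a, b) ** 2
    return torch.exp(-sq_dists / (2.0 * sigma ** 2))

def mmd_loss(source_feats, target_feats, sigma=1.0):
    # Squared maximum mean discrepancy: pulls the *global* feature
    # distributions of the two domains together, without target labels,
    # on the assumption that per-class alignment follows.
    k_ss = rbf_kernel(source_feats, source_feats, sigma).mean()
    k_tt = rbf_kernel(target_feats, target_feats, sigma).mean()
    k_st = rbf_kernel(source_feats, target_feats, sigma).mean()
    return k_ss + k_tt - 2.0 * k_st

# Toy usage: 128-D face embeddings from source and target datasets.
src = torch.randn(32, 128)
tgt = torch.randn(32, 128)
loss = mmd_loss(src, tgt)
```

Because the loss only matches marginal feature distributions, classes can remain misaligned across domains unless enough data constrains the match, which is the limitation the excerpt points out.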