2021 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv48922.2021.00075
Parametric Contrastive Learning

Cited by 182 publications (121 citation statements)
References 25 publications
“…They show that transforming the classification head, as opposed to re-training it, performs better. PaCo [7], the current state-of-the-art for long-tail classification, combines learnable logit adjustment with contrastive learning [40]. Despite their simplicity, adjusted logit methods (LACE, LDAM, LADE) remain strong solutions to the long-tail problem, typically achieving within 1-2% top-1 accuracy of state-of-the-art ensemble approaches (see Table 1, Table 2).…”
Section: Related Work (mentioning)
confidence: 99%
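The adjusted-logit idea referenced in this excerpt (LACE, LDAM, LADE) amounts to shifting each class logit by a function of its training frequency so that tail classes receive a larger effective margin. Below is a minimal PyTorch sketch of additive logit-adjusted cross-entropy; the function name logit_adjusted_ce, the class_counts tensor, and the temperature tau are illustrative assumptions, not code from the cited papers.

import torch
import torch.nn.functional as F

def logit_adjusted_ce(logits, targets, class_counts, tau=1.0):
    # Additive logit adjustment: shift each class logit by tau * log(prior),
    # which gives rare classes a larger margin during training.
    prior = class_counts.float() / class_counts.sum()
    adjusted = logits + tau * torch.log(prior + 1e-12)
    return F.cross_entropy(adjusted, targets)

# Toy usage with a long-tailed class frequency profile (3 classes).
logits = torch.randn(8, 3)
targets = torch.randint(0, 3, (8,))
class_counts = torch.tensor([1000, 100, 10])
loss = logit_adjusted_ce(logits, targets, class_counts)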
“…Base approaches are largely variants of the same core idea, that of "adjustment", where the learner is encouraged to focus on the tail of the distribution. This can be achieved implicitly, via over/under-weighting samples during training [3,13,20,23] or cluster-based sampling [7], or explicitly via logit [11,38,58] or loss [22,38] modification. Such approaches largely focus on consistency, ensuring that minimizing the training loss corresponds to minimal error on the known, balanced test distribution.…”
Section: Introduction (mentioning)
confidence: 99%
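For the implicit route mentioned in this excerpt (over/under-weighting samples), one common concrete form is class-balanced cross-entropy with per-class weights inversely proportional to frequency. A minimal sketch in PyTorch, again assuming an illustrative class_counts tensor of training frequencies:

import torch
import torch.nn.functional as F

def reweighted_ce(logits, targets, class_counts):
    # Over-weight tail classes: per-class weight proportional to 1 / frequency,
    # rescaled so the mean weight is 1 to keep the loss magnitude comparable.
    inv_freq = 1.0 / class_counts.float()
    weights = inv_freq * class_counts.numel() / inv_freq.sum()
    return F.cross_entropy(logits, targets, weight=weights)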
“…Zhang et al [33] and Hong et al [14] propose recalibration and postprocessing strategies to address the label distribution shift between training and test datasets. Zhong et al [35] rely on label smoothing and mixup strategies, while Cui et al [7] combine cross-entropy with supervised contrastive learning, achieving stronger representations at the cost of more expensive training. Finally, closest to our work, Samuel et al [26] propose to rely on a prototype-based auxiliary loss, where class prototypes are computed at the beginning of every epoch during training.…”
Section: Related Work (mentioning)
confidence: 99%
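The combination described here, cross-entropy plus supervised contrastive learning as in Cui et al. [7], can be illustrated with the generic form below. This is a simplified sketch with one view per sample; it omits PaCo's learnable parametric class centers and rebalancing, and names such as supcon_loss and the weight alpha are assumptions rather than the authors' implementation.

import torch
import torch.nn.functional as F

def supcon_loss(features, labels, temperature=0.1):
    # Simplified supervised contrastive loss: samples sharing a label act as
    # positives for each other; one view per sample, self-pairs excluded.
    features = F.normalize(features, dim=1)
    sim = features @ features.t() / temperature
    n = features.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=features.device)
    sim = sim.masked_fill(self_mask, float('-inf'))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_count = pos_mask.sum(1).clamp(min=1)
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0).sum(1)
    return -(pos_log_prob / pos_count).mean()

def combined_loss(logits, features, labels, alpha=0.5):
    # Cross-entropy for classification plus a weighted contrastive term.
    return F.cross_entropy(logits, labels) + alpha * supcon_loss(features, labels)

In practice, logits would come from a linear classifier head and features from a projection head on the same backbone, which is why such objectives are described as yielding stronger representations at the cost of more expensive training.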
“…Our work focuses on this issue, and is not tied to a rebalancing strategy. Therefore, it provides complementary benefits to approaches that focus on backbone training improvements [7,24,26], as discussed above.…”
Section: Related Work (mentioning)
confidence: 99%