2021 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv48922.2021.01042
Weakly Supervised Representation Learning with Coarse Labels

Cited by 26 publications (39 citation statements); references 17 publications.

Citation statements (ordered by relevance):
“…The CBAT can filter out unreliable targets by setting a threshold. Unlike the fixed thresholds in [11], [48], the CBAT balances the diversity between different categories and adjusts itself adaptively. Generally speaking, a fixed threshold ignores the variance across classes.…”
Section: Class-Balanced Adaptive Threshold
confidence: 99%
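For illustration, here is a minimal sketch of a class-balanced adaptive threshold of the kind this statement describes. It is not the cited paper's implementation: the class name, the exponential-moving-average update, and the momentum parameter are all assumptions; the only grounded idea is keeping one threshold per class that adapts to that class's own confidence instead of using a single fixed cutoff.

```python
import numpy as np

# Sketch of a class-balanced adaptive threshold (CBAT-style) pseudo-label filter.
# Assumption: each class tracks its own threshold as an EMA of the confidences
# predicted for that class, so per-class diversity is respected.

class ClassBalancedAdaptiveThreshold:
    def __init__(self, num_classes, init=0.9, momentum=0.99):
        self.thresholds = np.full(num_classes, init)
        self.momentum = momentum

    def update(self, probs):
        """probs: (N, C) softmax outputs for a batch of unlabeled samples."""
        conf = probs.max(axis=1)      # confidence of the predicted class
        pred = probs.argmax(axis=1)   # predicted class index
        for c in np.unique(pred):
            mask = pred == c
            # EMA update: classes with lower average confidence end up
            # with lower (more permissive) thresholds.
            self.thresholds[c] = (self.momentum * self.thresholds[c]
                                  + (1 - self.momentum) * conf[mask].mean())
        return pred, conf

    def select(self, probs):
        """Keep only pseudo-labels whose confidence clears their class threshold."""
        pred, conf = self.update(probs)
        keep = conf >= self.thresholds[pred]
        return pred[keep], np.flatnonzero(keep)
```

The EMA keeps the thresholds stable across noisy batches; any per-class statistic could stand in for the mean confidence.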
“…Though not directly related to our work, we note that label granularity has been studied in many contexts other than object localization, including action recognition [46], knowledge tracing [14], animal face alignment [26], and fashion attribute recognition [23]. In the context of image classification, prior work has tackled topics like analyzing the emergence of hierarchical structure in trained classifiers [5], identifying patterns in visual concept generalization [44], and training finer-grained image classifiers using only coarse-grained labels [48,42,58,50].…”
Section: Related Work
confidence: 99%
“…Judging from the fine-class stage (Fig. 1, middle to right), if we combine a pre-training set and the support set into a holistic training set, then few-shot fine-grained recognition using a model pre-trained on coarse samples is similar to weakly-supervised learning, and specifically to learning from coarse labels [10,5,11,12], e.g., C2FS [10]. Ristel et.…”
Section: Weakly-Supervised Learning
confidence: 99%
“…We also use angular normalization [10] to improve their synergy. Note that m, n index samples, τ is a temperature parameter, and k⁻_m denotes the intermediate output of the m-th sample, a negative sample in the same class as the n-th sample, a positive sample, so as to capture intra-class cues (fine cues) and reduce unnecessary noise in the subsequent fine-grained classification [11]. L_Con will be small when q_n is similar to k⁺_n and different from k⁻_m.…”
Section: Learning Embedding-Weights Contrastively
confidence: 99%
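The statement describes an InfoNCE-style contrastive loss, so a minimal sketch under that assumption follows, with the quoted angular normalization realized as L2-normalizing the embeddings so that dot products become cosine similarities. The function name, argument shapes, and default temperature are illustrative, not the paper's code.

```python
import numpy as np

def info_nce_loss(q, k_pos, k_negs, tau=0.07):
    """Sketch of the quoted contrastive loss L_Con.

    q      : (D,)   anchor embedding q_n
    k_pos  : (D,)   positive key k+_n
    k_negs : (M, D) negative keys k-_m (other samples from the same coarse class)
    tau    : temperature parameter

    With angular (L2) normalization, dot products are cosine similarities;
    the loss is small when q is close to k_pos and far from every k_neg.
    """
    q = q / np.linalg.norm(q)
    k_pos = k_pos / np.linalg.norm(k_pos)
    k_negs = k_negs / np.linalg.norm(k_negs, axis=1, keepdims=True)

    pos = np.exp(q @ k_pos / tau)             # similarity to the positive
    negs = np.exp(k_negs @ q / tau).sum()     # similarities to negatives
    return -np.log(pos / (pos + negs))
```

Pushing apart same-coarse-class negatives is what forces the embedding to encode the intra-class (fine) cues the statement refers to.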