2021 IEEE Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv48630.2021.00033
From generalized zero-shot learning to long-tail with class descriptors

Cited by 28 publications (14 citation statements)
References 30 publications
“…GZSL can also be combined with reinforcement learning [197], [210], [211] to better tackle new tasks. In addition, GZSL can be extended on several fronts, including multimodal learning [85], [199], [200], multi-label learning [190], multi-view learning [191], weakly supervised learning that progressively incorporates training instances from easy to hard [100], [130], [212], continual learning [110], long-tail learning [213], and online learning for few-shot settings where only a small portion of labeled samples from some classes is available [67].…”

Section: Research Gaps
confidence: 99%
“…Recently, Samuel et al. [28] leveraged class descriptors to facilitate long-tailed classification. They developed a dual network that derives both visual and semantic features from the input image, then fuses the two to boost long-tailed classification performance.…”

Section: Related Work
confidence: 99%
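The dual-network fusion described in the statement above can be sketched roughly as follows. This is a minimal illustration, not the cited paper's implementation: the dimensions, random stand-in weights, and names (`dual_branch_logits`, `W_vis`, `W_sem`, `W_cls`) are all assumptions made here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(in_dim, out_dim):
    # Random weights as stand-ins for trained parameters (illustrative only).
    return rng.standard_normal((in_dim, out_dim)) * 0.01

# Illustrative dimensions, not taken from the paper.
IMG_DIM, VIS_DIM, SEM_DIM, NUM_CLASSES = 512, 128, 64, 10

W_vis = linear(IMG_DIM, VIS_DIM)                 # visual-feature branch
W_sem = linear(IMG_DIM, SEM_DIM)                 # semantic branch (toward a class-descriptor space)
W_cls = linear(VIS_DIM + SEM_DIM, NUM_CLASSES)   # classifier over the fused features

def dual_branch_logits(x):
    """Derive visual and semantic features from the same input,
    fuse them by concatenation, and classify the fused vector."""
    visual = np.maximum(x @ W_vis, 0.0)    # ReLU visual features
    semantic = np.maximum(x @ W_sem, 0.0)  # ReLU semantic features
    fused = np.concatenate([visual, semantic], axis=-1)
    return fused @ W_cls

x = rng.standard_normal((4, IMG_DIM))  # a batch of 4 image embeddings
logits = dual_branch_logits(x)
print(logits.shape)  # (4, 10)
```

Concatenation is only one plausible fusion choice; the key idea the citing paper highlights is that both a visual and a semantic representation are produced per image and combined before classification.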
“…No label is used for the architecture search. In Table 1, we summarize the results of our experiments and compare them against previous representative works on long-tailed distributions: ResNet-32 + Focal loss (Lin et al., 2017), ResNet-32 + Sigmoid (SGM) and Balanced Sigmoid (BSGM) Cross Entropy losses (Cui et al., 2019), LDAM-DRW (Cao et al., 2019), smDragon and VE2 + smDragon (Samuel et al., 2021), and SSNAS (Kaplan & Giryes, 2020). To provide a fair comparison with the SSNAS method, we implement it in a common framework with common hyper-parameters.…”

Section: Evaluation on a Long-Tailed Distribution
confidence: 99%