2021 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv48922.2021.00019

FREE: Feature Refinement for Generalized Zero-Shot Learning

Cited by 148 publications (59 citation statements). References 42 publications.
“…They usually extract global visual features from pre-trained or end-to-end trainable networks, e.g., ResNet [35]. Note that end-to-end models achieve better performance than pre-trained ones because they fine-tune the visual features, thus alleviating the cross-dataset bias between ImageNet and ZSL benchmarks [8], [26].…”
Section: Zero-Shot Learning
Citation type: mentioning (confidence: 99%)
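To make the feature-extraction setup in this excerpt concrete, here is a minimal sketch of taking global visual features from a pre-trained ResNet backbone; the torchvision ResNet-101, the 2048-dimensional pooled output, and the dummy batch are illustrative assumptions, not details from the cited paper.

```python
# Minimal sketch (illustrative assumptions, not the paper's code):
# global visual features from a pre-trained ResNet-101 backbone.
import torch
import torchvision.models as models

backbone = models.resnet101(pretrained=True)
backbone.fc = torch.nn.Identity()   # drop the ImageNet classifier head
backbone.eval()                     # frozen, "pre-trained" setting

# An end-to-end model would instead keep the backbone trainable and
# fine-tune it on the ZSL benchmark, which is what alleviates the
# ImageNet-to-benchmark cross-dataset bias noted in the excerpt.
with torch.no_grad():
    images = torch.randn(4, 3, 224, 224)   # dummy image batch
    global_feats = backbone(images)         # shape: (4, 2048)
```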
“…Since the cross-dataset bias between ImageNet and ZSL benchmarks potentially limits the quality of visual feature extraction [26], [38], we first propose a feature augmentation encoder to refine the visual features of ZSL benchmarks. In addition, previous ZSL methods simply flatten the grid features U(x) ∈ R^{H×W×C} (extracted by a CNN backbone) of a single image into a feature vector, which is further applied to generative models or embedding learning.…”
Section: Feature Augmentation Encoder
Citation type: mentioning (confidence: 99%)
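As a hedged illustration of the grid-feature handling this excerpt describes, the sketch below collapses a CNN grid feature U(x) ∈ R^{H×W×C} into a single feature vector; the 7×7×2048 shape and the two collapsing variants (average pooling vs. full flattening) are assumptions for illustration and do not reproduce the paper's feature augmentation encoder.

```python
# Minimal sketch (assumed shapes): collapsing grid features U(x) ∈ R^{H×W×C}
# into one feature vector, as prior ZSL methods do before feeding it to
# generative models or embedding learning.
import torch

H, W, C = 7, 7, 2048                   # assumed ResNet-style grid for a 224x224 image
grid = torch.randn(1, C, H, W)         # CNN output in (N, C, H, W) layout

# Global average pooling over the H×W grid: one C-dimensional vector per image.
pooled = grid.mean(dim=(2, 3))         # shape: (1, 2048)

# Full flattening that keeps every spatial cell instead of averaging.
flattened = grid.flatten(start_dim=1)  # shape: (1, H*W*C) = (1, 100352)
```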