2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2020
DOI: 10.1109/cvpr42600.2020.00419
Adaptive Subspaces for Few-Shot Learning

Cited by 367 publications (206 citation statements). References 24 publications.
“…ProtoNet uses Euclidean distance, while RelationNet compares an embedding f_φ and query samples using an additional parameterized CNN-based 'relation module'. MetaOptNet [48] and DSN-MR [49] are also metric-based approaches. MetaOptNet provides an end-to-end method with regularized linear classifiers, i.e., ridge regression and SVM.…”

Section: Results and Comparisons
Confidence: 99%
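The nearest-prototype rule attributed to ProtoNet in the excerpt above can be sketched in a few lines. The function name and the dict-based episode format are illustrative assumptions, not the authors' code:

```python
import numpy as np

def protonet_predict(support, query):
    """Nearest-prototype classification with Euclidean distance.

    support: dict mapping class label -> (n_shot, d) array of support embeddings
    query:   (d,) query embedding
    """
    # Each prototype is the mean of a class's support embeddings.
    prototypes = {c: feats.mean(axis=0) for c, feats in support.items()}
    # Predict the class whose prototype is nearest in Euclidean distance.
    return min(prototypes, key=lambda c: np.linalg.norm(query - prototypes[c]))
```

In a real episode the embeddings would come from a trained backbone; here any fixed feature vectors stand in for them.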
“…The linear classifiers can learn better class boundaries using negative examples at a modest increase in computational cost. Simon et al. [49] observed that high-order information is preferred over low-order information to improve the classifier's capability in the low-data regime; hence one hopes a subspace method can form a robust classifier. The authors develop a dynamic classifier that computes a subspace of the feature space for each category, and the features of query samples are projected into the subspace for comparison.…”

Section: Few-Shot Classification via Meta-Learning
Confidence: 99%
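The dynamic subspace classifier described in this excerpt (a per-class subspace of the feature space, with query features projected into it for comparison) can be sketched as below. This is a minimal sketch assuming an SVD-derived orthonormal basis and a subspace dimension smaller than the shot count; the function and variable names are hypothetical, not the paper's implementation:

```python
import numpy as np

def subspace_classify(support, query, dim=2):
    """Classify a query embedding by its distance to per-class subspaces.

    support: dict mapping class label -> (n_shot, d) array of support embeddings
    query:   (d,) query embedding
    dim:     subspace dimensionality (must be < n_shot)
    """
    best_label, best_dist = None, np.inf
    for label, feats in support.items():
        mu = feats.mean(axis=0)          # class mean
        centered = feats - mu
        # Truncated SVD of the centered support set gives an orthonormal
        # basis for the class subspace (top-`dim` right singular vectors).
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        basis = vt[:dim].T               # (d, dim)
        # Distance from the query to the affine subspace mu + span(basis):
        diff = query - mu
        residual = diff - basis @ (basis.T @ diff)
        dist = np.linalg.norm(residual)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label
```

The projection distance rewards queries that lie near a class's principal directions of variation, which is the intuition behind preferring a subspace over a single prototype point.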
“…Most of the early FSL methods [38, 40, 42, 45] utilized a four-layer convolutional network (Conv-4) as the embedding backbone, while more recent models found that such a shallow embedding network might lead to underfitting. In this work, we take ResNet-12, the most popular backbone in the current FSL literature [51, 52, 59], as our embedding network. As illustrated in Figure 4, ResNet-12 is a smaller version of the ResNet [9], containing four residual blocks and generating 512-dimensional embeddings after a global average pooling (GAP).…”

Section: Methods
Confidence: 99%
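The global average pooling step mentioned above, which collapses the backbone's final (channels, H, W) feature map into a flat embedding vector, can be sketched as follows; the function name is illustrative:

```python
import numpy as np

def global_average_pool(fmap):
    """Collapse a (channels, H, W) feature map into a per-channel embedding,
    as done after the last residual block of a ResNet-style backbone."""
    return fmap.mean(axis=(1, 2))
```

With a 512-channel final feature map, this yields the 512-dimensional embedding the excerpt describes, regardless of the spatial resolution H x W.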