2020
DOI: 10.1007/978-3-030-58574-7_8

Embedding Propagation: Smoother Manifold for Few-Shot Classification

Cited by 147 publications (97 citation statements)
References 27 publications
Citation statements: 0 supporting, 97 mentioning, 0 contrasting
“…1. Metric learning methods (i.e., MatchingNets, 113 ProtoNets, 114 RelationNets, 115 Graph neural network (GraphNN), 116 Ridge regression, 117 TransductiveProp, 118 Fine-tuning Baseline, 119 URT, 120 DSN-MR, 121 CDFS, 122 DeepEMD, 123 EPNet, 124 ACC + Amphibian, 125 FEAT, 126 …”
Section: Discussion About Different Meta-learnings (mentioning)
confidence: 99%
“…We can divide meta‐learning methods into three categories 140 : Metric learning methods (i.e., MatchingNets, 113 ProtoNets, 114 RelationNets, 115 Graph neural network (GraphNN), 116 Ridge regression, 117 TransductiveProp, 118 Fine‐tuning Baseline, 119 URT, 120 DSN‐MR, 121 CDFS, 122 DeepEMD, 123 EPNet, 124 ACC + Amphibian, 125 FEAT, 126 MsSoSN + SS + SD + DD, 127 RFS, 128 RFS + CRAT, 129 IDA, 130 LR + ICI, 131 FEAT + MLMT, 132 BOHB, 133 CSPN, 134 SUR, 135 SKD, 136 TAFSSL, 137 TRPN, 138 and TransMatch 139 ) learn a similarity space in which learning is particularly efficient for few‐shot examples. Memory network methods (i.e., Meta Networks, 103 TADAM, 104 MCFS, 105 and MRN 106 ) learn to store “experience” when learning seen tasks and then generalize it to unseen tasks. …”
Section: Methods (mentioning)
confidence: 99%
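The "similarity space" idea shared by the metric-learning family in the excerpt above can be made concrete with a ProtoNets-style sketch: average the support embeddings of each class into a prototype and give every query the label of its nearest prototype. This is a minimal illustration of the general technique, not code from any of the cited papers; the function name, shapes, and the toy episode are assumptions.

```python
# Illustrative prototype-style metric-learning sketch (not from any cited paper):
# one prototype per class = mean of that class's support embeddings; each query
# receives the label of its nearest prototype.
import numpy as np

def prototype_classify(support_emb, support_labels, query_emb):
    """support_emb: (n_support, d); support_labels: (n_support,); query_emb: (n_query, d)."""
    classes = np.unique(support_labels)
    prototypes = np.stack([support_emb[support_labels == c].mean(axis=0) for c in classes])
    # squared Euclidean distance from every query to every prototype
    dists = ((query_emb[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return classes[dists.argmin(axis=1)]

# Toy 2-way, 2-shot episode with three queries in a 4-dimensional embedding space.
rng = np.random.default_rng(0)
support = np.concatenate([rng.normal(0.0, 1.0, (2, 4)), rng.normal(3.0, 1.0, (2, 4))])
labels = np.array([0, 0, 1, 1])
queries = rng.normal(3.0, 1.0, (3, 4))
print(prototype_classify(support, labels, queries))
```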
“…A benchmark on miniImageNet [52] and tieredImageNet [53] shows superior performance compared to other state-of-the-art FSL algorithms especially using zero to five shots, which means it works especially well if only few labels are available. However, a typical issue with FSL is that the training and test samples are disjoint [18]. This causes the feature extractor of a TPN to produce embeddings that are seemingly uncorrelated for unseen classes.…”
Section: Missing Labels (mentioning)
confidence: 99%
“…This manifests as a disadvantage in terms of robustness when the TPN tries to propagate the labels during graph construction. The Embedding Propagation Network (EPNet) [18] addresses this shortcoming of TPNs by applying the propagation at embedding creation time, thus locating an image's embedding close to images with similar features in embedding space, resulting in closer labels in their respective space. EPNet achieves superior performance over the TPN architecture in one- and five-shot benchmarking.…”
Section: Missing Labels (mentioning)
confidence: 99%
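The propagation-at-embedding-creation-time step described above can be sketched with the usual graph-smoothing formulation: build an RBF similarity graph over all episode embeddings, normalize it symmetrically, and push the embeddings through the closed-form propagator (I - alpha * L)^(-1) so each embedding moves toward its neighbours. This is a hedged illustration of the general technique, not the exact EPNet implementation; alpha, sigma, and the function name are placeholders.

```python
# Sketch of graph-based embedding propagation (general technique, not the exact EPNet
# code): each embedding is replaced by a weighted average of its neighbours, computed
# in closed form with the propagator (I - alpha * L)^(-1). alpha and sigma are
# placeholder hyperparameters.
import numpy as np

def propagate_embeddings(Z, alpha=0.5, sigma=1.0):
    """Z: (n, d) embeddings of all support + query images in an episode."""
    n = Z.shape[0]
    sq_dists = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    A = np.exp(-sq_dists / sigma ** 2)                   # RBF adjacency
    np.fill_diagonal(A, 0.0)                             # no self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1) + 1e-8)
    L = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]    # symmetric normalization
    P = np.linalg.inv(np.eye(n) - alpha * L)             # closed-form propagator
    return P @ Z                                         # smoothed embeddings, (n, d)

Z = np.random.default_rng(1).normal(size=(10, 16))
print(propagate_embeddings(Z).shape)                     # (10, 16)
```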
“…These methods learn a support vector based on the feature representation of training images in order to create a cluster for each label; queries are labeled based on the label of their nearby cluster. Label propagation [97] and embedding propagation [131] are also other techniques researchers are using to address the few-shot learning problem.…”
Section: Experiments and Results (mentioning)
confidence: 99%
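For the label-propagation technique mentioned in the excerpt, the classical closed-form recipe reads predictions off F* = (I - alpha * S)^(-1) Y, where S is the normalized similarity graph over all embeddings and Y holds one-hot labels for support rows and zeros for query rows. The sketch below illustrates that general recipe, not a specific cited implementation; alpha and sigma are placeholder hyperparameters.

```python
# Sketch of classical graph label propagation (general closed-form recipe, not a
# specific cited implementation): F* = (I - alpha * S)^(-1) Y, with S the normalized
# similarity graph and Y one-hot support labels (zero rows for queries).
import numpy as np

def label_propagation(Z, Y, alpha=0.9, sigma=1.0):
    """Z: (n, d) embeddings; Y: (n, c) one-hot labels for support rows, zeros for queries."""
    sq_dists = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    W = np.exp(-sq_dists / (2.0 * sigma ** 2))           # RBF similarity graph
    np.fill_diagonal(W, 0.0)
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1) + 1e-8)
    S = d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]    # symmetric normalization
    F = np.linalg.solve(np.eye(len(Z)) - alpha * S, Y)   # propagated class scores
    return F.argmax(axis=1)                              # predicted class per row
```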