2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.00883
Few-Shot Learning via Embedding Adaptation With Set-to-Set Functions

Cited by 534 publications (288 citation statements)
References 15 publications
“…1. Metric learning methods (i.e., MatchingNets [113], ProtoNets [114], RelationNets [115], Graph neural network (GraphNN) [116], Ridge regression [117], TransductiveProp [118], Fine-tuning Baseline [119], URT [120], DSN-MR [121], CDFS [122], DeepEMD [123], EPNet [124], ACC + Amphibian [125], FEAT [126], …”
Section: Discussion About Different Meta-learnings
confidence: 99%
“…We can divide meta-learning methods into three categories [140]: Metric learning methods (i.e., MatchingNets [113], ProtoNets [114], RelationNets [115], Graph neural network (GraphNN) [116], Ridge regression [117], TransductiveProp [118], Fine-tuning Baseline [119], URT [120], DSN-MR [121], CDFS [122], DeepEMD [123], EPNet [124], ACC + Amphibian [125], FEAT [126], MsSoSN + SS + SD + DD [127], RFS [128], RFS + CRAT [129], IDA [130], LR + ICI [131], FEAT + MLMT [132], BOHB [133], CSPN [134], SUR [135], SKD [136], TAFSSL [137], TRPN [138], and TransMatch [139]) learn a similarity space in which learning is particularly efficient for few-shot examples. Memory network methods (i.e., Meta Networks [103], TADAM [104], MCFS [105], and MRN [106]) learn to store “experience” when learning seen tasks and then generalize it to unseen tasks. …”
Section: Methods
confidence: 99%
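As a minimal illustration of the metric-learning family discussed in the excerpt above (a sketch under simplifying assumptions, not code from any of the cited papers), a ProtoNets-style classifier averages the support embeddings of each class into a prototype and labels queries by nearest prototype; all function and variable names here are illustrative:

```python
import numpy as np

def prototypes(support_emb, support_labels, n_classes):
    # One prototype per class: the mean embedding of its support examples.
    return np.stack([support_emb[support_labels == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_emb, protos):
    # Squared Euclidean distance to each prototype; nearest wins.
    dists = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)

# Toy 2-way 2-shot episode in a 3-D embedding space.
rng = np.random.default_rng(0)
support = np.concatenate([rng.normal(0.0, 0.1, (2, 3)),   # class 0 near origin
                          rng.normal(1.0, 0.1, (2, 3))])  # class 1 near (1,1,1)
labels = np.array([0, 0, 1, 1])
protos = prototypes(support, labels, 2)
queries = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
print(classify(queries, protos))  # → [0 1]
```

In this family, meta-learning shapes the embedding function so that such a simple nearest-prototype rule works well on novel classes.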
“…TADAM [51] boosts the performance of ProtoNets by metric scaling, task-conditioning, and auxiliary task co-training. MetaOptNet [52] and FEAT [54] follow the same spirit of learning task-specific embeddings to ensure the features are more discriminative for a given task.…”
Section: Related Work
confidence: 99%
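FEAT's central idea of adapting task-agnostic embeddings with a set-to-set function, instantiated in the paper as self-attention over the set of class prototypes, can be sketched roughly as follows. This is a simplified single-head approximation with made-up weight names, not the authors' implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def adapt_prototypes(protos, Wq, Wk, Wv):
    """Set-to-set adaptation: each prototype attends to all prototypes
    in the current task, so the adapted embeddings become contrastive
    for exactly the classes present in this episode."""
    Q, K, V = protos @ Wq, protos @ Wk, protos @ Wv
    attn = softmax(Q @ K.T / np.sqrt(K.shape[1]))  # (n_class, n_class)
    return protos + attn @ V  # residual keeps the original embedding

# Toy 3-way task with 4-D prototypes and random projection weights.
rng = np.random.default_rng(1)
protos = rng.normal(size=(3, 4))
Wq, Wk, Wv = (rng.normal(scale=0.1, size=(4, 4)) for _ in range(3))
adapted = adapt_prototypes(protos, Wq, Wk, Wv)
print(adapted.shape)  # → (3, 4)
```

Because attention is permutation-invariant over its input set, the same adaptation function applies to any task regardless of how many classes it contains or how they are ordered.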
“…Hallucinator [47, 2018], Saliency Network [48, 2019]. Learning task-specific features: TADAM [22, 2018], LGM-Net [27, 2019], CTM [39, 2019], FEAT [28, 2020], XtarNet [40, 2020]. Learning via multi-task learning: Self-supervised FSL [49, 2019], TADAM [22, 2018]. Meta-learning for semi-supervised few-shot classification…”
Section: Learning With Data Augmentation
confidence: 99%