Prototype Rectification for Few-Shot Learning
2020 · DOI: 10.1007/978-3-030-58452-8_43

Cited by 186 publications (98 citation statements) · References 13 publications

“…[5] proposed transductive fine-tuning, which pursues outputs with a peaked posterior (low Shannon entropy) and introduces a hardness metric to deliver a standardized evaluation protocol. [26] proposed prototype rectification, which reduces the intra-class and cross-class bias of class prototypes and justifies the method theoretically. The synthetic information bottleneck (SIB) [12] introduced an empirical Bayes approach and a two-network architecture, consisting of a synthetic gradient network and an initialization network, to perform synthetic gradient descent.…”
Section: Related Work
confidence: 69%
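
The bias-reduction idea in [26] can be made concrete with a short sketch. The following minimal NumPy illustration is in the spirit of prototype rectification, not the paper's exact formulation: the mean-shift correction, temperature value, and confidence weighting are assumptions for illustration.

```python
import numpy as np

def rectify_prototypes(support, y_support, query, n_way, temp=10.0):
    """Sketch of prototype rectification: reduce cross-class bias by
    shifting query features, then reduce intra-class bias by folding
    pseudo-labeled queries into each class prototype."""
    # Cross-class bias reduction: shift query features toward the
    # support distribution by the mean difference of the two sets.
    query = query + (support.mean(axis=0) - query.mean(axis=0))

    # Basic prototypes: per-class mean of the support features.
    protos = np.stack([support[y_support == c].mean(axis=0)
                       for c in range(n_way)])
    protos = protos / np.linalg.norm(protos, axis=1, keepdims=True)

    # Intra-class bias reduction: pseudo-label queries by cosine
    # similarity (features assumed L2-normalized) and weight them
    # by a temperature-scaled softmax confidence.
    sims = query @ protos.T
    conf = np.exp(temp * sims)
    conf = conf / conf.sum(axis=1, keepdims=True)
    pseudo = sims.argmax(axis=1)

    rectified = []
    for c in range(n_way):
        feats = np.concatenate([support[y_support == c], query[pseudo == c]])
        w = np.concatenate([np.ones((y_support == c).sum()),
                            conf[pseudo == c, c]])
        rectified.append((feats * w[:, None]).sum(axis=0) / w.sum())
    return np.stack(rectified)
```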
“…We mainly investigated the alternating direction method (ADM) version of the TIM algorithm, which is faster than the gradient descent (GD) version. We added a prototype estimation technique [26], [59] to TIM, which further improved the 1-shot classification accuracy.…”
Section: Methods
confidence: 99%
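
For reference, TIM's transductive objective combines supervised cross-entropy on the support set with the mutual information between query inputs and their predicted labels. Below is a minimal NumPy sketch of that objective; the logits source and the alpha weighting are assumptions, and the ADM solver itself is not shown.

```python
import numpy as np

def tim_objective(logits_support, y_support, logits_query, alpha=1.0):
    """Sketch of the TIM objective: cross-entropy on the support set
    minus the mutual information I(X; Y) = H(Y) - H(Y|X) on queries."""
    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    p_s = softmax(logits_support)
    ce = -np.log(p_s[np.arange(len(y_support)), y_support] + 1e-12).mean()

    p_q = softmax(logits_query)
    marginal = p_q.mean(axis=0)
    # Marginal entropy H(Y): encourages balanced query predictions.
    h_marginal = -(marginal * np.log(marginal + 1e-12)).sum()
    # Conditional entropy H(Y|X): encourages confident query predictions.
    h_cond = -(p_q * np.log(p_q + 1e-12)).sum(axis=1).mean()

    # Minimizing this trades off support fit against query mutual information.
    return ce - alpha * (h_marginal - h_cond)
```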
“…We can divide meta-learning methods into three categories [140]: Metric learning methods (i.e., MatchingNets [113], ProtoNets [114], RelationNets [115], Graph neural network (GraphNN) [116], Ridge regression [117], TransductiveProp [118], Fine-tuning Baseline [119], URT [120], DSN-MR [121], CDFS [122], DeepEMD [123], EPNet [124], ACC + Amphibian [125], FEAT [126], MsSoSN + SS + SD + DD [127], RFS [128], RFS + CRAT [129], IDA [130], LR + ICI [131], FEAT + MLMT [132], BOHB [133], CSPN [134], SUR [135], SKD [136], TAFSSL [137], TRPN [138], and TransMatch [139]) learn a similarity space in which learning is particularly efficient for few-shot examples. Memory network methods (i.e., Meta Networks [103], TADAM [104], MCFS [105], and MRN [106]) learn to store “experience” when learning seen tasks and then generalize it to unseen tasks.…”
Section: Methods
confidence: 99%
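
As a concrete instance of the metric-learning family, here is a minimal sketch of ProtoNets-style nearest-prototype classification; the feature extractor and episode shapes are assumed.

```python
import numpy as np

def protonet_predict(support, y_support, query, n_way):
    """Sketch of ProtoNets-style classification: class prototypes are
    support means, and each query is assigned to the class whose
    prototype is nearest in squared Euclidean distance."""
    protos = np.stack([support[y_support == c].mean(axis=0)
                       for c in range(n_way)])
    # Pairwise squared distances between queries and prototypes.
    d2 = ((query[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)  # nearest prototype wins
```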