2022
DOI: 10.1109/tnnls.2021.3084733
Consistent Meta-Regularization for Better Meta-Knowledge in Few-Shot Learning

Cited by 31 publications (6 citation statements); References 20 publications.
“…To evaluate the effectiveness of TPFSL, we compared its performance against seven different benchmarks, including MCFSL [30], SUFSL [46], MAMIFSL [47], DCRFSL [35], IFSL [48], MK-FSL [49], and ProtoNet [50].…”
Section: Methods (mentioning)
confidence: 99%
“…[Quoted table fragment: per-method few-shot accuracy (mean ± confidence interval, two columns per method) for ProtoNet [50], IFSL [48], MAMIFSL [47], MKFSL [49], DCRFSL [35], MCFSL [30], and SUFSL [46] over three evaluation settings; several values are truncated in the extracted text.] Euclidean distance: Euclidean distance is a prevalent distance metric utilized in FSL.…”
Section: SARS-CoV-2 (mentioning)
confidence: 99%
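The quoted passage points to Euclidean distance as a prevalent metric in few-shot learning, as used by prototype-based methods such as ProtoNet [50]. Below is a minimal sketch of that idea only; the function name `euclidean_logits`, the tensor shapes, and the choice of squared distance are illustrative assumptions rather than details taken from the cited papers.

import torch

def euclidean_logits(query_emb, support_emb, support_labels, n_way):
    # Class prototype = mean embedding of the support samples of that class.
    prototypes = torch.stack([
        support_emb[support_labels == c].mean(dim=0) for c in range(n_way)
    ])                                                   # (n_way, dim)
    # Squared Euclidean distance from each query embedding to each prototype.
    dists = torch.cdist(query_emb, prototypes, p=2) ** 2  # (n_query, n_way)
    # Negative distances serve as classification logits: the closest
    # prototype receives the highest score.
    return -dists

In a ProtoNet-style pipeline these logits would be fed to a cross-entropy loss over the query set; the snippet only shows the distance-to-prototype step the quote refers to.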
“…Moreover, a concept discriminator was designed to recognize different images. Tian et al [125] proposed a new consistent meta-regularization (Con-MetaReg) to enhance the learning ability of meta-learning models. Specifically, a base learner is trained on the support set, and another learner is then trained on a novel query set.…”
Section: Optimization-based Task-specific Feature Representation Lear... (mentioning)
confidence: 99%
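The quoted description of Con-MetaReg [125] outlines two learners, one adapted on the support set and one on the query set, with a regularizer encouraging consistency between them. The following is a minimal sketch of that structure only, assuming a PyTorch classifier; the function name, the `consistency_weight` parameter, the parameter-space MSE discrepancy, and the omission of higher-order gradients are simplifying assumptions, not the authors' exact formulation.

import copy
import torch
import torch.nn.functional as F

def con_metareg_objective(model, support_x, support_y, query_x, query_y,
                          inner_lr=0.01, consistency_weight=0.1):
    # Adapt one copy of the meta-model on the support set (the "base learner").
    base = copy.deepcopy(model)
    base_opt = torch.optim.SGD(base.parameters(), lr=inner_lr)
    base_opt.zero_grad()
    F.cross_entropy(base(support_x), support_y).backward()
    base_opt.step()

    # Adapt a second copy on the query set (the "another learner" in the quote).
    other = copy.deepcopy(model)
    other_opt = torch.optim.SGD(other.parameters(), lr=inner_lr)
    other_opt.zero_grad()
    F.cross_entropy(other(query_x), query_y).backward()
    other_opt.step()

    # Consistency regularizer: penalize disagreement between the two adapted
    # learners (parameter-space MSE is an illustrative choice of discrepancy).
    consistency = sum(F.mse_loss(p, q) for p, q in
                      zip(base.parameters(), other.parameters()))

    # Query loss of the support-adapted learner plus the consistency term.
    # A full meta-learner would backpropagate this through the inner updates
    # (higher-order gradients), which this simplified sketch does not do.
    return F.cross_entropy(base(query_x), query_y) + consistency_weight * consistency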
“…While ANIL offers a streamlined architecture and inherent computational advantages, it faces significant challenges that hinder its flexibility and generalizability in the context of meta-learning. These obstacles may include limitations in handling diverse and complex data types, difficulties in adapting to new and unseen scenarios with fewer data, and challenges in effectively transferring knowledge across different domains [5].…”
Section: Introduction (mentioning)
confidence: 99%