2023
DOI: 10.1007/978-3-031-26348-4_4
Few-shot Metric Learning: Online Adaptation of Embedding for Retrieval

Cited by 4 publications (2 citation statements)
References 32 publications
“…Meta learning [50] has also been introduced into metric learning [51], where the training set is split into multiple subsets and a meta metric is learned across the different subsets (tasks). Recently, a few-shot metric learning approach [29] was designed to adapt the metric space by rectifying channels of intermediate layers, and a universal metric learning method was proposed in [30] using prompt learning.…”
Section: TML via Metric Approximation
confidence: 99%
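The channel-rectification idea attributed to [29] can be illustrated with a minimal sketch: re-weight each embedding channel using only a few labeled support examples, so that discriminative channels dominate the retrieval metric. The Fisher-style variance ratio below is an assumed stand-in for illustration, not the actual method of [29]; the function name and normalization are hypothetical.

```python
import numpy as np

def channel_rectification(z, labels, eps=1e-8):
    """Per-channel scaling learned from a few-shot support set:
    weight each embedding channel by its between-class vs. within-class
    variance (a Fisher-style heuristic standing in for the learned
    channel rectification described in the citation)."""
    classes = np.unique(labels)
    mu = z.mean(axis=0)                       # global mean per channel
    within = np.zeros(z.shape[1])
    between = np.zeros(z.shape[1])
    for c in classes:
        zc = z[labels == c]
        mc = zc.mean(axis=0)                  # class mean per channel
        within += ((zc - mc) ** 2).sum(axis=0)
        between += len(zc) * (mc - mu) ** 2
    gamma = between / (within + eps)
    return gamma / gamma.mean()               # normalize scales around 1

# usage: rescale query/gallery embeddings channel-wise before retrieval
# z_adapted = z * channel_rectification(z_support, y_support)
```

Channels whose class means are well separated relative to their within-class spread receive scales above 1, sharpening the adapted metric space toward the support classes.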
“…To encourage G_att to generate a more representative semantic embedding for an unseen class, we used the idea of building a distance metric in metric learning. Metric learning aims to learn a distance metric for a type of input data that conforms to semantic distance measures between the data instances [35]; this idea has been explored and applied in both few-shot learning [35] and zero-shot learning [36,37]. Inspired by previous work, we propose the semantic-relevant self-adaptive margin center loss (SEMC loss, L_SEMC) to constrain G_att.…”
Section: Loss Design 2.3.1 Semantic-relevant Self-adaptive Margin Center Loss
confidence: 99%
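The SEMC loss itself is specific to the citing paper, but its building block, a center loss that only penalizes embeddings beyond some margin from their class center, can be sketched generically. The fixed margin below is a simplifying assumption in place of the paper's semantic-relevant self-adaptive margin; the function and variable names are hypothetical.

```python
import numpy as np

def margin_center_loss(z, labels, centers, margin=0.5):
    """Center loss with a hinge margin: penalize each embedding's
    distance to its class center only beyond `margin` (a simplified,
    fixed-margin stand-in for the self-adaptive margin in L_SEMC)."""
    d = np.linalg.norm(z - centers[labels], axis=1)  # per-sample distance
    return np.maximum(d - margin, 0.0).mean()        # hinge, averaged
```

Embeddings already within the margin of their center contribute nothing, so the loss pulls classes into compact clusters without collapsing them to a point; the self-adaptive variant would set the margin per class from semantic relevance rather than fixing it.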