2023
DOI: 10.1109/tip.2023.3242148

Adaptive Hierarchical Similarity Metric Learning With Noisy Labels

Cited by 18 publications (3 citation statements)
References 44 publications
“…They also propose a hyperbolic version of Reciprocal Point Learning (Chen et al, 2020a) to provide extra-class space for known categories in the few-shot learning stage. Yan et al (2023) also explore hyperbolic metric learning, incorporating noise-insensitive and adaptive hierarchical similarity to handle noisy labels and multi-level relations. Kim et al (2022) add a hierarchical regularization term on top of the metric learning approaches, with the goal of learning hierarchical ancestors in hyperbolic space without any annotation.…”
Section: Hyperbolic Metric Learning (mentioning)
Confidence: 99%
“…Self-supervised learning aims to acquire intermediate features of superior quality that can be effectively applied to various downstream tasks. Existing approaches in self-supervised learning have predominantly revolved around contrastive methods (Chen et al 2022; Yan et al 2023a). MoCo (He et al 2020) built a dynamic dictionary with a queue and a momentum encoder to enhance training efficacy.…”
Section: Related Work (mentioning)
Confidence: 99%
“…The core concept of sample-wise similarity learning involves learning representations for individual data samples, aiming to precisely reflect the inherent similarities between them [149,174]. This approach primarily concentrates on deciphering the inherent relationships within the data, considering each data point in relation to every other point in the sample space.…”
Section: Knowledge-guided Pre-training (mentioning)
Confidence: 99%