2020
DOI: 10.1007/978-3-030-58539-6_33
A Unifying Mutual Information View of Metric Learning: Cross-Entropy vs. Pairwise Losses

Cited by 94 publications (99 citation statements)
References 29 publications
“…The RSA catalogue allowed us to contrast models on a completely unseen, expertly labelled set. First, the results show improvement on the state-of-the-art in galaxy classification [11] using deep metric learning techniques [16]. These results are evidenced by improved accuracy not equating to improved, learned representations.…”
Section: Introduction
confidence: 88%
“…This method requires large amounts of training samples similar to the ones expected when testing the model. On the other hand, metric learning proposes the idea of learning a complex, non-linear mapping [16]. This mapping maps high-dimensional input data to a lower-dimension manifold called an embedding space [16].…”
Section: E. Deep Metric Learning
confidence: 99%
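The excerpt above describes metric learning as a non-linear mapping from high-dimensional inputs to a lower-dimensional embedding space in which distances are compared. A minimal NumPy sketch of that idea, using a toy two-layer network as the mapping (all weights, dimensions, and function names here are illustrative, not taken from the cited works):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy non-linear mapping f: 128-d input -> 8-d embedding,
# a two-layer network with randomly initialised weights.
W1 = rng.standard_normal((128, 32)) * 0.1
W2 = rng.standard_normal((32, 8)) * 0.1

def embed(x):
    """Map a 128-d input to an 8-d point in the embedding space."""
    return np.maximum(x @ W1, 0.0) @ W2  # ReLU hidden layer

def pairwise_distance(x1, x2):
    """Euclidean distance between two inputs, measured in the embedding space."""
    return float(np.linalg.norm(embed(x1) - embed(x2)))

x_a = rng.standard_normal(128)
x_b = rng.standard_normal(128)
print(pairwise_distance(x_a, x_b))  # distance on the low-dimensional manifold
```

In actual deep metric learning the weights would be trained with a pairwise loss so that same-class inputs land close together and different-class inputs land far apart; here they are fixed only to show the shape of the mapping.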
“…Here we present some of the basic objectives used in visual transfer learning using KG, which can be augmented with additional regularization terms or hyperparameters. Although work [13,73] showed that the objectives have a smaller impact on the learned DNN than suspected, there are configurations of visual and semantic embedding space that only allow certain objectives to be applied. We define l ∈ ℝ^K as the network's output (logits) vector, and t ∈ {0, 1}^K as the one-hot encoded vector of targets, where ‖t‖₁ = 1.…”
Section: Training Objectives For Joint Embeddings
confidence: 99%
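The excerpt above defines a logits vector l and a one-hot target vector t with a single entry equal to 1. A minimal NumPy sketch of the standard softmax cross-entropy objective over these quantities (function names and example values are illustrative, not from the cited work):

```python
import numpy as np

def softmax(l):
    z = np.exp(l - l.max())  # shift by the max logit for numerical stability
    return z / z.sum()

def cross_entropy(l, t):
    """CE between logits l (length K) and a one-hot target t (||t||_1 = 1)."""
    p = softmax(l)
    return float(-(t * np.log(p)).sum())

K = 4
l = np.array([2.0, 0.5, -1.0, 0.1])   # network output (logits)
t = np.zeros(K); t[0] = 1.0           # one-hot encoded target

print(cross_entropy(l, t))
```

Because t is one-hot, the sum collapses to −log p_k for the true class k, which is the usual form of the loss.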
“…Compared with the variance loss function, this method overcomes the problem of weights and biases updating too slowly. The updating of weights and biases is driven by the error [37,38]. For this reason, when the error is large, the weights update very quickly.…”
Section: Cross Entropy
confidence: 99%
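The behaviour described in the excerpt above follows from a well-known property of softmax cross-entropy: the gradient with respect to the logits is softmax(l) − t, i.e. directly proportional to the prediction error, so a confidently wrong prediction produces a large update. A small NumPy illustration (all values and names are illustrative):

```python
import numpy as np

def softmax(l):
    z = np.exp(l - l.max())
    return z / z.sum()

def ce_grad_wrt_logits(l, t):
    """For softmax cross-entropy, dL/dl = softmax(l) - t:
    the gradient is exactly the prediction error."""
    return softmax(l) - t

t = np.array([1.0, 0.0, 0.0])          # true class is index 0

confident_wrong = np.array([-4.0, 4.0, 0.0])  # large error -> large gradient
nearly_correct  = np.array([4.0, -4.0, 0.0])  # small error -> small gradient

print(np.abs(ce_grad_wrt_logits(confident_wrong, t)).sum())
print(np.abs(ce_grad_wrt_logits(nearly_correct, t)).sum())
```

This is the contrast with a squared-error (variance) loss, whose gradient also carries a sigmoid/softmax derivative factor that shrinks toward zero on saturated, confidently wrong outputs and thus slows learning.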