2018
DOI: 10.48550/arxiv.1811.12649
Preprint

Classification is a Strong Baseline for Deep Metric Learning

Cited by 46 publications (55 citation statements)
References 0 publications
“…In most cases, the training process consists of multiplying the weight matrix with the embedding vectors to obtain logits, and then applying a certain loss function to the logits. The most straightforward one is the normalized softmax loss [90,141,165]. It is identical to the cross-entropy loss with L2-normalized columns of the weight matrix.…”
Section: Supervision for Metric Learning, a) Full Supervision (mentioning)
confidence: 99%
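
The normalized softmax loss quoted above is straightforward to implement. Below is a minimal PyTorch sketch, assuming a learnable class weight matrix and a temperature hyperparameter; the class name and the temperature value are illustrative, not taken from the cited papers:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NormSoftmaxLoss(nn.Module):
    """Normalized softmax: cross entropy on cosine-similarity logits."""

    def __init__(self, embed_dim: int, num_classes: int, temperature: float = 0.05):
        super().__init__()
        # One learnable vector per class; rows here correspond to the
        # columns of W in the quoted notation.
        self.weight = nn.Parameter(torch.randn(num_classes, embed_dim))
        self.temperature = temperature

    def forward(self, embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # L2-normalize embeddings and class vectors so the logits are
        # cosine similarities.
        z = F.normalize(embeddings, dim=1)
        w = F.normalize(self.weight, dim=1)
        logits = z @ w.t() / self.temperature
        # Identical to cross entropy with L2-normalized weight columns.
        return F.cross_entropy(logits, labels)
```

The temperature scaling is a common practical addition; the plain normalized softmax described in the quote corresponds to a temperature of 1.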
“…The most widely used classification loss function, the softmax loss, has been re-evaluated as a competitive objective function in metric learning [48,2]. The softmax loss is used to optimize the network f and the class weight W:…”
Section: Preliminary (mentioning)
confidence: 99%
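
The formula itself is truncated in the excerpt. For reference, the conventional softmax (cross-entropy) objective over an embedding network f and class weight matrix W is usually written as follows; this is the standard textbook form, not necessarily the citing paper's exact equation:

```latex
L_{\mathrm{softmax}} = -\frac{1}{N} \sum_{i=1}^{N}
  \log \frac{\exp\!\left(W_{y_i}^{\top} f(x_i)\right)}
            {\sum_{j=1}^{C} \exp\!\left(W_{j}^{\top} f(x_i)\right)}
```

where N is the batch size, C the number of classes, and y_i the label of sample x_i.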
“…Recall@K (%) is reported. For CUB200 and CARS196, cropped images with bounding box information are used. We follow the same training and test split as [9,24,60] for fair comparisons.…”
Section: Trick (mentioning)
confidence: 99%
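
Since Recall@K is the evaluation metric referenced above, a brief sketch of how it is typically computed on metric learning benchmarks may help. This NumPy implementation is illustrative only and assumes L2-normalized test embeddings; it is not the cited papers' evaluation code:

```python
import numpy as np

def recall_at_k(embeddings: np.ndarray, labels: np.ndarray, ks=(1, 2, 4, 8)):
    """Recall@K in percent: fraction of queries whose top-k retrieved
    items contain at least one example of the same class."""
    # Pairwise cosine similarities (embeddings assumed L2-normalized).
    sims = embeddings @ embeddings.T
    np.fill_diagonal(sims, -np.inf)     # a query must not retrieve itself
    ranked = np.argsort(-sims, axis=1)  # nearest neighbors first
    results = {}
    for k in ks:
        # Hit if any of the top-k retrieved items shares the query's label.
        hits = (labels[ranked[:, :k]] == labels[:, None]).any(axis=1)
        results[k] = 100.0 * hits.mean()
    return results
```

Recall@1/2/4/8 are the values customarily reported for CUB200 and CARS196 in the deep metric learning literature.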