2018
DOI: 10.1016/j.knosys.2018.01.026
Constraint selection in metric learning

Cited by 13 publications (13 citation statements), published 2018–2024. References 24 publications.
“…For example, for a two-class problem with 100 instances in each class, the number of possible triplets is 1,980,000. Since dealing with such a huge number of triplets causes prohibitive computation, a small subset of triplets is sometimes used in practice (e.g., [10]), though the optimality of such a sub-sampling strategy is not clearly understood. Our safe triplet screening enables the identification of triplets that can be safely removed from the optimization problem without losing the optimality of the resulting metric.…”
Section: Introduction (mentioning)
confidence: 99%
“…The Manhattan distance is used instead of the Euclidean distance. We do not need to consider the absolute values of w^(t,k) because condition (5) restricts the weights to non-negative values. Moreover, we rewrite the second term in the following form:…”
Section: The Disdf Training and Testing (mentioning)
confidence: 99%
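As an illustration of the statement above, a weighted Manhattan distance with non-negative weights might be sketched as follows (the function name and argument shapes are assumptions; the non-negativity check stands in for the paper's condition (5)):

```python
import numpy as np

def weighted_manhattan(x, y, w):
    """Weighted L1 distance: sum_k w_k * |x_k - y_k|, with w_k >= 0."""
    x, y, w = (np.asarray(a, dtype=float) for a in (x, y, w))
    if np.any(w < 0):
        raise ValueError("weights must be non-negative")
    # Because w >= 0, |w_k| = w_k and no absolute value of the weights is needed.
    return float(np.sum(w * np.abs(x - y)))

print(weighted_manhattan([1, 2], [4, 0], [0.5, 1.0]))  # 0.5*3 + 1.0*2 = 3.5
```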
“…The numerical experiments illustrate the proposed distance metric algorithm. The problem of supervised distance metric learning is cast into pairwise constraints: equivalence constraints, where pairs of data points belong to the same class, and inequivalence constraints, where pairs of data points belong to different classes. Metric learning approaches were reviewed in [1,5,14,29]. The basic idea underlying the metric learning solution is that the distance between similar objects should be smaller than the distance between different objects.…”
(mentioning)
confidence: 99%
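The split into equivalence and inequivalence constraints described above can be sketched directly from class labels (an illustrative helper, not code from the cited work):

```python
from itertools import combinations

def pairwise_constraints(labels):
    """Split all index pairs into equivalence (same class) and
    inequivalence (different class) constraint sets."""
    similar, dissimilar = [], []
    for i, j in combinations(range(len(labels)), 2):
        (similar if labels[i] == labels[j] else dissimilar).append((i, j))
    return similar, dissimilar

S, D = pairwise_constraints([0, 0, 1, 1])
print(S)  # [(0, 1), (2, 3)]
print(D)  # [(0, 2), (0, 3), (1, 2), (1, 3)]
```

A learned metric is then trained to keep distances small over pairs in `S` and large over pairs in `D`.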
“…It is pointed out by Bellet et al. [1] in their review paper that metric learning aims to adapt a pairwise real-valued metric function, for example, the Mahalanobis distance or the Euclidean distance, to a problem of interest using the information provided by training data. A detailed description of metric learning approaches is also presented by Le Capitaine [5] and by Kulis [14]. The basic idea underlying the metric learning solution is that the distance between similar objects should be smaller than the distance between different objects.…”
Section: Introduction (mentioning)
confidence: 99%
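The Mahalanobis distance mentioned in the quote above is the standard parametric family in this line of work: d_M(x, y) = sqrt((x − y)ᵀ M (x − y)) for a positive semidefinite matrix M learned from data. A minimal sketch (the matrix below is a hypothetical learned parameter, not one from the paper):

```python
import numpy as np

def mahalanobis(x, y, M):
    """Mahalanobis-form distance sqrt((x - y)^T M (x - y)); M must be PSD."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.sqrt(d @ M @ d))

M = np.array([[2.0, 0.0],
              [0.0, 0.5]])  # hypothetical learned PSD matrix
print(mahalanobis([1, 0], [0, 0], M))  # sqrt(2) ≈ 1.4142
```

With M equal to the identity this reduces to the Euclidean distance, which is why the Mahalanobis form is described as adapting a pairwise metric to the problem.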