2011
DOI: 10.1007/978-3-642-21257-4_62

An Online Metric Learning Approach through Margin Maximization

Cited by 4 publications (6 citation statements)
References 4 publications
“…Alternative formulations of this idea have already been proposed and studied [10], [11], [15]. As the purpose of the present work is to compare batch and online formulations, only the case of quadratic penalties has been considered.…”
Section: B. Online Formulation (mentioning)
confidence: 99%
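The passage above restricts the online loss to quadratic penalties when comparing batch and online formulations. As an illustration only (the exact loss is not given in this snippet), a pairwise margin violation can be penalized linearly (hinge) or quadratically (squared hinge); the margin convention and function names below are assumptions, not the formulation of the cited papers.

```python
import numpy as np

def pair_margin_violation(M, b, x_i, x_j, same_class, margin=1.0):
    """Signed violation of a pairwise margin constraint under metric M.

    Assumed convention: same-class pairs should satisfy d_M(x_i, x_j) <= b - margin,
    different-class pairs d_M(x_i, x_j) >= b + margin.
    """
    diff = x_i - x_j
    d = float(diff @ M @ diff)  # squared Mahalanobis distance
    return (d - (b - margin)) if same_class else ((b + margin) - d)

def linear_penalty(v):
    """Hinge penalty: grows linearly with the violation."""
    return max(0.0, v)

def quadratic_penalty(v):
    """Squared-hinge penalty: the 'quadratic penalties' case mentioned above."""
    return max(0.0, v) ** 2
```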
“…Instead of considering constrained optimization using all available instances, the DML problem can be solved in a more convenient way both from the point of view of computation and robustness by using an online learning approach [10], [15]. At each time step, k, a new optimization problem is formulated and solved using a single particular instance (a labeled pair, (x_i, x_j), and their corresponding labels, (c_i, c_j)), that is made available to the system.…”
Section: B. Online Formulation (mentioning)
confidence: 99%
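The statement above describes the online scheme: at every step k a single labeled pair is revealed and a small optimization problem is solved to update the metric. A minimal sketch of such a loop is given below, using a simple gradient-style correction whenever the margin is violated; the step size, the projection onto positive semidefinite matrices, and the helper names are assumptions, not the update rule of the cited papers.

```python
import numpy as np

def project_psd(M):
    """Project a symmetric matrix onto the positive semidefinite cone."""
    w, V = np.linalg.eigh((M + M.T) / 2.0)
    return V @ np.diag(np.clip(w, 0.0, None)) @ V.T

def online_dml(pairs, dim, b=1.0, margin=1.0, eta=0.1):
    """Process one labeled pair per step k, updating M only on margin violations."""
    M = np.eye(dim)
    for x_i, x_j, c_i, c_j in pairs:        # one labeled pair per time step k
        diff = x_i - x_j
        d = float(diff @ M @ diff)
        same = (c_i == c_j)
        # assumed convention: same-class pairs closer than b - margin,
        # different-class pairs farther than b + margin
        viol = (d - (b - margin)) if same else ((b + margin) - d)
        if viol > 0.0:
            grad = np.outer(diff, diff)      # gradient of d with respect to M
            M = M - eta * viol * (grad if same else -grad)
            M = project_psd(M)               # keep M a valid (PSD) metric
    return M, b
```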
“…Instead of considering constrained optimization using all available instances, the DML problem can be solved in a more convenient way both from the point of view of computation and robustness by using an online learning approach [10,17].…”
Section: Online Formulation Using Margins (mentioning)
confidence: 99%
“…The set is given to the algorithm at least twice and then this process is repeated until a maximum number of iterations has been reached. This maximum number of iterations has been established as the minimum between 20 % of all possible pairs of training samples (as in previous studies [17]) and 50 times the number of pairs in P (which is by far more than enough for larger databases with a small number of classes). This means that at least t steps are executed, with t = min(…”
Section: Experimental Settings (mentioning)
confidence: 99%
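The quoted setting caps the number of online steps at the minimum of 20 % of all possible training pairs and 50 times |P|. A small helper computing that cap, under the assumption that "all possible pairs" means the N(N-1)/2 unordered pairs of training samples, might look like:

```python
def max_iterations(n_samples, n_pairs_in_P):
    """Iteration cap: min(20% of all possible pairs, 50 * |P|) (assumed reading)."""
    all_pairs = n_samples * (n_samples - 1) // 2   # unordered pairs of samples
    return min(int(0.2 * all_pairs), 50 * n_pairs_in_P)

# e.g. 1000 training samples and |P| = 400 pairs:
# all_pairs = 499500, so the cap is min(99900, 20000) = 20000
print(max_iterations(1000, 400))
```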
“…Instead of considering constrained optimization using all information (examples) available, the above problem can be solved in a more convenient way both from the point of view of computation and robustness by using an online learning approach [4,8]. Under this sequential scheme, at each step k, a particular model formed by the pair (M_k, b_k) is available to make a prediction over the labeled pair t_k = (x_i, x_j, y_ij) which is revealed to the system at this step.…”
Section: The Separable Case (mentioning)
confidence: 99%
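Here the model at step k is the pair (M_k, b_k), used to predict the label y_ij of a pair before the true label is revealed. A plausible reading is a threshold rule on the Mahalanobis distance: predict "same class" when the distance falls below b_k. The sketch below assumes that convention and a ±1 label encoding; both are assumptions, not details stated in the snippet.

```python
import numpy as np

def predict_pair(M_k, b_k, x_i, x_j):
    """Predict y_ij = +1 (same class) if d_{M_k}(x_i, x_j) < b_k, else -1."""
    diff = x_i - x_j
    d = float(diff @ M_k @ diff)
    return 1 if d < b_k else -1
```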