2021
DOI: 10.48550/arxiv.2111.12050
Preprint
Simple Stochastic and Online Gradient Descent Algorithms for Pairwise Learning

Abstract: Pairwise learning refers to learning tasks where the loss function depends on a pair of instances. It instantiates many important machine learning tasks such as bipartite ranking and metric learning. A popular approach to handling streaming data in pairwise learning is an online gradient descent (OGD) algorithm, in which the current instance must be paired with a sufficiently large buffering set of previous instances, which leads to a scalability issue. In this paper, we propose simple sto…
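To make the buffering approach concrete, here is a minimal sketch, not the paper's algorithm: the linear scorer, hinge-style pairwise loss, reservoir-sampled buffer, and step-size schedule are all illustrative assumptions. It pairs each incoming instance with a buffer of previously seen instances and takes an averaged gradient step, so the per-step cost grows with the buffer size.

```python
import numpy as np

def pairwise_loss_grad(w, x_i, y_i, x_j, y_j):
    """Subgradient of a hinge-style pairwise loss for a linear scorer w.
    Penalizes ranking x_i below x_j when y_i > y_j (illustrative choice)."""
    if y_i == y_j:
        return np.zeros_like(w)
    sign = 1.0 if y_i > y_j else -1.0
    margin = sign * w.dot(x_i - x_j)
    if margin >= 1.0:               # correctly ranked with margin: no update
        return np.zeros_like(w)
    return -sign * (x_i - x_j)      # subgradient of max(0, 1 - margin)

def ogd_with_buffer(stream, dim, buffer_size=100, eta=0.01, seed=0):
    """OGD for pairwise learning: each new instance is paired with a
    reservoir-sampled buffer of previous instances. The buffer size controls
    the scalability/accuracy trade-off mentioned in the abstract."""
    rng = np.random.default_rng(seed)
    w = np.zeros(dim)
    buffer = []
    for t, (x_t, y_t) in enumerate(stream, start=1):
        if buffer:
            # average the pairwise gradients of the new point against the buffer
            g = np.mean([pairwise_loss_grad(w, x_t, y_t, x_b, y_b)
                         for x_b, y_b in buffer], axis=0)
            w -= eta / np.sqrt(t) * g
        # reservoir sampling keeps the buffer a uniform sample of past instances
        if len(buffer) < buffer_size:
            buffer.append((x_t, y_t))
        elif rng.integers(t) < buffer_size:
            buffer[rng.integers(buffer_size)] = (x_t, y_t)
    return w
```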

Cited by 1 publication (1 citation statement); references 18 publications.
“…[25]) et al. During the past decades, various methods for pairwise learning have been proposed, and the performance of kernel-based regularized pairwise learning models has been extensively studied (see e.g. [20,24,33,34,46,48,54]). For instance, [15] studies the learning rates of a pairwise ranking model in the framework of misranking loss and empirical risk minimization, [37] extends the result to general convex loss functions, and later [59] improves the learning rates for the regularized least squares ranking algorithm.…”
Citation type: mentioning; confidence: 99%