2019
DOI: 10.1142/s0219530519400062

Stability and optimization error of stochastic gradient descent for pairwise learning

Abstract: In this paper we study the stability of stochastic gradient descent (SGD) and its trade-off with optimization error in the pairwise learning setting. Pairwise learning refers to learning tasks whose loss function depends on pairs of instances; notable examples include bipartite ranking, metric learning, area under the ROC curve (AUC) maximization, and the minimum error entropy (MEE) principle. Our contribution is twofold. Firstly, we establish the stability results of SGD for pairwise learnin…
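For illustration, here is a minimal sketch of SGD for pairwise learning. It uses a pairwise squared loss on a linear scorer over positive/negative pairs (an AUC-maximization-style surrogate) with a decaying step size; the specific loss, model, and step-size schedule are assumptions for the example, not necessarily the paper's exact setting.

```python
import numpy as np

def pairwise_sgd(X, y, T=10_000, eta0=0.1, seed=0):
    """Illustrative SGD for pairwise learning: each iteration samples a
    (positive, negative) pair and takes a gradient step on the pairwise
    squared loss (1 - w.(x_i - x_j))^2, an AUC-style surrogate."""
    rng = np.random.default_rng(seed)
    pos = np.flatnonzero(y == 1)   # indices of positive examples
    neg = np.flatnonzero(y == -1)  # indices of negative examples
    w = np.zeros(X.shape[1])
    for t in range(1, T + 1):
        i, j = rng.choice(pos), rng.choice(neg)
        diff = X[i] - X[j]
        margin = w @ diff
        grad = -2.0 * (1.0 - margin) * diff  # gradient of (1 - w.diff)^2
        w -= (eta0 / np.sqrt(t)) * grad      # decaying step eta0 / sqrt(t)
    return w
```

The eta0/sqrt(t) schedule is a standard choice for convex losses; in SGD stability analyses, such step-size and iteration-count choices are typically what govern the stability/optimization trade-off the paper studies.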

Cited by 10 publications (7 citation statements) · References: 33 publications

Citation statements (ordered by relevance):
“…Consequently, on average, the solution was not optimal, because of the interaction between stochasticity and the nonlinearities of the models. Errors of various forms of stochastic optimization have been described in other studies (Ingber, 1993; Shen et al., 2020). For example, errors could arise if the sampling were not truly stochastic.…”
Section: Minimization of Regret
confidence: 91%
“…A (randomized) algorithm A for pairwise learning is called ε-uniformly argument stable if for all neighboring datasets S, S′ ∈ Z^n we have E_A[‖A(S) − A(S′)‖₂] ≤ ε [7]. The connection between uniform stability for pairwise learning and its generalization has been established in the literature [1,34]. Lemma 1.…”
Section: Stability and Generalization Errors
confidence: 99%
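As a rough empirical illustration of this definition (not the paper's analysis), one can couple two SGD runs on neighboring datasets S and S′ that differ in a single example and measure ‖A(S) − A(S′)‖₂. The sketch below assumes a training routine with the interface of pairwise_sgd above; it estimates the gap for one neighboring pair, not the supremum over all of them.

```python
import numpy as np

def argument_stability_gap(X, y, train, i=0, seed=0):
    """Empirical proxy for uniform argument stability: train on S and on
    a neighboring S' (example i replaced at random), reusing the same
    training seed so both runs share the same random stream, and return
    ||A(S) - A(S')||_2."""
    rng = np.random.default_rng(seed + 1)    # independent of training seed
    X2, y2 = X.copy(), y.copy()
    X2[i] = rng.standard_normal(X.shape[1])  # replace example i's features
    y2[i] = rng.choice([-1, 1])              # ...and its label
    w_S = train(X, y, seed=seed)
    w_Sp = train(X2, y2, seed=seed)
    return float(np.linalg.norm(w_S - w_Sp))
```

Averaging this gap over many replaced indices i and many seeds gives a crude empirical stand-in for the ε in the definition; theoretical results like those cited above bound the uniform (worst-case) version.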
“…[32], [89] consider differential privacy problems in the pairwise setting. [77] uses stability to study the trade-off between the generalization error and the optimization error for a variant of pairwise SGD. [39] initiates the study of the pairwise learning framework via algorithmic stability.…”
Section: Related Work
confidence: 99%
“…Secondly, they mostly require convexity conditions [41]. Among related work on the unified pairwise framework, [34], [52], [85] investigate online pairwise learning, which differs from the offline setting of this paper, while [67], [77] study variants of stochastic gradient descent (SGD). The works most closely related to this paper are [39], [40], [41].…”
Section: Introduction
confidence: 99%