2016
DOI: 10.48550/arxiv.1603.00717
Preprint
Learning Supervised PageRank with Gradient-Based and Gradient-Free Optimization Methods

Lev Bogolubsky,
Pavel Dvurechensky,
Alexander Gasnikov
et al.

Abstract: In this paper, we consider a non-convex loss-minimization problem of learning Supervised PageRank models, which can account for some properties not considered by classical approaches such as the classical PageRank model. We propose gradient-based and random gradient-free methods to solve this problem. Our algorithms are based on the concept of an inexact oracle and, unlike the state-of-the-art gradient-based method, we manage to provide theoretical convergence rate guarantees for both of them. In par…
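The random gradient-free approach mentioned in the abstract can be illustrated with a minimal sketch of a two-point finite-difference scheme. This is a generic illustration, not the paper's actual algorithm: the toy objective, the smoothing parameter `mu`, and the fixed step size are placeholders assumed for the example.

```python
import numpy as np

def random_gradient_free_step(f, x, mu=1e-4, step=1e-2, rng=None):
    """One iteration of a generic two-point random gradient-free method.

    f    : callable returning a (possibly inexact) objective value
    x    : current iterate (1-D numpy array)
    mu   : smoothing parameter for the finite-difference estimate
    step : step size (in theory tied to the Lipschitz constant)
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)            # random search direction
    # Two-point estimate of the directional derivative along u
    g = (f(x + mu * u) - f(x)) / mu * u
    return x - step * g

# Illustrative use on a toy smooth non-convex function
# (not the Supervised PageRank loss from the paper).
if __name__ == "__main__":
    f = lambda x: np.sum(x**2) + 0.1 * np.sin(10 * x).sum()
    x = np.ones(5)
    for _ in range(2000):
        x = random_gradient_free_step(f, x)
    print("final objective:", f(x))
```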

Cited by 8 publications (18 citation statements)
References 11 publications
“…We first show finite termination of the line search subroutine at each iteration, and establish a bound on the total number of function evaluations needed for its execution. The result is a generalization of the arguments in [7,44] for the case of relative smoothness in the nonconvex case. Lemma 5.12.…”
Section: Analysis of AHBA(µ)
Confidence: 69%
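The quoted statement concerns a line-search subroutine whose finite termination and total number of function evaluations are bounded. Below is a minimal sketch of a generic Armijo backtracking line search that reports its evaluation count; it is assumed for illustration and is not the AHBA(µ) subroutine analyzed in the citing paper.

```python
import numpy as np

def backtracking_line_search(f, grad_f, x, direction,
                             alpha0=1.0, beta=0.5, c=1e-4, max_iter=50):
    """Generic Armijo backtracking line search.

    Returns the accepted step size and the number of function
    evaluations used, the quantity that per-iteration termination
    bounds are usually stated for.
    """
    fx = f(x)
    slope = np.dot(grad_f(x), direction)    # directional derivative
    alpha, evals = alpha0, 1                # one evaluation for f(x)
    for _ in range(max_iter):
        evals += 1
        if f(x + alpha * direction) <= fx + c * alpha * slope:
            return alpha, evals             # sufficient decrease reached
        alpha *= beta                       # shrink the step and retry
    return alpha, evals
```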
“…The same type of dependence on the accuracy and the size of the problem can be seen for the working time (fig. 3). Our experiments on MNIST data set show (see Figures 2, 3, 7) that in practice the bound is better. Strictly speaking, for the moment we can not verify all the details of the proof of estimate Õ(n²/ε).…”
Section: Numerical Illustration
Confidence: 82%
“…Further, we define Bregman divergence V[y](x) := d(x) − d(y) − ⟨∇d(y), x − y⟩. Next we define the inexact model of the objective function, which generalizes the inexact oracle of [19] (see also [24,10,28,35,60,62]). Definition 1.…”
Section: Gradient Methods With Inexact Model of the Objective
Confidence: 99%
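The quoted definition can be made concrete with a small sketch. The two prox-functions below (squared Euclidean norm and negative entropy on the simplex) are standard illustrative choices assumed for the example, not ones taken from the citing paper.

```python
import numpy as np

def bregman_divergence(d, grad_d, x, y):
    """Bregman divergence V[y](x) = d(x) - d(y) - <grad d(y), x - y>."""
    return d(x) - d(y) - np.dot(grad_d(y), x - y)

# Squared Euclidean prox-function: V[y](x) reduces to (1/2)||x - y||^2.
d_euclid = lambda x: 0.5 * np.dot(x, x)
grad_euclid = lambda x: x

# Negative entropy on the simplex: V[y](x) reduces to the KL divergence,
# the usual choice for simplex-constrained problems such as PageRank models.
d_entropy = lambda x: np.sum(x * np.log(x))
grad_entropy = lambda x: np.log(x) + 1.0

x = np.array([0.2, 0.3, 0.5])
y = np.array([0.4, 0.4, 0.2])
print(bregman_divergence(d_euclid, grad_euclid, x, y))    # 0.5 * ||x - y||^2
print(bregman_divergence(d_entropy, grad_entropy, x, y))  # KL(x || y)
```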
“…The literature on first-order methods [8,15,21] considers also gradient methods with inexact information, relaxing the model (2) to…”
Section: Introduction
Confidence: 99%
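The snippet is cut off before the relaxed condition itself, and model (2) of the citing paper is not reproduced here. As background only, the standard (δ, L)-inexact oracle relaxation used in this line of work (going back to Devolder, Glineur and Nesterov) reads as follows; it is offered as context, not as the precise condition from the citing paper.

```latex
% Standard (\delta, L)-inexact oracle: at a query point y the oracle
% returns a pair (f_\delta(y), g_\delta(y)) such that for all x
\[
  0 \;\le\; f(x) - f_{\delta}(y) - \langle g_{\delta}(y),\, x - y \rangle
    \;\le\; \frac{L}{2}\,\|x - y\|^{2} + \delta
  \qquad \text{for all } x .
\]
```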