2020
DOI: 10.1137/19m1286025

A Unified Adaptive Tensor Approximation Scheme to Accelerate Composite Convex Optimization

Cited by 15 publications (9 citation statements)
References 31 publications
“…Proof. We follow [20] but we provide the proof details for the reader's convenience. First, we observe that multiplying M_k by two will not stop until the line search stopping criterion is satisfied.…”
Section: Properties of the APDAMD Algorithm (mentioning)
confidence: 99%
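The quoted passage refers to a backtracking-style line search in which the regularization estimate M_k is repeatedly doubled until an acceptance test holds. Below is a minimal sketch of that pattern, assuming a generic acceptance test; the names M0, trial_step_accepted, and x are illustrative placeholders, not the cited paper's actual notation or API.

```python
# Minimal sketch of a doubling line search on a regularization estimate M_k.
# trial_step_accepted stands in for the paper's line-search stopping criterion.

def doubling_line_search(x, M0, trial_step_accepted, max_doublings=60):
    """Double the regularization estimate M until the trial step is accepted."""
    M = M0
    for _ in range(max_doublings):
        if trial_step_accepted(x, M):  # line-search stopping criterion holds
            return M                   # doubling of M_k stops here
        M *= 2.0                       # otherwise multiply M_k by two and retry
    raise RuntimeError("line search did not terminate within max_doublings")
```

In adaptive schemes of this kind the accepted value is typically carried over (and often halved) as the starting estimate of the next iteration, which is what keeps the total number of doublings bounded over the whole run; bounding that count is the kind of property the quoted proof tracks.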
“…However, we also show that (while somewhat helpful for O_cr with a conservative choice of H), adding momentum to well-tuned or adaptive second-order methods is harmful in logistic regression: simply iterating our oracle (or, better yet, applying Newton's method) dramatically outperforms all "accelerated" algorithms. This important fact seems to have gone unobserved in the literature on accelerated second-order methods, despite logistic regression appearing in many related experiments [40,16,30,25]. Simply iterating our adaptive oracle outperforms the classical accelerated gradient descent, and performs comparably to L-BFGS.…”
Section: Our Contributions (mentioning)
confidence: 94%
“…We propose a novel way to update the regularization parameters in the third-order models used to compute trial points. Different from existing adaptive tensor methods [13,16], in our new methods the regularization parameters are adjusted taking into account the progress of the inner solver. When the regularization parameter is sufficiently large, the inner solver is guaranteed to have linear rate of convergence.…”
Section: Motivation (mentioning)
confidence: 99%
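The last quote describes regularization parameters of a third-order (tensor) model being adjusted according to how well the inner solver is progressing. The following is a hedged sketch of that generic pattern only, not the cited authors' algorithm; inner_solver_step and inner_residual are hypothetical placeholders for whatever inner method and accuracy measure a concrete implementation uses.

```python
# Hedged sketch: grow the tensor-model regularization parameter sigma when the
# inner solver makes poor progress; a large enough sigma is what (per the
# quote) guarantees a linear inner convergence rate.

def adaptive_tensor_iteration(x, sigma, inner_solver_step, inner_residual,
                              tol=1e-8, max_inner=100, stall=0.9, growth=2.0):
    """Solve the regularized tensor model, enlarging sigma if progress stalls."""
    y = x
    prev = inner_residual(x, y, sigma)
    for _ in range(max_inner):
        y = inner_solver_step(x, y, sigma)
        res = inner_residual(x, y, sigma)
        if res <= tol:          # inner model solved to the requested accuracy
            return y, sigma
        if res > stall * prev:  # slow progress: enlarge the regularization
            sigma *= growth
        prev = res
    return y, sigma
```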