2020
DOI: 10.48550/arxiv.2010.04786
Preprint

Reparametrizing gradient descent

Abstract: In this work, we propose an optimization algorithm which we call norm-adapted gradient descent. This algorithm is similar to other gradient-based optimization algorithms like Adam or Adagrad in that it adapts the learning rate of stochastic gradient descent at each iteration. However, rather than using statistical properties of observed gradients, norm-adapted gradient descent relies on a first-order estimate of the effect of a standard gradient descent update step, much like the Newton-Raphson method in many …
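The abstract describes the method only at a high level, so the short Python sketch below shows one plausible reading of it: each update chooses the step size so that a first-order (linear) model of the loss along the plain gradient step reaches a target value, in the spirit of a Newton-Raphson root-finding step. The function names, the target-loss parameter, and the max_eta safeguard are illustrative assumptions, not details taken from the paper.

import numpy as np

def norm_adapted_step(x, f, grad_f, target=0.0, max_eta=1.0):
    # One hypothetical update: pick eta so that the first-order estimate
    # f(x) - eta * ||g||^2 of the loss after a plain gradient step hits
    # `target` (a Newton-Raphson-style choice of step size).
    g = grad_f(x)
    sq_norm = float(np.dot(g, g))
    if sq_norm == 0.0:
        return x  # zero gradient: the linear model gives no direction
    eta = (f(x) - target) / sq_norm
    eta = min(eta, max_eta)  # safeguard against huge steps (an assumption)
    return x - eta * g

# Toy usage on the quadratic loss f(x) = 0.5 * ||x||^2.
f = lambda x: 0.5 * float(np.dot(x, x))
grad_f = lambda x: np.asarray(x, dtype=float)

x = np.array([3.0, -4.0])
for step in range(5):
    x = norm_adapted_step(x, f, grad_f)
    print(step, f(x))

On this toy quadratic the rule yields eta = 0.5 at every iteration, so the loss shrinks by a factor of four per step; whether this matches the paper's actual update is an assumption based only on the abstract.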

Cited by 0 publications
References 1 publication