2023
DOI: 10.1007/s10107-023-01943-7

Gradient regularization of Newton method with Bregman distances

Abstract: In this paper, we propose a first second-order scheme based on arbitrary non-Euclidean norms, incorporated via Bregman distances. These distances are introduced directly into the Newton iterate, with a regularization parameter proportional to the square root of the norm of the current gradient. For the basic scheme, as applied to the composite convex optimization problem, we establish a global convergence rate of the order $O(k^{-2})$ …
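The regularized iterate described in the abstract admits a compact numerical sketch. Below is a minimal illustration assuming the Euclidean setting, where the Bregman distance reduces to the squared Euclidean distance and the regularizer becomes a multiple of the identity; the scaling constant alpha and the test function are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def grad_reg_newton_step(grad, hess, x, alpha=1.0):
    """One gradient-regularized Newton step (Euclidean sketch).

    The Hessian is shifted by a multiple of the identity whose size is
    proportional to the square root of the current gradient norm, as the
    abstract describes; alpha is a hypothetical scaling constant.
    """
    g = grad(x)
    lam = alpha * np.sqrt(np.linalg.norm(g))   # regularizer ~ sqrt(||grad||)
    H = hess(x) + lam * np.eye(x.size)         # regularized Newton matrix
    return x - np.linalg.solve(H, g)           # damped Newton iterate

# Usage sketch: minimize the separable convex function
# f(x) = sum_i (exp(x_i) - x_i), whose unique minimizer is x = 0.
grad = lambda x: np.exp(x) - 1.0
hess = lambda x: np.diag(np.exp(x))

x = np.full(5, 2.0)
for _ in range(30):
    x = grad_reg_newton_step(grad, hess, x)
print(np.linalg.norm(grad(x)))                 # ~0 at convergence
```

Because the shift lam vanishes as the gradient norm decays, the iteration behaves like a damped Newton method far from the solution and approaches the pure Newton method near it.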

Cited by 5 publications (1 citation statement)
References 19 publications
“…Only a few vectors of the same dimension as the problem are needed for storage. In contrast, approaches such as the Gauss-Newton or BFGS methods require storing matrices that can become exceedingly large when applied to high-dimensional problems [11]. CGM combines the advantages of the conjugate gradient method with the improved convergence properties of the Hestenes-Stiefel method.…”
Section: Introduction (mentioning, confidence: 99%)
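As a complement to the excerpt above, here is a minimal sketch of nonlinear conjugate gradient with the Hestenes-Stiefel beta, illustrating the low-memory point it makes: only a few length-n vectors are kept between iterations. The Armijo backtracking search and the quadratic test problem are assumptions of this sketch, not part of the cited work.

```python
import numpy as np

def hestenes_stiefel_cg(f, grad, x, steps=100, tol=1e-8):
    """Nonlinear conjugate gradient with the Hestenes-Stiefel beta.

    Storage is a handful of length-n vectors (x, g, d), in contrast to
    the matrices required by Gauss-Newton or BFGS. Armijo backtracking
    stands in for a line search and is an assumption of this sketch.
    """
    g = grad(x)
    d = -g                                     # initial steepest-descent direction
    for _ in range(steps):
        if g @ d >= 0:                         # safeguard: restart on a non-descent direction
            d = -g
        t, fx, slope = 1.0, f(x), g @ d        # Armijo backtracking line search
        while f(x + t * d) > fx + 1e-4 * t * slope:
            t *= 0.5
        x = x + t * d
        g_new = grad(x)
        if np.linalg.norm(g_new) < tol:
            break
        y = g_new - g                          # gradient change
        beta = (g_new @ y) / (d @ y)           # Hestenes-Stiefel formula
        d = -g_new + beta * d                  # new conjugate direction
        g = g_new
    return x

# Usage sketch on a strictly convex quadratic f(x) = 0.5 * x @ A @ x.
A = np.diag([1.0, 4.0, 9.0])
x_min = hestenes_stiefel_cg(lambda x: 0.5 * x @ A @ x, lambda x: A @ x, np.ones(3))
print(np.round(x_min, 6))                      # ~[0, 0, 0]
```

Note that only x, g, g_new, and d persist across iterations, which is the storage advantage the excerpt contrasts with matrix-based methods.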