2016
DOI: 10.1080/10556788.2016.1155213
A cubic regularization algorithm for unconstrained optimization using line search and nonmonotone techniques

Cited by 20 publications (13 citation statements)
References 21 publications
“…We thus ignore them here, but note that this analysis also ensures global convergence to first-order stationary points. [Footnote 8: Hence the subscript a, for “accurate”.]…”
Section: The Curtis-Robinson-Samadi Class
Citation type: mentioning (confidence: 99%)
“…Proof. First, we observe that if λ = 0, then s = 0 is the only point that satisfies (2)–(3). So, in the following we consider the case in which λ > 0 (i.e., s ≠ 0).…”
Section: Properties of Stationary Points
Citation type: mentioning (confidence: 99%)
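Conditions (2)–(3) are not reproduced in this excerpt. In the standard cubic-regularization setting they are usually the first- and second-order optimality conditions for a global minimizer s of the model; the following is a reconstruction under that assumption, not a quote from the cited paper:

    (Q + λI) s = −c,   Q + λI ⪰ 0,   with λ = σ‖s‖.

Since σ > 0, the multiplier λ = σ‖s‖ vanishes exactly when s = 0, which is why the excerpt identifies λ > 0 with s ≠ 0.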
“…where c ∈ R^n, Q is a symmetric n × n matrix, σ is a positive real number and, here and in the rest of the article, ‖·‖ is the Euclidean norm. In recent years, there has been a growing interest in studying the properties of problem (1), since functions of the form of m(s) are used as local models (to be minimized) in many algorithmic frameworks for unconstrained optimization [14,18,19,17,6,7,12,1,2,4,11,3,5], which have even been extended to the constrained case [16,8,2]. To be more specific, let us consider the unconstrained optimization problem min_{x ∈ R^n} f(x), where f : R^n → R is a twice continuously differentiable function.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
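Problem (1) is likewise not shown in the excerpt. Given the quantities it defines (c, Q, σ and the Euclidean norm), it is presumably the usual cubic-model minimization; again a reconstruction, not a quote:

    min_{s ∈ R^n}  m(s) = c^T s + (1/2) s^T Q s + (σ/3) ‖s‖³,

where, at an iterate x_k of a method for min f(x), one typically takes c = ∇f(x_k) and Q = ∇²f(x_k) (or an approximation of it).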
“…This research area has been remarkably active in recent years (see, for instance, [24,30,6,8,10,3,20,4,5,28,21,2,1,13]). Adaptive regularization algorithms, the class of methods considered here, compute steps from one iterate to the next by building and (often approximately) minimizing a model consisting of a truncated Taylor expansion of f, which is then “regularized” by adding a suitable power of the norm of the putative step.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
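As a rough illustration of the loop this excerpt describes (minimize a regularized second-order Taylor model, then adapt the regularization weight σ to how well the model predicted the actual decrease), here is a minimal sketch in Python. The subproblem is solved only crudely, by a Cauchy-point step along the negative gradient, and the acceptance threshold and update factors are illustrative choices rather than parameters from any of the cited papers.

    import numpy as np

    def cauchy_step(g, H, sigma):
        # Minimize m(-a*g) = -a*||g||^2 + 0.5*a^2*(g^T H g) + (sigma/3)*a^3*||g||^3
        # over a >= 0: the positive root of sigma*||g||^3 * a^2 + (g^T H g)*a - ||g||^2 = 0.
        gn = np.linalg.norm(g)
        qa, qb, qc = sigma * gn**3, g @ (H @ g), -gn**2
        alpha = (-qb + np.sqrt(qb**2 - 4.0 * qa * qc)) / (2.0 * qa)
        return -alpha * g

    def arc_sketch(f, grad, hess, x, sigma=1.0, tol=1e-6, max_iter=500):
        # Adaptive cubic regularization loop (illustrative sketch only).
        for _ in range(max_iter):
            g, H = grad(x), hess(x)
            if np.linalg.norm(g) <= tol:
                break
            s = cauchy_step(g, H, sigma)  # crude approximate model minimizer
            predicted = -(g @ s + 0.5 * s @ (H @ s)
                          + (sigma / 3.0) * np.linalg.norm(s) ** 3)
            rho = (f(x) - f(x + s)) / max(predicted, 1e-16)
            if rho >= 0.1:                # model was reliable: accept step, relax sigma
                x = x + s
                sigma = max(0.5 * sigma, 1e-8)
            else:                         # model was poor: reject step, regularize more
                sigma *= 2.0
        return x

    # Usage on f(x) = ||x||^2 + 0.25*(||x||^2)^2, whose minimizer is the origin.
    f = lambda x: x @ x + 0.25 * (x @ x) ** 2
    grad = lambda x: 2.0 * x + (x @ x) * x
    hess = lambda x: (2.0 + x @ x) * np.eye(x.size) + 2.0 * np.outer(x, x)
    print(arc_sketch(f, grad, hess, np.array([1.0, -2.0])))

The acceptance test based on the ratio rho of actual to predicted decrease, and the doubling/halving of sigma on failure/success, mirror the generic adaptive-regularization pattern the excerpt summarizes.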