2001
DOI: 10.1137/s1052623400307950

Reduced-Hessian Quasi-Newton Methods for Unconstrained Optimization

Abstract: Quasi-Newton methods are reliable and efficient on a wide range of problems, but they can require many iterations if no good estimate of the Hessian is available or the problem is ill-conditioned. Methods that are less susceptible to ill-conditioning can be formulated by exploiting the fact that quasi-Newton methods accumulate second-derivative information in a sequence of expanding manifolds. These modified methods represent the approximate second derivatives by a smaller reduced approximate Hessian. The availab…
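As context for the truncated abstract, here is a minimal sketch of the representation such methods rely on, using assumed notation (B_k for the quasi-Newton approximation, Z_k for an orthonormal basis, sigma for the initial scaling); it is not quoted from the paper. If B_0 = sigma*I and the iterates are generated by the method itself, the approximate Hessian differs from sigma*I only on the subspace spanned by the gradients seen so far:

\[
B_k = Z_k\,\bigl(Z_k^{T} B_k Z_k\bigr)\,Z_k^{T} + \sigma\,\bigl(I - Z_k Z_k^{T}\bigr),
\qquad \operatorname{range}(Z_k) = \operatorname{span}\{g_0, g_1, \dots, g_k\},
\]

so only the reduced Hessian Z_k^T B_k Z_k, whose dimension grows by at most one per iteration, needs to be stored and updated.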

Cited by 54 publications (45 citation statements); references 20 publications. Citing publications span 2003 to 2024.
“…These reduced-Hessian methods exploit the fact that quasi-Newton methods accumulate approximate curvature in a sequence of expanding subspaces (see Gill and Leonard [13]). Reduced-Hessian methods represent the approximate Hessian using a smaller reduced matrix that increases in dimension at each iteration.…”
mentioning
confidence: 99%
“…In this paper we propose the limited-memory method L-RHR, which may be viewed as a limited-memory variant of the reduced-Hessian method RHR of Gill and Leonard [13]. L-RHR has two features in common with the limited-memory method of Siegel [33]: a basis of search directions is maintained for the sequence of m-dimensional subspaces, and an implicit orthogonal decomposition is used to define an orthonormal basis for each subspace.…”
mentioning
confidence: 99%
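To make the subspace bookkeeping in the quoted description concrete, the sketch below maintains an explicit orthonormal basis for the span of recent search directions via Gram-Schmidt. It is an assumption-laden simplification: the function name and the crude handling of the memory limit m are hypothetical, and L-RHR itself keeps the basis implicitly through an orthogonal factorization of the stored directions.

```python
import numpy as np

def append_direction(Z, p, m, tol=1e-12):
    """Add a new search direction p to the orthonormal basis Z (n x q) of the
    current subspace, keeping at most m basis vectors. Explicit Gram-Schmidt
    sketch only; not the implicit factorization used by L-RHR."""
    if Z.size == 0:
        return (p / np.linalg.norm(p)).reshape(-1, 1)
    # Component of p outside the current subspace.
    r = p - Z @ (Z.T @ p)
    nrm = np.linalg.norm(r)
    if nrm > tol * np.linalg.norm(p):
        # p adds a new dimension: append the normalized residual.
        Z = np.hstack([Z, (r / nrm).reshape(-1, 1)])
    if Z.shape[1] > m:
        # Crude memory limit: discard the oldest basis vector.
        Z = Z[:, 1:]
    return Z
```

With m much smaller than the number of variables n, the reduced Hessian restricted to this subspace stays at most m by m, which is what gives a limited-memory variant its fixed storage cost.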
“…Therefore, only algorithms approximating the Hessian are currently used. The major example is the quasi Newton method [7] which builds up second derivative information by estimating the curvature along a sequence of search directions. However, the length of the sequence is proportional to the number of variables [11], which is problematic in high dimensional optimization.…”
Section: Introduction
mentioning
confidence: 99%
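The quoted remark about building up curvature along search directions can be illustrated with the standard BFGS update, which is the textbook quasi-Newton formula rather than anything specific to the cited papers; the function below is a minimal sketch assuming NumPy arrays.

```python
import numpy as np

def bfgs_update(B, s, y, curvature_tol=1e-10):
    """One BFGS update of the Hessian approximation B from the step s and the
    gradient change y = g_new - g_old, accumulating curvature along the
    direction of s."""
    sy = float(y @ s)
    if sy <= curvature_tol:
        # Skip the update if the curvature condition y^T s > 0 fails,
        # so B stays symmetric positive definite.
        return B
    Bs = B @ s
    return B - np.outer(Bs, Bs) / float(s @ Bs) + np.outer(y, y) / sy
```

Each call is a rank-two correction, so on the order of n updates are needed before curvature is captured in all of R^n, which is the difficulty in high-dimensional optimization that the quoted passage points out.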