2009
DOI: 10.1007/s11590-009-0132-y

A nonmonotone truncated Newton–Krylov method exploiting negative curvature directions, for large scale unconstrained optimization

Abstract: We propose a new truncated Newton method for large scale unconstrained optimization, where a Conjugate Gradient (CG)-based technique is adopted to solve Newton's equation. In the current iteration, the Krylov method computes a pair of search directions: the first approximates the Newton step of the quadratic convex model, while the second is a suitable negative curvature direction. A test based on the quadratic model of the objective function is used to select the most promising between the two search directions…
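
As a rough illustration of the selection test described in the abstract, the sketch below compares the two candidate directions through the local quadratic model and keeps the more promising one. This is only a schematic reading of the abstract, not the paper's actual criterion; all names (`select_direction`, `hess_vec`) are illustrative.

```python
import numpy as np

def select_direction(f_val, g, hess_vec, newton_dir, neg_curv_dir):
    """Pick between a truncated Newton step and a negative curvature
    direction by comparing values of the local quadratic model
    m(d) = f + g'd + 0.5 d'Hd.  Schematic sketch only; the paper's
    test may differ.  `hess_vec` is a Hessian-vector product callable."""
    def model(d):
        return f_val + g @ d + 0.5 * (d @ hess_vec(d))
    return newton_dir if model(newton_dir) <= model(neg_curv_dir) else neg_curv_dir

# Tiny example with an explicit (small) indefinite Hessian:
H = np.array([[2.0, 0.0],
              [0.0, -1.0]])
g = np.array([1.0, 1.0])
d = select_direction(0.0, g, lambda v: H @ v,
                     newton_dir=np.array([-0.5, 0.0]),
                     neg_curv_dir=np.array([0.0, -1.0]))
```
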

Cited by 28 publications (20 citation statements) · References 20 publications
Citation types: 1 supporting, 19 mentioning, 0 contrasting
Citing publications span 2011–2024
“…For the ridge filter, problem (25) was solved by employing the truncated-Newton method reported in [7], terminating the algorithm when the sup-norm of the gradient of the objective function was less than or equal to $10^{-5}$. Table 1: Comparison between the values of the Adjusted Rand Index obtained by applying Single Linkage (SL), Expectation-Maximization for Gaussian mixture Models (EMGM) and Kernel K-Means (KKM) to the original data and to the filtered data.…”
Section: Results (mentioning)
confidence: 99%
“…Since $y \notin C$, $\xi \in (0, 1]$, and taking into account (7) of Lemma 1, we obtain that $\|x - y\|^2 - \|x - \tilde{y}\|^2 > 0$. Lemma 3.…”
Section: Properties of the Approximating Problem (mentioning)
confidence: 92%
“…In 1986, Grippo et al in [15] applied a nonmonotone globalization technique to Newton's method for solving unconstrained optimization problems with some success. Since then, a variety of proposals related to nonmonotone globalization techniques have been reported in the literature, see for example [5,7,18,19,25,26], among others. Nonmonotone filter methods have been used to promote global and fast local convergence for sequential quadratic programming algorithms [9,14].…”
Section: Introduction (mentioning)
confidence: 99%
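
For context, the nonmonotone globalization technique of Grippo et al. cited above replaces the usual Armijo comparison with $f(x_k)$ by a comparison with the maximum of the last $M$ function values, so occasional increases of the objective are tolerated. A minimal backtracking sketch under that rule, assuming the recent-value history `f_hist` is maintained by the caller (parameter names are illustrative):

```python
import numpy as np

def nonmonotone_armijo(f, x, d, g, f_hist, gamma=1e-4, delta=0.5, max_back=30):
    """Backtracking line search with the nonmonotone acceptance rule of
    Grippo, Lampariello and Lucidi (1986): the trial point is compared
    with the worst of the last M function values (in f_hist) rather than
    with f(x) alone.  Sketch only; parameter names are illustrative."""
    f_ref = max(f_hist)          # reference value: worst recent iterate
    slope = g @ d                # directional derivative, assumed < 0
    alpha = 1.0
    for _ in range(max_back):
        if f(x + alpha * d) <= f_ref + gamma * alpha * slope:
            return alpha         # nonmonotone sufficient decrease holds
        alpha *= delta           # shrink the step and retry
    return alpha
```
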
“…Hence, similarly to Proposition 5.1, we can apply without loss of generality the linear transformation in (13) to $F$, in order to obtain the simplified hypersurface $\hat{F}$ in (14), with centre $(x^*, x_0^*)^T$ in (15). Then, we carry on the proof by induction, recursively defining the lines $\hat{\ell}_i$, $i = 1, \ldots$…”
(mentioning)
confidence: 99%
“…As an example, in [11,12] CG-based methods are used to yield superlinear convergence to an optimal solution of large scale unconstrained minimization problems. Within truncated Newton algorithms, CG-based methods are also used to compute negative curvature directions for the objective function [13][14][15]. These directions turn out to be useful in proving the convergence of the algorithm to stationary points, along with the satisfaction of second-order optimality conditions.…”
(mentioning)
confidence: 99%
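
To make the role of negative curvature concrete: a direction $d$ with $d^T H d < 0$ lets the algorithm make progress at saddle points, where the gradient alone may give no useful descent, which is why such directions help establish convergence to second-order stationary points. The cited truncated Newton methods extract these directions as a by-product of the CG/Lanczos iteration; the dense sketch below, based on a full eigendecomposition, is for illustration only and would not scale to large problems.

```python
import numpy as np

def negative_curvature_direction(H, g):
    """Return d with d' H d < 0, oriented so that g' d <= 0, or None if
    H is positive semidefinite.  Dense illustration only: the cited
    large-scale methods obtain such directions from the CG/Lanczos
    process instead of an explicit eigendecomposition."""
    w, V = np.linalg.eigh(H)         # eigenvalues in ascending order
    if w[0] >= 0:
        return None                  # no negative curvature available
    d = V[:, 0]                      # eigenvector of most negative eigenvalue
    return -d if g @ d > 0 else d    # orient as a (weak) descent direction
```
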