2013
DOI: 10.1137/120899194

Abstract Newtonian Frameworks and Their Applications

Abstract: We unify and extend some Newtonian iterative frameworks developed earlier in the literature, which results in a collection of convenient tools for local convergence analysis of various algorithms under various sets of assumptions, including strong metric regularity, semistability, or upper-Lipschitz stability, the latter allowing for nonisolated solutions. These abstract schemes are further applied for deriving sharp local convergence results for some constrained optimization algorithms under the reduced smooth…

Cited by 9 publications (8 citation statements); references 28 publications.
“…Under the conditions of semistability and hemistability (both conditions are satisfied if Robinson's strong regularity holds at the solution), the author proved superlinear convergence of Newton's method (quadratic convergence if f is C^{1,1}). We note that these results were generalized by Izmailov and Solodov [25] and Izmailov and Kurennoy [26] to the case f(x) + F(x) ∋ 0, with a smooth single-valued map f and a set-valued mapping F, by using an inexact Josephy–Newton method. We note that in the papers [10,25] the authors considered a single-valued approximation, whereas in the current paper we allow the approximation to be set-valued.…”
Section: Commentary
confidence: 68%
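The quadratic convergence mentioned above can be illustrated on the smooth, single-valued special case (F ≡ 0), where the Josephy–Newton scheme reduces to the classical Newton iteration. The following is a minimal sketch, not the paper's method; the test problem and function names are illustrative.

```python
# Classical Newton iteration for a smooth scalar equation f(x) = 0.
# Under standard regularity (f'(x*) nonzero, f' Lipschitz near x*),
# the iterates converge quadratically from a nearby starting point.

def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Return an approximate root of f and the iterate history."""
    x = x0
    history = [x]
    for _ in range(max_iter):
        step = f(x) / df(x)   # Newton step: solve f'(x) * d = -f(x)
        x = x - step
        history.append(x)
        if abs(step) < tol:
            break
    return x, history

# Illustrative problem: f(x) = x^3 - 2, with root 2**(1/3).
root, hist = newton(lambda x: x**3 - 2, lambda x: 3 * x**2, x0=1.5)
```

Quadratic convergence shows up as the error roughly squaring at each step, so only a handful of iterations are needed to reach machine precision.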
“…The case of single-valued approximations A_k : X × X → Y of the function f was studied in [10,25,26] and [12, Section 6C]. It is well known that specific choices of (A_k)_{k ∈ N_0} lead to various methods for solving (1.3).…”
Section: Instead of Considering a General Inclusion, Find
confidence: 99%
“…A problem with direct optimization techniques is the number of constraints, which can be large for applied problems. For example, in sequential linear programming [1], a widely used direct optimization method [2][3][4][5], the problem is formulated so that it can be solved by the simplex algorithm. This requires working not only with the main constraints but also with additional constraints on the upper and lower limits of the variables, which can increase the total number of constraints significantly for problems with a large number of variables [6]. Similarly, in sequential quadratic programming [7][8][9], the number of constraints is at least equal to the sum of the number of design variables and the Lagrange multipliers, which can make the optimization procedure very time-consuming.…”
Section: Introduction
confidence: 99%
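The sequential linear programming idea described above (linearize, then solve an LP within bounds) can be sketched in one dimension, where the LP over box constraints reduces to picking an interval endpoint, so no simplex solver is needed. This is a hedged toy sketch; the problem data, function names, and trust-region update are illustrative assumptions, not from the cited works.

```python
# Toy 1-D sequential linear programming: minimize f on [lo, hi] by
# repeatedly linearizing f at the current point and moving to the best
# endpoint of the trust-region box, shrinking the trust region on failure.

def slp_1d(f, grad, x0, lo, hi, delta=0.5, iters=30):
    """Return an approximate constrained minimizer of f on [lo, hi]."""
    x = x0
    for _ in range(iters):
        g = grad(x)
        # Trust region intersected with the bound constraints.
        a = max(lo, x - delta)
        b = min(hi, x + delta)
        # Linear model f(x) + g*(x_new - x) is minimized at an endpoint.
        x_new = a if g > 0 else b
        if f(x_new) >= f(x):
            delta *= 0.5   # no improvement: shrink the trust region
        else:
            x = x_new
    return x

# Illustrative problem: minimize (x - 3)^2 subject to 0 <= x <= 2;
# the constrained minimizer is the boundary point x = 2.
x_star = slp_1d(lambda x: (x - 3.0) ** 2, lambda x: 2 * (x - 3.0),
                x0=0.0, lo=0.0, hi=2.0)
```

Even in this toy form, the bound constraints enter the subproblem explicitly, which is exactly the source of the constraint growth the quoted passage complains about in higher dimensions.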
“…We refer to [3] and [13,25] for the state of the art on the global and local convergence properties, respectively, of augmented Lagrangian (Aug-L) methods. For many other issues, see [5].…”
Section: Introduction
confidence: 99%