2007
DOI: 10.1137/06065622x

Majorizing Functions and Convergence of the Gauss–Newton Method for Convex Composite Optimization

Abstract: We introduce a notion of quasi-regularity for points with respect to the inclusion F(x) ∈ C, where F is a nonlinear Fréchet differentiable function from R^v to R^m. When C is the set of minimum points of a convex real-valued function h on R^m and F satisfies the L-average Lipschitz condition of Wang, we use the majorizing function technique to establish the semi-local linear/quadratic convergence of sequences generated by the Gauss-Newton method (with quasi-regular initial points) for the convex composite optimization problem.
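In the convex composite setting described in the abstract, min_x h(F(x)) with h convex, the Gauss-Newton method is commonly formulated (following Burke and Ferris) so that each step minimizes the convexified model h(F(x_k) + F′(x_k)d) over a trust region ‖d‖ ≤ Δ. The sketch below is only a minimal illustration of that step under assumed choices, none of which come from the paper: h(y) = Σ_i max(y_i, 0)², so that C = {y : y ≤ 0} and solving F(x) ∈ C amounts to finding a point with F(x) ≤ 0; an ad hoc two-dimensional F; Δ = 1; and a generic NLP solver for the subproblem. It does not reproduce the paper's algorithm parameters or its majorizing-function analysis.

```python
# Hypothetical sketch of the Gauss-Newton step for convex composite
# optimization min_x h(F(x)), h convex.  F, h, and the trust-region
# radius delta are illustrative assumptions, not taken from the paper.
import numpy as np
from scipy.optimize import minimize

def h(y):
    # Convex outer function: squared violation of y <= 0, so the set of
    # minimizers is C = {y : y <= 0} and F(x) in C means F(x) <= 0.
    return np.sum(np.maximum(y, 0.0) ** 2)

def F(x):
    # Illustrative smooth inner map R^2 -> R^2.
    return np.array([x[0] ** 2 + x[1] - 1.0, x[0] - x[1] ** 2])

def J(x):
    # Jacobian (Frechet derivative) of F at x.
    return np.array([[2.0 * x[0], 1.0],
                     [1.0, -2.0 * x[1]]])

def gauss_newton_composite(x0, delta=1.0, tol=1e-10, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx, Jx = F(x), J(x)
        # Subproblem: minimize the convexified model h(F(x) + F'(x) d)
        # over the trust region ||d|| <= delta (written as a smooth
        # quadratic constraint delta^2 - d.d >= 0).
        sub = minimize(lambda d: h(Fx + Jx @ d), np.zeros_like(x),
                       method="SLSQP",
                       constraints=[{"type": "ineq",
                                     "fun": lambda d: delta ** 2 - d @ d}])
        d = sub.x
        x = x + d
        if np.linalg.norm(d) < tol:
            break
    return x

x_star = gauss_newton_composite([2.0, 2.0])
print(x_star, F(x_star))  # F(x_star) should be approximately <= 0
```

Here the subproblem is smooth because this particular h is continuously differentiable; with a nonsmooth convex h (e.g., a polyhedral norm) one would hand the subproblem to a convex programming solver instead.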

Cited by 52 publications (74 citation statements)
References 24 publications
“…We present, under a weak majorant condition, a local convergence analysis for the Gauss-Newton method for injective-overdetermined systems of equations in a Hilbert space setting. Our results provide, under the same information, a larger radius of convergence and tighter error estimates on the distances involved than in earlier studies such as [10,11,13,14,18]. Special cases and numerical examples are also included in this study.…”
supporting
confidence: 52%
“…, where x_0 ∈ D is an initial point and F′(x_n)^+ is the Moore-Penrose inverse of the linear operator F′(x_n) [7,9,12,14,16,18]. In the present paper we use the proximal Gauss-Newton method (made precise in Section 2, see (2.6)) for solving the penalized nonlinear least squares problem (1.1).…”
Section: Introduction
mentioning
confidence: 99%
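The iteration quoted above is presumably the classical Gauss-Newton step x_{n+1} = x_n − F′(x_n)^+ F(x_n) for an (injective-)overdetermined system F(x) = 0, with the Moore-Penrose inverse taking the place of an inverse Jacobian. The sketch below is a minimal illustration of that plain step using numpy's pinv; it is not the proximal variant used in the citing paper, and the residual map F, its Jacobian, and the starting point are assumptions chosen only for the example.

```python
# Minimal sketch of the Gauss-Newton iteration
#   x_{n+1} = x_n - F'(x_n)^+ F(x_n)
# with the Moore-Penrose inverse computed via numpy.linalg.pinv.
# F, dF, and the starting point are illustrative assumptions.
import numpy as np

def F(x):
    # Illustrative residual map R^2 -> R^3 (overdetermined system).
    return np.array([x[0] + x[1] - 2.0,
                     x[0] * x[1] - 1.0,
                     x[0] ** 2 - x[1]])

def dF(x):
    # Jacobian F'(x), a 3x2 matrix with full column rank near the solution.
    return np.array([[1.0, 1.0],
                     [x[1], x[0]],
                     [2.0 * x[0], -1.0]])

def gauss_newton(x0, tol=1e-12, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.pinv(dF(x)) @ F(x)   # F'(x_n)^+ F(x_n)
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

print(gauss_newton([2.0, 0.5]))  # for this illustrative F, the unique zero is x = (1, 1)
```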
“…The advantages of our analysis over earlier works such as [8,9,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43] are also shown under the same computational cost for the functions and constants involved. These advantages include: a large radius of convergence and more precise error estimates on the distances x_{n+1} − x_* for each n = 0, 1, 2, ….”
Section: Results
mentioning
confidence: 60%
“…and a clearer relationship between the majorant function (see (2.8)) and the associated least squares problem (1.1). These advantages are obtained because we use a center-type majorant condition (see (2.11)) for the computation of the inverses involved, which is more precise than the majorant condition used in [21,22,23,24,25,26,30,31,39,40,41,42,43]. Moreover, these advantages are obtained under the same computational cost, since, as we will see in Section 3 and Section 4, the computation of the majorant function requires the computation of the center-majorant function.…”
Section: Introduction
mentioning
confidence: 70%