2009
DOI: 10.3846/1392-6292.2009.14.179-186
Some Iterative Regularized Methods for Highly Nonlinear Least Squares Problems

Abstract: This report treats numerical methods for highly nonlinear least squares problems for which procedural and rounding errors are unavoidable, e.g. those arising in the development of various nonlinear system identification techniques based on an input-output representation of the model, such as the training of artificial neural networks. Let F be a Fréchet-differentiable operator acting between Hilbert spaces H1 and H2 and such that the range of its first derivative is not necessarily closed. For solving the eq…

Cited by 1 publication (2 citation statements). References 11 publications.
“…According to Theorem 1, the theoretical asymptotic reduction rate of the error is guaranteed to be at least β for β < 1 sufficiently large, and the numerical results confirm the validity of the estimate: for β ∈ {0.9, 0.8, 0.7} the observed reduction rate of the error approaches the theoretical value. Our theorem does not cover the behaviour of the method if the parameter β is too small (less than q defined in (8), which is unfortunately unknown since the coefficient µ0 is not known). From the numerical results we see that for the test problem convergence also takes place for relatively small values of β (namely β ∈ {0.4, 0.2}), but the observed reduction rate is then no longer related to the value of β, and a too-small value of β does not improve the accuracy of the final result.…”
Section: Numerical Experiments
confidence: 92%
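The behaviour described above can be sketched numerically. The following is a minimal, hypothetical illustration (not the paper's exact algorithm): a regularized Gauss–Newton iteration whose regularization parameter decays geometrically as α_k = α₀·βᵏ, mirroring the role that the parameter β plays in the quoted experiments. The test operator F, the starting point, and the value of α₀ are all choices made for this demo.

```python
import numpy as np

# Hypothetical test problem: F(x) = 0 has the exact solution x* = (1, 1).
def F(x):
    return np.array([x[0]**2 - 1.0, x[0] * x[1] - 1.0])

def J(x):
    # Analytic Jacobian of F.
    return np.array([[2.0 * x[0], 0.0],
                     [x[1],       x[0]]])

def regularized_gauss_newton(x0, alpha0=0.5, beta=0.7, iters=30):
    """Gauss-Newton with Tikhonov regularization alpha_k = alpha0 * beta**k."""
    x = np.asarray(x0, dtype=float)
    alpha = alpha0
    for _ in range(iters):
        Jx = J(x)
        # Regularized normal equations: (J^T J + alpha I) step = -J^T F(x)
        A = Jx.T @ Jx + alpha * np.eye(x.size)
        step = np.linalg.solve(A, -Jx.T @ F(x))
        x = x + step
        alpha *= beta  # geometric decay of the regularization parameter
    return x

x_star = np.array([1.0, 1.0])
x_hat = regularized_gauss_newton([1.5, 0.5])
```

On this well-posed toy problem the iterate approaches x*; on genuinely ill-posed problems the choice of β governs the achievable reduction rate, as the quoted statement discusses.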
“…In this paper the data are assumed to be exact, and we consider the question of finding a solution of (1) in the least squares sense when F*(x)F(x) is not necessarily continuously invertible for all x in a sufficiently large neighbourhood of the exact solution x*. This paper is a continuation of the papers [8,10,13,14] and treats approximate Gauss-Newton-type methods for solving nonlinear ill-posed problems for which procedural and rounding errors are unavoidable. Frequently, the use of finite-difference approximations to the derivatives gives rise to an inexact method.…”
Section: Introduction
confidence: 99%
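The "inexact method" remark above can be made concrete with a short sketch: replacing the analytic derivative by a forward finite difference introduces a controlled procedural error of order h. The operator F, the evaluation point, and the step size h below are hypothetical choices for illustration.

```python
import numpy as np

# Hypothetical nonlinear operator, same style as a small test problem.
def F(x):
    return np.array([x[0]**2 - 1.0, x[0] * x[1] - 1.0])

def J_exact(x):
    # Analytic Jacobian of F, used here only to measure the error.
    return np.array([[2.0 * x[0], 0.0],
                     [x[1],       x[0]]])

def J_fd(x, h=1e-7):
    """Forward-difference approximation of the Jacobian of F at x."""
    x = np.asarray(x, dtype=float)
    Fx = F(x)
    cols = []
    for i in range(x.size):
        e = np.zeros(x.size)
        e[i] = h
        cols.append((F(x + e) - Fx) / h)  # i-th column: (F(x + h e_i) - F(x)) / h
    return np.column_stack(cols)

x = np.array([1.3, 0.8])
err = np.max(np.abs(J_fd(x) - J_exact(x)))
```

The approximation error is roughly of order h (truncation) plus machine epsilon divided by h (rounding), which is exactly the kind of unavoidable procedural error the methods in the paper are designed to tolerate.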