2014
DOI: 10.1137/130931588

Preconditioned Iterative Methods for Solving Linear Least Squares Problems

Abstract. New preconditioning strategies for solving m × n overdetermined, large, and sparse linear least squares problems with the conjugate gradient for least squares (CGLS) method are described. First, direct preconditioning of the normal equations by the balanced incomplete factorization (BIF) for symmetric positive definite matrices is studied, and a new breakdown-free strategy is proposed. Preconditioning based on the incomplete LU factors of an n × n submatrix of the system matrix is our second approach…
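As background for the abstract, the following is a minimal textbook-style sketch of CGLS (CG applied to the normal equations A^T A x = A^T b) with a caller-supplied preconditioner application; the names `cgls` and `M_inv` are illustrative, and the preconditioner here is a generic callable, not the paper's BIF or incomplete-LU constructions.

```python
import numpy as np

def cgls(A, b, M_inv=None, tol=1e-10, maxiter=500):
    """Minimize ||b - A x||_2 by CG on the normal equations.

    M_inv, if given, applies a (hypothetical) preconditioner for
    A^T A: any callable y -> M^{-1} y. Note A^T A is never formed.
    """
    m, n = A.shape
    x = np.zeros(n)
    r = b - A @ x                 # residual in R^m
    s = A.T @ r                   # normal-equations residual in R^n
    z = M_inv(s) if M_inv else s
    p = z.copy()
    gamma = s @ z
    for _ in range(maxiter):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        if np.linalg.norm(s) < tol:
            break
        z = M_inv(s) if M_inv else s
        gamma_new = s @ z
        p = z + (gamma_new / gamma) * p
        gamma = gamma_new
    return x
```

A simple diagonal (Jacobi) preconditioner for the normal equations can be supplied as `M_inv = lambda y: y / np.sum(A * A, axis=0)`; the incomplete factorizations studied in the paper play the same role but are far more effective on ill-conditioned sparse problems.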

Cited by 19 publications (18 citation statements)
References 55 publications
“…Introduction. In recent years, a number of methods have been proposed for preconditioning sparse linear least-squares problems; a brief overview with a comprehensive list of references is included in the introduction to the paper of Bru et al. [4]. The recent study of Gould and Scott [20,21] reviewed many of these methods (specifically those for which software has been made available) and then tested and compared their performance using a range of examples coming from practical applications.…”
mentioning
confidence: 99%
“…Since A^T A is much denser than A, many entries must be dropped to obtain a sparse preconditioner, and instabilities may appear. To prevent this, two additional strategies are used in [11] to improve robustness. The first changes the way the pivots are computed.…”
Section: BIF for Least Squares Problems
mentioning
confidence: 99%
“…Instead of computing them as in (10), that is, as s r_i, they are computed as z_i^T A^T A z_i. The second strategy follows the ideas of Tismenetsky [18]: additional entries are stored and used only during the computation, then discarded, which yields a better preconditioner; see Algorithm 2.2 in [11]. We refer to this preconditioner as lsBIF.…”
Section: BIF for Least Squares Problems
mentioning
confidence: 99%
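The pivot formula quoted above has a simple matrix-free reading: z_i^T (A^T A) z_i = ||A z_i||_2^2, so each pivot can be evaluated from products with A alone and is automatically nonnegative when A has full column rank. A small illustrative check (random data, not the paper's algorithm):

```python
import numpy as np

# Hypothetical illustration: the pivot z^T (A^T A) z equals ||A z||^2,
# so it never needs the (much denser) normal matrix A^T A and cannot
# come out negative for full-column-rank A.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))
z = rng.standard_normal(3)

pivot_direct = z @ (A.T @ A) @ z       # forms A^T A explicitly
pivot_matfree = np.dot(A @ z, A @ z)   # ||A z||^2, matrix-free

assert np.isclose(pivot_direct, pivot_matfree)
assert pivot_matfree >= 0.0
```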
“…The main contribution of this paper is that we propose a genuinely Jacobian-free preconditioner construction technique for solving nonlinear least squares (NLS) problems. Compared with other existing preconditioning techniques [1,3,5,35], our method requires neither the computation of the Jacobian matrix at any iteration step, nor the sparsity or special structure of the Jacobian matrix, nor a subroutine for the gradient computation [19]. It only requires the objective function F(x) and subroutines to compute J(x_k)v and J(x_k)^T w. In other words, our implicit preconditioner construction can be used to determine preconditioners for all these existing Jacobian-free methods for nonlinear equations and NLS problems.…”
Section: Introduction
mentioning
confidence: 99%
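The product J(x_k)v mentioned in this statement is the standard Jacobian-free building block: it can be approximated from function evaluations alone by a forward difference, J(x)v ≈ (F(x + εv) − F(x))/ε. A sketch under that assumption (the residual F here is a toy example with a known Jacobian; note the transposed product J(x)^T w has no comparable finite-difference trick and typically needs an adjoint or automatic differentiation):

```python
import numpy as np

def jacvec_fd(F, x, v, eps=1e-7):
    """Approximate J(x) v by a forward difference: no Jacobian matrix
    (or knowledge of its sparsity) is formed, only two F evaluations."""
    return (F(x + eps * v) - F(x)) / eps

# Toy residual F(x) = (x0^2 - 1, x0*x1) with exact Jacobian
# J = [[2 x0, 0], [x1, x0]], used only to verify the matrix-free product.
F = lambda x: np.array([x[0] ** 2 - 1.0, x[0] * x[1]])
x = np.array([2.0, 3.0])
v = np.array([1.0, -1.0])

J_exact = np.array([[2 * x[0], 0.0], [x[1], x[0]]])
assert np.allclose(jacvec_fd(F, x, v), J_exact @ v, atol=1e-5)
```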