1982
DOI: 10.1145/355984.355989

LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares

Abstract: An iterative method is given for solving Ax = b and min ||Ax - b||_2, where the matrix A is large and sparse. The method is based on the bidiagonalization procedure of Golub and Kahan. It is analytically equivalent to the standard method of conjugate gradients, but possesses more favorable numerical properties. Reliable stopping criteria are derived, along with estimates of standard errors for x and the condition number of A. These are used in the FORTRAN implementation of the method, subroutine LSQR. Numerica…
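The problem statement in the abstract maps directly onto modern library interfaces. As a minimal sketch (not the paper's FORTRAN subroutine), SciPy's scipy.sparse.linalg.lsqr exposes the same backward-error stopping tolerances, condition-number estimate, and standard-error estimates the abstract describes; the matrix and data below are invented for illustration:

```python
# Minimal sketch of LSQR via SciPy's implementation; A and b are synthetic.
import numpy as np
from scipy.sparse import random as sprandom
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
A = sprandom(500, 80, density=0.02, random_state=0, format="csr")  # large, sparse A
x_true = rng.standard_normal(80)
b = A @ x_true + 1e-4 * rng.standard_normal(500)   # noisy right-hand side

# atol/btol implement the paper's backward-error stopping criteria;
# conlim stops early if the condition-number estimate grows too large.
x, istop, itn, r1norm, r2norm, anorm, acond, arnorm, xnorm, var = lsqr(
    A, b, atol=1e-8, btol=1e-8, conlim=1e8, calc_var=True
)
print(f"stopped with flag {istop} after {itn} iterations")
print(f"||r|| = {r1norm:.3e}, cond(A) estimate = {acond:.3e}")
print("standard-error proxy (est. diag of (A^T A)^-1):", var[:3])
```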

Cited by 3,848 publications (2,529 citation statements)
References 12 publications
“…4(a), a schematic representation of the parametrization used in this study and described above is shown, for a pair of stations A and B and two events i and j. We used the least-squares algorithm from Paige & Saunders (1982) with lateral smoothing and norm damping to derive tomographic maps at several periods. The amount of smoothing and damping is subjective, chosen as a compromise that still explains the data (Deschamps et al. 2008).…”
Section: Methods (mentioning)
confidence: 99%
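The damping and smoothing this excerpt mentions map naturally onto LSQR's interface. Below is a minimal sketch, not the cited study's actual setup: norm damping uses SciPy lsqr's damp argument, and "lateral smoothing" is imposed by stacking a first-difference operator under a synthetic ray-path matrix G; all sizes and weights are invented for illustration.

```python
# Damped, smoothed least-squares sketch: min ||Gm - d||^2 + mu^2||Lm||^2 + lam^2||m||^2
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import lsqr

n = 100                                   # number of model cells (illustrative)
rng = np.random.default_rng(1)
G = sparse.random(300, n, density=0.05, random_state=1, format="csr")  # ray paths
d = rng.standard_normal(300)              # travel-time residuals (synthetic)

# First-difference "lateral smoothing" operator on a 1-D chain of cells.
L = sparse.diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n))

mu = 0.5                                  # smoothing weight (subjective, as the quote notes)
lam = 0.1                                 # norm-damping weight
A_aug = sparse.vstack([G, mu * L]).tocsr()
b_aug = np.concatenate([d, np.zeros(n - 1)])

# damp=lam adds lam^2 * ||m||^2 to the objective, i.e. norm damping.
m = lsqr(A_aug, b_aug, damp=lam)[0]
```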
“…We solve the linear system of equations in one step, implementing the LSQR algorithm [Paige and Saunders 1982] to determine the source spectra, site responses, and attenuation characteristics simultaneously. To eliminate the undetermined degree of freedom [Andrews 1986], one or several appropriate sites can be taken as a reference condition by setting either the amplification of one station, or the average over several stations, to be approximately one, irrespective of frequency.…”
Section: Generalized Inversion Technique (mentioning)
confidence: 99%
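One common way to impose such a reference-site condition is to append a heavily weighted constraint row to the sparse system before calling LSQR. The sketch below assumes a layout of unknowns (log source terms followed by log site terms) that is purely illustrative, not the cited papers' exact formulation.

```python
# Pin one (log-)site term to zero, i.e. amplification ~ 1, via a weighted row.
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import lsqr

n_src, n_site = 20, 10                   # unknowns: log source spectra + log site terms
n_obs = 400
rng = np.random.default_rng(2)
A = sparse.random(n_obs, n_src + n_site, density=0.08, random_state=2, format="csr")
b = rng.standard_normal(n_obs)

ref_site = 0                             # chosen reference station (assumption)
w = 1e3                                  # large weight enforces the constraint
constraint = sparse.lil_matrix((1, n_src + n_site))
constraint[0, n_src + ref_site] = w      # w * log_amp(ref_site) = 0
A_c = sparse.vstack([A, constraint.tocsr()])
b_c = np.concatenate([b, [0.0]])

sol = lsqr(A_c, b_c)[0]                  # ref site term driven to ~0 (amplification ~ 1)
```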
“…For example, the conjugate gradient (CG) method on the normal equations leads to the min-length solution (see Paige and Saunders [20]). In practice, CGLS [16] or LSQR [21] is preferable: both are equivalent to applying CG to the normal equations in exact arithmetic, but they are numerically more stable. Other Krylov subspace methods such as the CS method [12] and LSMR [10] can solve (1.1) as well.…”
Section: Least Squares Solvers (mentioning)
confidence: 99%
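The min-length property is easy to check numerically. The following sketch (illustrative sizes only) builds an underdetermined system and compares SciPy's lsqr iterate with the pseudoinverse solution, which is the minimum-norm least-squares solution:

```python
# LSQR started from zero stays in range(A^T), so its limit is the
# minimum-norm solution, matching the pseudoinverse on this problem.
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 80))        # more unknowns than equations
b = rng.standard_normal(30)

x_lsqr = lsqr(A, b, atol=1e-12, btol=1e-12)[0]
x_min = np.linalg.pinv(A) @ b            # minimum-norm least-squares solution

print(np.linalg.norm(x_lsqr - x_min))    # tiny: same min-length solution
```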
“…Importantly for large-scale applications, the preconditioning process is embarrassingly parallel, and it automatically speeds up with sparse matrices and fast linear operators. LSQR [21] or the Chebyshev semi-iterative (CS) method [12] can be used at the iterative step to compute the min-length solution within just a few iterations. We show that the latter method is preferred on clusters with high communication cost.…”
Section: Introduction (mentioning)
confidence: 99%
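A hedged sketch of this sketch-and-precondition pattern: form a Gaussian sketch of A, factor it, and use the factor as a right preconditioner for LSQR (in the spirit of LSRN). The oversampling factor and problem sizes below are assumptions, and the dense Gaussian sketch stands in for whatever sketch the cited work actually uses; each sketch row is independent, which is what makes the preconditioning step embarrassingly parallel.

```python
# Randomized right-preconditioned LSQR: solve min ||(A N) y - b||, x = N y.
import numpy as np
from scipy.sparse.linalg import lsqr, LinearOperator

rng = np.random.default_rng(4)
m, n = 2000, 50
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

s = 2 * n                                 # sketch size, ~2x oversampling (assumption)
G = rng.standard_normal((s, m))           # Gaussian sketch; rows independent
GA = G @ A
_, Sigma, Vt = np.linalg.svd(GA, full_matrices=False)
N = Vt.T / Sigma                          # right preconditioner N = V * Sigma^{-1}

# A @ N has nearly orthonormal columns, so LSQR needs only a few iterations.
AN = LinearOperator((m, n), matvec=lambda v: A @ (N @ v),
                    rmatvec=lambda u: N.T @ (A.T @ u))
y = lsqr(AN, b, atol=1e-10, btol=1e-10)[0]
x = N @ y                                 # recover the solution of min ||Ax - b||
print("residual norm:", np.linalg.norm(A @ x - b))
```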