Dedicated to Chris Paige for his fundamental contributions to the rounding error analysis of the Lanczos algorithm.

The Lanczos and conjugate gradient algorithms were introduced more than five decades ago as tools for numerical computation of dominant eigenvalues of symmetric matrices and for solving linear algebraic systems with symmetric positive definite matrices, respectively. Because of their fundamental relationship with the theory of orthogonal polynomials and Gauss quadrature of the Riemann-Stieltjes integral, the Lanczos and conjugate gradient algorithms represent very interesting general mathematical objects, with highly nonlinear properties which can be conveniently translated from algebraic language into the language of mathematical analysis, and vice versa. The algorithms are also very interesting numerically, since their numerical behaviour can be explained by an elegant mathematical theory, and the interplay between analysis and algebra is useful there too. Motivated by this view, the present contribution pays tribute to those who have made an understanding of the Lanczos and conjugate gradient algorithms possible through their pioneering work, and reviews recent solutions of several open problems that have also contributed to knowledge of the subject.
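To make the Gauss quadrature connection mentioned above concrete, here is a minimal sketch (not from the paper; the function name and test matrix are illustrative): k Lanczos steps on a symmetric matrix A with starting vector v produce a tridiagonal Jacobi matrix T_k whose eigenvalues are the nodes, and whose squared first eigenvector components are the weights, of the k-point Gauss quadrature for the Riemann-Stieltjes integral determined by A and v.

```python
import numpy as np

def lanczos(A, v, k):
    """Run k Lanczos steps; return the k-by-k Jacobi (tridiagonal) matrix."""
    n = len(v)
    alpha = np.zeros(k)
    beta = np.zeros(max(k - 1, 0))
    v = v / np.linalg.norm(v)
    v_prev = np.zeros(n)
    for j in range(k):
        w = A @ v
        alpha[j] = v @ w
        w = w - alpha[j] * v
        if j > 0:
            w = w - beta[j - 1] * v_prev   # three-term recurrence
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            v_prev, v = v, w / beta[j]
    T = np.diag(alpha)
    if k > 1:
        T += np.diag(beta, 1) + np.diag(beta, -1)
    return T

# Eigenvalues of T_k = Gauss quadrature nodes; weights = squared first
# components of T_k's normalized eigenvectors (measure normalized to 1 here).
rng = np.random.default_rng(0)
n = 50
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.linspace(1.0, 100.0, n)) @ Q.T   # symmetric positive definite
T = lanczos(A, rng.standard_normal(n), 8)
nodes, S = np.linalg.eigh(T)
weights = S[0, :] ** 2
```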
Abstract. The generalized minimum residual method (GMRES) [Y. Saad and M. Schultz, SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856-869] for solving linear systems Ax = b is implemented as a sequence of least squares problems involving Krylov subspaces of increasing dimensions. The most usual implementation is modified Gram-Schmidt GMRES (MGS-GMRES). Here we show that MGS-GMRES is backward stable. The result depends on a more general result on the backward stability of a variant of the MGS algorithm applied to solving a linear least squares problem, and uses other new results on MGS and its loss of orthogonality, together with an important but neglected condition number, and a relation between residual norms and certain singular values.

Key words. rounding error analysis, backward stability, linear equations, condition numbers, large sparse matrices, iterative solution, Krylov subspace methods, Arnoldi method, generalized minimum residual method, modified Gram-Schmidt, QR factorization, loss of orthogonality, least squares, singular values

AMS subject classifications. 65F10, 65F20, 65F25, 65F35, 65F50, 65G50, 15A12, 15A42

DOI. 10.1137/050630416

1. Introduction. Consider a system of linear algebraic equations Ax = b, where A is a given n × n (unsymmetric) nonsingular matrix and b a nonzero n-dimensional vector. Given an initial approximation x_0, one approach to finding x is to first compute the initial residual r_0 = b − Ax_0, use it to derive a sequence of Krylov subspaces K_k(A, r_0) ≡ span{r_0, Ar_0, ..., A^(k−1) r_0}, k = 1, 2, ..., in some way, and look for approximate solutions x_k ∈ x_0 + K_k(A, r_0). Various principles are used for constructing x_k, which determine various Krylov subspace methods for solving Ax = b. Similarly, Krylov subspaces for A can be used to obtain eigenvalue approximations or to solve other problems involving A.

Krylov subspace methods are useful for solving problems involving very large sparse matrices, since these methods use these matrices only for multiplying vectors, and the resulting Krylov subspaces frequently exhibit good approximation properties. The Arnoldi method [2] is a Krylov subspace method designed for solving the eigenproblem of unsymmetric matrices. The generalized minimum residual method (GMRES) [20] uses the Arnoldi iteration and adapts it for solving the linear system Ax = b. GMRES can be computationally more expensive per step than some other methods; see, for example, Bi-CGSTAB [24] and QMR [9] for unsymmetric A, and LSQR [16] for unsymmetric or rectangular A. However, GMRES is widely used for solving linear systems arising from discretization of partial differential equations, and
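The following is a hedged sketch of MGS-GMRES as described above: the Arnoldi iteration with modified Gram-Schmidt builds V_k and the (k+1)-by-k Hessenberg matrix H_k satisfying A V_k = V_{k+1} H_k, and each step solves the small least squares problem min ||beta e_1 − H_k y||, setting x_k = x_0 + V_k y. It is an illustration of the construction, not the implementation analyzed in the paper; all names and tolerances are assumptions.

```python
import numpy as np

def mgs_gmres(A, b, x0, tol=1e-10, maxit=50):
    """Minimal MGS-GMRES: Arnoldi with modified Gram-Schmidt plus a small
    least squares solve min ||beta*e1 - H_k y|| at each step (illustrative)."""
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    n = len(b)
    V = np.zeros((n, maxit + 1))
    H = np.zeros((maxit + 1, maxit))
    V[:, 0] = r0 / beta
    for k in range(maxit):
        w = A @ V[:, k]
        for j in range(k + 1):              # modified Gram-Schmidt loop
            H[j, k] = V[:, j] @ w
            w = w - H[j, k] * V[:, j]
        H[k + 1, k] = np.linalg.norm(w)
        if H[k + 1, k] > 0:
            V[:, k + 1] = w / H[k + 1, k]
        # least squares problem over the current Krylov subspace
        e1 = np.zeros(k + 2)
        e1[0] = beta
        y = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)[0]
        res = np.linalg.norm(e1 - H[:k + 2, :k + 1] @ y)
        if res <= tol * np.linalg.norm(b) or H[k + 1, k] == 0:
            return x0 + V[:, :k + 1] @ y    # converged or happy breakdown
    return x0 + V[:, :maxit] @ y
```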
Abstract. We consider the finite volume and the lowest-order mixed finite element discretizations of a second-order elliptic pure diffusion model problem. The first goal of this paper is to derive guaranteed and fully computable a posteriori error estimates which take into account an inexact solution of the associated linear algebraic system. We show that the algebraic error can be simply bounded using the algebraic residual vector. Much better results are, however, obtained using the complementary energy of an equilibrated Raviart-Thomas-Nédélec discrete vector field whose divergence is given by a proper weighting of the residual vector. The second goal of this paper is to construct efficient stopping criteria for iterative solvers such as the conjugate gradients, GMRES, or Bi-CGStab. We claim that the discretization error, implied by the given numerical method, and the algebraic one should be in balance, or, more precisely, that it is enough to solve the linear algebraic system to the accuracy which guarantees that the algebraic part of the error does not contribute significantly to the whole error. Our estimates allow a reliable and cheap comparison of the discretization and algebraic errors. One can thus use them to stop the iterative algebraic solver at the desired accuracy level, without performing an excessive number of unnecessary additional iterations. Under the assumption of the relative balance between the two errors, we also prove the efficiency of our a posteriori estimates, i.e., we show that they also represent a lower bound, up to a generic constant, for the overall energy error. A local version of this result is also stated. Several numerical experiments illustrate the theoretical results.

Key words. Second-order elliptic partial differential equation, finite volume method, mixed finite element method, a posteriori error estimates, iterative methods for linear algebraic systems, stopping criteria.
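A minimal sketch of the balancing idea, under loudly stated assumptions: the names (eta_disc, gamma) are hypothetical, and the algebraic error estimate below is only the crude residual-based bound mentioned in the abstract, not the paper's sharper equilibrated-flux bound. The solver stops once the estimated algebraic error no longer contributes significantly to the total error.

```python
import numpy as np

def cg_balanced_stop(A, b, eta_disc, gamma=0.1, maxit=1000):
    """Conjugate gradients stopped once a (crude) algebraic error estimate
    is dominated by the discretization error estimate eta_disc.
    The estimate used here is just the residual norm; the paper's sharper
    bound uses the complementary energy of an equilibrated
    Raviart-Thomas-Nedelec field built from the residual vector."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(maxit):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        eta_alg = np.sqrt(rs_new)          # crude residual-based bound
        if eta_alg <= gamma * eta_disc:    # algebraic error in balance
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```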
It is demonstrated that finite precision Lanczos and conjugate gradient computations for solving a symmetric positive definite linear system Ax = b, or computing the eigenvalues of A, behave very similarly to the exact algorithms applied to any of a certain class of larger matrices. This class consists of matrices which have many eigenvalues spread throughout tiny intervals about the eigenvalues of A. The width of these intervals is a modest multiple of the machine precision times the norm of A. This analogy appears to hold, provided only that the algorithms are not run for huge numbers of steps. Numerical examples are given to show that many of the phenomena observed in finite precision computations with A can also be observed in the exact algorithms applied to such a matrix.
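The construction below is an illustrative sketch of the class of matrices just described, not the paper's experiments: each eigenvalue of a diagonal test matrix A is replaced by a small cluster spread over an interval of width roughly machine precision times ||A||, and CG convergence histories on the two matrices can then be compared. The cluster size, interval width, and starting vector are simplifying assumptions.

```python
import numpy as np

def cg_residual_history(A, b, steps):
    """Plain conjugate gradients; returns the residual norms per step."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    hist = []
    for _ in range(steps):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new == 0:
            break
        hist.append(np.sqrt(rs_new))
        p = r + (rs_new / rs) * p
        rs = rs_new
    return hist

# Blurred spectrum: each eigenvalue of A becomes a tiny cluster of width
# ~ machine precision times ||A||, giving the larger matrix A_hat.
eigs = np.linspace(0.1, 100.0, 30)
width = 1e-14 * eigs.max()
eigs_hat = np.concatenate([lam + width * np.linspace(-0.5, 0.5, 5)
                           for lam in eigs])
A = np.diag(eigs)            # original 30-by-30 matrix
A_hat = np.diag(eigs_hat)    # larger 150-by-150 matrix with clustered eigenvalues
hist = cg_residual_history(A, np.ones(len(eigs)), 25)
hist_hat = cg_residual_history(A_hat, np.ones(len(eigs_hat)), 25)
```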