In this paper we propose a new method for the iterative computation of a few of the extremal eigenvalues of a symmetric matrix and their associated eigenvectors. The method is based on an old and almost unknown method of Jacobi. Jacobi's approach, combined with Davidson's method, leads to a new method that has improved convergence properties and that may be used for general matrices. We also propose a variant of the new method that may be useful for the computation of nonextremal eigenvalues as well.
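As a concrete illustration of the kind of iteration this abstract describes, here is a minimal NumPy sketch of a Jacobi-Davidson style loop for the largest eigenvalue of a symmetric matrix. It is a sketch under stated assumptions, not the authors' algorithm: the function name `jd_largest`, the dense least-squares solve of the correction equation, and the stopping parameters are illustrative choices only.

```python
# Minimal Jacobi-Davidson style sketch for the largest eigenvalue of a
# symmetric matrix (dense NumPy arrays; illustrative, not the paper's code).
import numpy as np

def jd_largest(A, tol=1e-10, max_iter=50):
    n = A.shape[0]
    V = np.random.default_rng(0).standard_normal((n, 1))
    V /= np.linalg.norm(V)
    for _ in range(max_iter):
        # Rayleigh-Ritz extraction on the current search subspace span(V).
        H = V.T @ A @ V
        theta, s = np.linalg.eigh(H)
        theta, s = theta[-1], s[:, -1]          # largest Ritz pair
        u = V @ s                               # Ritz vector (unit norm)
        r = A @ u - theta * u                   # residual
        if np.linalg.norm(r) < tol:
            return theta, u
        # Correction equation, solved here with a dense least-squares solve
        # for simplicity (in practice: an iterative solver, preconditioned):
        #   (I - u u^T)(A - theta I)(I - u u^T) t = -r,   t orthogonal to u
        P = np.eye(n) - np.outer(u, u)
        M = P @ (A - theta * np.eye(n)) @ P
        t, *_ = np.linalg.lstsq(M, -r, rcond=None)
        t -= u * (u @ t)                        # keep the correction orthogonal to u
        # Expand and re-orthonormalize the search subspace.
        V = np.hstack([V, t.reshape(-1, 1)])
        V, _ = np.linalg.qr(V)
    return theta, u

if __name__ == "__main__":
    A = np.diag(np.arange(1.0, 101.0))
    A[0, 1] = A[1, 0] = 0.5                     # small off-diagonal coupling
    lam, vec = jd_largest(A)
    print(lam)                                  # close to the largest eigenvalue
```

The expanding search subspace and the projected correction equation are the two ingredients the abstract alludes to; everything else (solver for the correction equation, restart policy) is left open here.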
Recently the Jacobi-Davidson subspace iteration method has been introduced as a new powerful technique for solving a variety of eigenproblems. In this paper we will further exploit this method and enhance it with several techniques so that practical and accurate algorithms are obtained. We will present two algorithms, JDQZ for the generalized eigenproblem and JDQR for the standard eigenproblem, that are based on the iterative construction of a (generalized) partial Schur form. The algorithms are suitable for the efficient computation of several (even multiple) eigenvalues and the corresponding eigenvectors near a user-specified target value in the complex plane. An attractive property of our algorithms is that explicit inversion of operators is avoided, which makes them well suited to very large sparse matrix problems. We will show how effective restarts can be incorporated in the Jacobi-Davidson methods, very similar to the implicit restart procedure for the Arnoldi process. Then we will discuss the use of preconditioning, and, finally, we will illustrate the behavior of our algorithms by a number of well-chosen numerical experiments.
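To make the central object concrete: a partial Schur form with eigenvalues near a target consists of an orthonormal basis Q_k and a small upper triangular T_k with A Q_k = Q_k T_k. JDQR builds this iteratively; for a small dense matrix the same object can be inspected with a dense Schur decomposition plus eigenvalue selection, as in the following sketch (the target, radius, and matrix below are illustrative, and the SciPy call is a stand-in, not the JDQR algorithm).

```python
# Partial Schur form near a target, illustrated with a dense Schur
# decomposition on a small random matrix (illustrative stand-in only).
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50))
target, radius = 0.5 + 0.5j, 1.0

# Complex Schur form A = Q T Q*, with eigenvalues within `radius` of the
# target moved to the top-left corner of T.
T, Q, sdim = schur(A, output='complex', sort=lambda w: abs(w - target) < radius)

# The leading sdim columns of Q span an invariant subspace: A Qk = Qk Tk.
Qk, Tk = Q[:, :sdim], T[:sdim, :sdim]
print(sdim, np.linalg.norm(A @ Qk - Qk @ Tk))   # residual near machine precision
```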
In this paper we will show how the Jacobi-Davidson iterative method can be used to solve generalized eigenproblems. Ideas similar to those for the standard eigenproblem are used, but the projections that are required to reduce the given problem to a small, manageable size need more attention. We show that by proper choices for the projection operators quadratic convergence can be achieved. The advantage of our approach is that none of the involved operators needs to be inverted. It turns out that similar projections can be used for the iterative approximation of selected eigenvalues and eigenvectors of polynomial eigenvalue equations. This approach has already been used with great success for the solution of quadratic eigenproblems associated with acoustic problems.
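The projection step for a pencil can be written down in a few lines. The following is a minimal sketch, assuming dense NumPy arrays: the pencil (A, B) is reduced with a search basis V and a test basis W to a small generalized eigenproblem that is solved directly. The choice of W is exactly the point the abstract says needs care; here W = V is used only as a placeholder, and the function name `projected_pencil` is hypothetical.

```python
# Petrov-Galerkin reduction of a generalized eigenproblem A x = lambda B x
# to a small pencil (illustrative sketch; the projector choices analyzed in
# the paper are not reproduced here).
import numpy as np
from scipy.linalg import eig

def projected_pencil(A, B, V, W):
    """Solve the reduced problem (W* A V) y = theta (W* B V) y."""
    Ak = W.conj().T @ A @ V
    Bk = W.conj().T @ B @ V
    thetas, Y = eig(Ak, Bk)
    return thetas, V @ Y                    # approximate eigenpairs of (A, B)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, k = 200, 10
    A = rng.standard_normal((n, n))
    B = np.eye(n) + 0.1 * rng.standard_normal((n, n))
    V, _ = np.linalg.qr(rng.standard_normal((n, k)))   # small search subspace
    thetas, X = projected_pencil(A, B, V, V)
    print(thetas[:3])                       # Ritz values of the reduced pencil
```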
There is a class of linear problems for which the computation of the matrix-vector product is very expensive, since a time-consuming method is necessary to approximate it to some prescribed relative precision. In this paper we investigate the impact of approximately computed matrix-vector products on the convergence and attainable accuracy of several Krylov subspace solvers. We will argue that the sensitivity towards perturbations is mainly determined by the underlying way the Krylov subspace is constructed and does not depend on the optimality properties of the particular method. The obtained insight is used to tune the precision of the matrix-vector product in every iteration step in such a way that an overall efficient process is obtained. Our analysis confirms the empirically found relaxation strategy of Bouras and Frayssé for the GMRES method proposed in [A Relaxation Strategy for Inexact Matrix-Vector Products for Krylov Methods].
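The relaxation idea can be stated in a few lines: the relative precision requested from the (expensive, approximate) matrix-vector product may be loosened as the outer residual norm decreases. The sketch below assumes a Bouras-Frayssé style schedule; the names `relaxed_tolerance` and `inexact_matvec` and the parameters `eps` and `cap` are illustrative, not taken from the paper.

```python
# Sketch of a residual-driven relaxation schedule for inexact matrix-vector
# products (illustrative; parameters and names are not from the paper).
import numpy as np

def relaxed_tolerance(outer_residual_norm, eps=1e-10, cap=1.0):
    """Relative precision requested from the matvec at this step: tight while
    the outer residual is large, increasingly loose as it shrinks."""
    return min(cap, eps / max(outer_residual_norm, 1e-300))

def inexact_matvec(A, v, rel_tol):
    """Stand-in for an expensive approximate product: the exact product plus
    a random perturbation of relative size rel_tol, mimicking a truncated
    inner computation."""
    y = A @ v
    e = np.random.default_rng().standard_normal(y.shape)
    return y + rel_tol * np.linalg.norm(y) * e / np.linalg.norm(e)

if __name__ == "__main__":
    for rnorm in (1.0, 1e-4, 1e-8):
        print(rnorm, relaxed_tolerance(rnorm))   # tolerance grows as the residual shrinks
```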