1986
DOI: 10.1137/0907054

Generalizations of Davidson’s Method for Computing Eigenvalues of Sparse Symmetric Matrices

Abstract: This paper analyzes Davidson's method for computing a few eigenpairs of large sparse symmetric matrices. An explanation is given for why Davidson's method often performs well but occasionally performs very badly. Davidson's method is then generalized to a method which offers a powerful way of applying preconditioning techniques developed for solving systems of linear equations to solving eigenvalue problems. AMS(MOS) subject classification: 65F15.
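
To make the preconditioning idea in the abstract concrete, the following is a minimal Python/NumPy sketch (not the authors' exact algorithm) of a Davidson-type iteration for the smallest eigenpair of a sparse symmetric matrix. It uses the classical Davidson preconditioner (diag(A) − θI)⁻¹; the generalization analyzed in the paper amounts to allowing any preconditioner developed for linear systems at that step. All function and variable names here are illustrative assumptions.

import numpy as np
import scipy.sparse as sp

def generalized_davidson(A, v0, tol=1e-8, max_iter=200):
    # Rayleigh-Ritz over a growing search basis V, expanded each step by a
    # preconditioned residual (the Davidson / generalized Davidson idea).
    n = A.shape[0]
    d = A.diagonal()
    V = (v0 / np.linalg.norm(v0)).reshape(n, 1)
    theta, u = None, None
    for _ in range(max_iter):
        H = V.T @ (A @ V)                 # projected (Rayleigh-Ritz) matrix
        w, S = np.linalg.eigh(H)
        theta, s = w[0], S[:, 0]          # smallest Ritz pair
        u = V @ s                         # Ritz vector in the full space
        r = A @ u - theta * u             # residual
        if np.linalg.norm(r) < tol:
            break
        # Davidson step: precondition the residual with (diag(A) - theta*I)^-1.
        # Any other linear-system preconditioner could be applied here instead.
        denom = d - theta
        denom[np.abs(denom) < 1e-12] = 1e-12   # guard tiny denominators
        t = r / denom
        t -= V @ (V.T @ t)                # orthogonalize against the basis
        t /= np.linalg.norm(t)
        V = np.hstack([V, t.reshape(n, 1)])
    return theta, u

# Illustrative use on an assumed diagonally dominant sparse test matrix.
rng = np.random.default_rng(0)
A = sp.diags(np.arange(1.0, 101.0)) + 0.01 * sp.random(100, 100, density=0.05, random_state=0)
A = ((A + A.T) / 2).tocsr()
theta, u = generalized_davidson(A, rng.standard_normal(100))
print(theta, np.linalg.norm(A @ u - theta * u))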

Cited by 146 publications (107 citation statements). References 6 publications.
“…This dynamic criterion comes from Newton methods, and its use in Jacobi-Davidson is suggested in [Fokkema et al., 1999] and tested in [Genseberger, 2010]. It is well known that in the first iterations of Davidson-type methods the extraction usually produces poor eigenpair approximations, and the target τ may be a closer approximation to an exact eigenvalue [Morgan and Scott, 1986]. Therefore, until the residual norm associated with the selected eigenpair reaches a threshold value, the so-called fix (that can be set by EPSJDSetFix), the correction equation is solved with θ = τ [Fokkema et al., 1999].…”
Citation type: mentioning
Confidence: 99%
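
As a rough sketch of the switching rule described in this statement: while the residual norm of the selected Ritz pair is still above the fix threshold, the correction equation keeps the fixed target τ; once it drops below, the current Ritz value θ is used. The names below are assumptions for illustration and are not the SLEPc API itself.

def correction_shift(res_norm, theta, tau, fix=0.01):
    # Early (poor-approximation) phase: keep the fixed target tau.
    # Once the residual norm is at or below fix, trust the Ritz value theta.
    return tau if res_norm > fix else theta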
“…The main characteristic of this class of methods is that they expand the subspace with a so-called correction vector t, which is computed from the residual vector r associated with the most wanted eigenpair, with the aim of improving it further. This new vector can be computed by simply preconditioning the residual, (6.13), as in the GD method [Davidson, 1975; Morgan and Scott, 1986], or by (approximately) solving the correction equation, (6.14), as in the JD method [Sleijpen and van der Vorst, 2000]. As in (6.13), a preconditioner K can also be introduced in (6.14).…”
Section: Subspace Expansion (mentioning)
Confidence: 99%
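
The displayed equations (6.13) and (6.14) are not reproduced in this excerpt; in the usual notation (u the current Ritz vector with Ritz value θ, r = Au − θu its residual, K a preconditioner), the standard GD and JD expansion formulas they presumably refer to are

    t = K^{-1} r                                                  (GD, cf. (6.13))
    (I - u u^T)(A - \theta I)(I - u u^T)\, t = -r,   t \perp u    (JD, cf. (6.14))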
“…For instance, Davidson [7] suggested K = diag(A) − θ diag(B) (see also [19, 8]). In view of our observations in §3 we expect to create better preconditioners by taking the projections into account.…”
Section: Projecting Preconditioners (mentioning)
Confidence: 99%
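
A minimal sketch of applying the diagonal preconditioner quoted above for a generalized problem A x = λ B x, i.e. K = diag(A) − θ diag(B). Variable names and the small-denominator guard are assumptions, not part of the cited text.

import numpy as np

def apply_diag_precond(A_diag, B_diag, theta, r, eps=1e-12):
    # K = diag(A) - theta*diag(B); return K^{-1} r.
    denom = A_diag - theta * B_diag
    denom = np.where(np.abs(denom) < eps, eps, denom)   # avoid division by ~0
    return r / denom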
“…REMARK 7.2. In the Davidson methods [5, 7, 18, 19] the search subspace is expanded by the vector K⁻¹r, the vector that appears in the first step of the computation of r′ (cf. Th.…”
Section: Projecting Preconditioners (mentioning)
Confidence: 99%