1996
DOI: 10.1007/bf01731936

Jacobi-Davidson type methods for generalized eigenproblems and polynomial eigenproblems

Abstract: In this paper we will show how the Jacobi-Davidson iterative method can be used to solve generalized eigenproblems. Ideas similar to those for the standard eigenproblem are used, but the projections that are required to reduce the given problem to a small manageable size need more attention. We show that by proper choices for the projection operators quadratic convergence can be achieved. The advantage of our approach is that none of the involved operators needs to be inverted. It turns out that similar p…

Cited by 236 publications (239 citation statements). References 25 publications.
“…In practice, this means that we need to solve the correction equation, i.e., the equation which updates the current approximate eigenvector, in a subspace that is orthogonal to the most current approximate eigenvectors. Several methods can be mentioned including the Trace Minimization method [12,11], the Davidson method [4,9] and the Jacobi-Davidson approach [14,13,16]. Most of these methods update an existing approximation by a step of Newton's method and this was illustrated in a number of papers, see, e.g., [8], and in [17].…”
Section: Introduction (mentioning; confidence: 99%)
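The projected correction equation described in this excerpt can be sketched for the standard symmetric eigenproblem. This is a hypothetical toy example, not the cited authors' code: the dense least-squares solve stands in for the preconditioned Krylov solvers used in practice, and the matrix and starting vector are random.

```python
# Sketch of a Jacobi-Davidson-style correction step for A x = lambda x.
# The correction equation is solved in the subspace orthogonal to the
# current approximate eigenvector u, as the quoted passage describes.
import numpy as np

rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                 # symmetric test matrix

u = rng.standard_normal(n)
u /= np.linalg.norm(u)            # current approximate eigenvector

for _ in range(10):
    theta = u @ A @ u             # Rayleigh quotient
    r = A @ u - theta * u         # residual (note r is orthogonal to u)
    # Project onto the orthogonal complement of span{u} and solve the
    # (singular) projected system; lstsq returns the minimum-norm
    # solution, which lies in that complement.
    P = np.eye(n) - np.outer(u, u)
    M = P @ (A - theta * np.eye(n)) @ P
    s, *_ = np.linalg.lstsq(M, -r, rcond=None)
    s = P @ s                     # enforce s ⊥ u explicitly
    u = u + s
    u /= np.linalg.norm(u)

residual = np.linalg.norm(A @ u - (u @ A @ u) * u)
print(residual)                   # tiny once the iteration has converged
```

With the exact solve used here the iteration behaves like Rayleigh quotient iteration; the point of the projection is that A − θI itself is never inverted on the whole space.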
“…After discretization, we end up with a cubic matrix eigenvalue problem, whose solution requires an efficient numerical algorithm. For this purpose a variant of the Jacobi-Davidson method [11], in which the convergence is accelerated significantly by combining it with a multilevel approach, has been elaborated [10]. The poloidal dependence of equilibrium plasma parameters is determined from the heat and pressure balance equations integrated over the radial width of the edge region where neutrals are mostly ionized. Conditions for the MARFE formation in the Tokamak Experiment for Technology Oriented Research (TEXTOR) [13] are analyzed and the importance to calculate the characteristics of drift instabilities and anomalous transport self-consistently with the poloidal structure of plasma parameters is elucidated.…”
(mentioning; confidence: 99%)
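For a small dense toy case, a cubic matrix eigenvalue problem like the one mentioned here can be reduced to a standard eigenproblem by a companion linearization. This sketch assumes the leading coefficient A3 is invertible and uses random matrices; the cited work instead applies a Jacobi-Davidson variant to the polynomial form directly.

```python
# Companion linearization of p(lam) x = 0 with
# p(lam) = lam^3 A3 + lam^2 A2 + lam A1 + A0.
# Eigenvalues of the block companion matrix L are the eigenvalues of p,
# and the first n components of an eigenvector of L give x.
import numpy as np

rng = np.random.default_rng(1)
n = 4
A0, A1, A2, A3 = (rng.standard_normal((n, n)) for _ in range(4))

I, Z = np.eye(n), np.zeros((n, n))
A3inv = np.linalg.inv(A3)          # assumes A3 is invertible
L = np.block([
    [Z, I, Z],
    [Z, Z, I],
    [-A3inv @ A0, -A3inv @ A1, -A3inv @ A2],
])
lams, V = np.linalg.eig(L)

# Check one eigenpair: the eigenvector of L has the structure
# [x; lam*x; lam^2*x], so its first n components satisfy p(lam) x ≈ 0.
lam, x = lams[0], V[:n, 0]
p = lam**3 * A3 + lam**2 * A2 + lam * A1 + A0
pep_residual = np.linalg.norm(p @ x)
print(pep_residual)
```

The price of linearization is a 3n-dimensional problem; this is what motivates methods that, like the one cited, work with the n-dimensional polynomial problem directly.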
“…VIII] or two-sided Jacobi-Davidson [11,12,9]. Equation (3.2) implies that if u is a first-order accurate approximation to the right eigenvector (that is, u = x + d, with d small), and v is a first-order accurate approximation to the left eigenvector (v = y + e, with e small), then θ(u, v) is a second-order accurate approximation to the eigenvalue (|θ − λ| = O(‖d‖ ‖e‖)).…”
(mentioning; confidence: 99%)
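The second-order accuracy of the two-sided Rayleigh quotient quoted above is easy to check numerically. A minimal NumPy sketch, assuming the transpose (not conjugate) pairing θ(u, v) = (vᵀA u)/(vᵀu), so that a left eigenvector satisfies yᵀA = λyᵀ, i.e. Aᵀy = λy:

```python
# Perturb exact right/left eigenvectors by O(eps) and observe that the
# two-sided Rayleigh quotient error |theta - lam| shrinks like eps^2.
import numpy as np

rng = np.random.default_rng(2)
n = 6
A = rng.standard_normal((n, n))          # nonsymmetric test matrix

lams, X = np.linalg.eig(A)
lams_l, Y = np.linalg.eig(A.T)           # columns of Y: left eigenvectors
lam, x = lams[0], X[:, 0]
y = Y[:, np.argmin(np.abs(lams_l - lam))]  # left eigenvector for lam

for eps in (1e-2, 1e-4):
    d = eps * rng.standard_normal(n)     # O(eps) perturbation of x
    e = eps * rng.standard_normal(n)     # O(eps) perturbation of y
    u, v = x + d, y + e
    theta = (v @ A @ u) / (v @ u)        # two-sided Rayleigh quotient
    print(eps, abs(theta - lam))         # error is O(eps**2)
```

Expanding vᵀAu with yᵀA = λyᵀ and Ax = λx shows θ − λ = (eᵀAd − λ eᵀd)/(vᵀu), which is the O(‖d‖ ‖e‖) bound in the quote.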
“…Subspace expansion for the polynomial eigenvalue problem. We take a new look at the Jacobi-Davidson subspace expansion for the polynomial eigenvalue problem (see [11] and [9] for previous work). Assume that we have an approximate eigenpair (θ, u), where the residual r = p(θ)u is orthogonal to a certain test vector v. We are interested in an update s ⊥ u such that p(λ)(u + s) = 0, that is, u + s is a multiple of the true eigenvector.…”
(mentioning; confidence: 99%)
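The update s ⊥ u described above can be illustrated with a bordered Newton step for a hypothetical quadratic polynomial p(t) = t^2 A2 + t A1 + A0. This is a generic Newton sketch under random test data, not the specific Jacobi-Davidson expansion of [11] or [9]; the bordering row uᵀ is what enforces the orthogonality constraint on s.

```python
# Refine an approximate eigenpair (theta, u) of p(lam) x = 0 by solving
# [[p(theta), p'(theta) u], [u^T, 0]] [s; dt] = [-p(theta) u; 0];
# the second block row forces s ⊥ u, as in the quoted passage.
import numpy as np

rng = np.random.default_rng(3)
n = 5
A0, A1, A2 = (rng.standard_normal((n, n)) for _ in range(3))

def p(t):                       # quadratic matrix polynomial
    return t**2 * A2 + t * A1 + A0

def dp(t):                      # its derivative p'(t)
    return 2 * t * A2 + A1

# Reference eigenpair from a 2x2-block companion linearization,
# then perturbed to create an approximate pair to refine.
I, Z = np.eye(n), np.zeros((n, n))
A2inv = np.linalg.inv(A2)       # assumes A2 is invertible
L = np.block([[Z, I], [-A2inv @ A0, -A2inv @ A1]])
lams, V = np.linalg.eig(L)
lam, x = lams[0], V[:n, 0]

u = x + 1e-3 * rng.standard_normal(n)
theta = lam + 1e-3
for _ in range(6):
    r = p(theta) @ u
    J = np.block([[p(theta), (dp(theta) @ u)[:, None]],
                  [u[None, :], np.zeros((1, 1))]])
    sol = np.linalg.solve(J, np.concatenate([-r, [0.0]]))
    u = u + sol[:n]             # correction s, orthogonal to previous u
    theta = theta + sol[n]      # eigenvalue correction

newton_residual = np.linalg.norm(p(theta) @ u)
print(newton_residual)          # tiny after the Newton refinement
```

Locally this bordered Newton iteration converges quadratically; the Jacobi-Davidson expansion replaces the exact solve by an inexact, projected one and uses the test vector v in the projection.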