2002
DOI: 10.1002/nla.288

Efficient approximation of the exponential operator for discrete 2D advection–diffusion problems

Abstract: In this paper we compare Krylov subspace methods with Faber series expansion for approximating the matrix exponential operator on large, sparse, non-symmetric matrices. We consider in particular the case of Chebyshev series, corresponding to an initial estimate of the spectrum of the matrix by a suitable ellipse. Experimental results upon matrices of large size, arising from space discretization of 2D advection-diffusion problems, demonstrate that the Chebyshev method can be an effective alternative to Krylov…


Cited by 31 publications (55 citation statements) · References 26 publications
“…Therefore, the assumption ρ(B) < 1 that is required for the Taylor series to converge is not overly restrictive. (Applications where Krylov and Chebyshev iterations are effective typically have Re(λ(B)) very large and negative [2,12].) Second, the eigenvalue symmetry means that a disk centered on 0, enclosing λ(B), will tend to be a much better approximation of λ(B) than in the general, non-Hamiltonian, case.…”
Section: Other Inexact Newton Iterations (see [10,13,14])
confidence: 99%
“…In the inexact Newton method an iterative method (e.g., Krylov or Chebyshev) is applied [2,4,5]; in the Newton-chord method [18] the Jacobian itself is approximated to simplify the solution step. In the Jacobian-free Newton-Krylov method [12] the Jacobian-vector multiplications are approximated by finite differences, so the Jacobian itself is never formed.…”
Section: Introduction
confidence: 99%
“…In fact the approach can be extended to many other situations where a vector of the form f(A)v is to be computed. The problem of approximating f(A)v has been extensively studied, see, e.g., [28,23,4,16,15], though attention was primarily focused on the case when f is analytic (e.g., f(t) = exp(t)). Problems which involve non-continuous functions, such as the step function or the sign function, can also be important.…”
Section: Computing f(A)v
confidence: 99%
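The Krylov approach to f(A)v mentioned above projects A onto a small Krylov subspace and applies f to the projected matrix. A minimal Arnoldi-based sketch for f = exp is given below; it is an illustration of the general technique, not the cited authors' implementation, and the subspace dimension m is a tuning parameter chosen here for illustration.

```python
import numpy as np
from scipy.linalg import expm

def expm_krylov(A, v, m=20):
    """Approximate exp(A) @ v from an m-dimensional Krylov subspace.

    Minimal Arnoldi sketch: exp(A) v ~= beta * V_m @ expm(H_m) @ e1,
    where A V_m = V_{m+1} H_m is the Arnoldi decomposition and
    beta = ||v||.
    """
    n = v.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # "happy breakdown": result is exact
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m)
    e1[0] = 1.0
    # only the small m-by-m exponential is computed explicitly
    return beta * V[:, :m] @ (expm(H[:m, :m]) @ e1)
```

Only matrix-vector products with A are needed, which is what makes the method attractive for large sparse matrices; the price is storing the m basis vectors of the subspace.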
“…These methods seem to work better when a good preconditioner for the system matrix is not available for the implicit methods, i.e., when it is desirable not to store the system matrix due to memory requirements and only the matrix-vector product with this matrix is available. Moreover, other studies suggest that polynomial interpolation of the matrix exponential, such as the Real Leja Points Method [68], converges as fast as Krylov methods without the memory requirement of storing all the vectors defining the Krylov subspace. A Chebyshev approximation for the exponential of a matrix [69,70] is included for completeness, and higher-order exponential integrators based on the Magnus expansion with a commutator-free formulation [71,72,73] are also tested against a classical second-order exponential integrator.…”
Section: Numerical Results
confidence: 99%
“…A well-known method to approximate e^{hA}v is based on the Chebyshev polynomial expansion (see for instance [68,69]).…”
Section: Chebyshev Polynomials
confidence: 99%
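The Chebyshev expansion mentioned above can be sketched for the simplest case, a matrix with real spectrum in an interval [a, b] (the paper's ellipse estimate degenerated to a segment): map the spectrum to [-1, 1] and use the classical identity e^{dx} = I_0(d) + 2 Σ_{k≥1} I_k(d) T_k(x), with I_k the modified Bessel functions. The interval endpoints and degree m below are hypothetical user-supplied estimates, not values from the paper.

```python
import numpy as np
from scipy.special import iv  # modified Bessel functions I_k

def expm_chebyshev(A, v, a, b, m=30):
    """Approximate exp(A) @ v via a degree-m Chebyshev expansion.

    Assumes the spectrum of A is real and contained in [a, b]
    (a user-supplied estimate).  Writes A = c*I + d*B with B having
    spectrum in [-1, 1], then exp(A) v = e^c * sum_k eps_k I_k(d) T_k(B) v,
    with eps_0 = 1 and eps_k = 2 for k >= 1.
    """
    c, d = (a + b) / 2.0, (b - a) / 2.0
    Bv = lambda x: (A @ x - c * x) / d          # action of the scaled matrix B
    t_prev, t_curr = v, Bv(v)                   # T_0(B) v and T_1(B) v
    y = iv(0, d) * t_prev + 2.0 * iv(1, d) * t_curr
    for k in range(2, m + 1):
        # three-term Chebyshev recurrence: T_{k+1} = 2 B T_k - T_{k-1}
        t_prev, t_curr = t_curr, 2.0 * Bv(t_curr) - t_prev
        y += 2.0 * iv(k, d) * t_curr
    return np.exp(c) * y
```

Like the Krylov approach, only matrix-vector products are required, but here just three vectors are kept at any time, which is the memory advantage the quoted statement alludes to; the trade-off is that a reasonable a priori estimate of the spectrum is needed.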