2015
DOI: 10.1080/03081087.2015.1024243

Computation of a function of a matrix with close eigenvalues by means of the Newton interpolating polynomial

Abstract: An algorithm for computing an analytic function of a matrix A is described. The algorithm is intended for the case where A has some close eigenvalues, and clusters (subsets) of close eigenvalues are separated from each other. This algorithm is a modification of some well-known and widely used algorithms. A novel feature is an approximate calculation of divided differences for the Newton interpolating polynomial in a special way. This modification does not require one to reorder the Schur triangular form or to sol…
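
As a rough illustration of the setting, here is a minimal sketch of the classical Newton-form evaluation of f(A) on the eigenvalues of A, not the paper's modified divided-difference computation; the test matrix, the choice f = exp, and all function names are assumptions made for this example.

```python
import numpy as np
from scipy.linalg import expm

def divided_differences(f, nodes):
    # Standard divided-difference recurrence; when two nodes are close,
    # the subtraction/division below suffers severe cancellation, which is
    # the difficulty the paper's special computation is designed to avoid.
    n = len(nodes)
    dd = np.array([f(x) for x in nodes], dtype=complex)
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            dd[i] = (dd[i] - dd[i - 1]) / (nodes[i] - nodes[i - k])
    return dd  # dd[k] = f[x_0, ..., x_k]

def newton_matrix_function(f, A):
    # p(A), where p is the Newton interpolating polynomial of f on the
    # eigenvalues of A; for distinct eigenvalues p(A) = f(A) in exact arithmetic.
    nodes = np.linalg.eigvals(A)
    dd = divided_differences(f, nodes)
    n = A.shape[0]
    F = dd[0] * np.eye(n, dtype=complex)
    P = np.eye(n, dtype=complex)
    for k in range(1, n):
        P = P @ (A - nodes[k - 1] * np.eye(n))
        F = F + dd[k] * P
    return F

A = np.array([[2.0, 1.0],
              [0.0, 2.0 + 1e-8]])   # two close eigenvalues
print(np.linalg.norm(newton_matrix_function(np.exp, A) - expm(A)))
```

The inner recurrence in divided_differences is exactly where cancellation occurs when two nodes nearly coincide, which is the situation the abstract refers to.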

Cited by 5 publications (3 citation statements) · References 11 publications
“…If there are close eigenvalues, large rounding errors can occur, see the discussion of this phenomenon in [25]. In such a case, the method of calculating divided differences proposed in [20] can be applied.…”
Section: The Algorithm (mentioning)
confidence: 99%
“…Finally, the function f of diagonal blocks (having small spectrum) can be calculated using the Taylor (or interpolating) polynomial; then the remaining blocks of F can be calculated using the block version of (1); note that the absence of t jj − t ii with close t jj and t ii in the neighboring blocks requires that the clusters be separated from each other. A modification of this algorithm that does not use a reordering was discussed in [36].…”
Section: Introduction (mentioning)
confidence: 99%
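
Below is a minimal sketch of the block recurrence mentioned in the statement above (Parlett's relation, the "block version of (1)", for a 2x2 block partition of the triangular Schur factor). The example matrix, the split point m, and the use of scipy's expm on the small diagonal blocks are assumptions made for illustration, not the cited paper's algorithm.

```python
import numpy as np
from scipy.linalg import schur, solve_sylvester, expm

def f_via_block_parlett(A, m, f_small=expm):
    # Compute f(A) from the complex Schur form T = Q^* A Q with one block
    # split after index m; f_small evaluates f on the small diagonal blocks.
    T, Q = schur(A, output="complex")
    T11, T12, T22 = T[:m, :m], T[:m, m:], T[m:, m:]
    F11, F22 = f_small(T11), f_small(T22)
    # Block Parlett relation:  T11 F12 - F12 T22 = F11 T12 - T12 F22,
    # solved as a Sylvester equation for the off-diagonal block F12.
    F12 = solve_sylvester(T11, -T22, F11 @ T12 - T12 @ F22)
    F = np.zeros_like(T)
    F[:m, :m], F[:m, m:], F[m:, m:] = F11, F12, F22
    return Q @ F @ Q.conj().T          # transform back from the Schur basis

A = np.array([[1.0, 2.0, 0.5],
              [0.1, 1.0, 3.0],
              [0.0, 0.0, 5.0]])
print(np.linalg.norm(f_via_block_parlett(A, 2, expm) - expm(A)))
```

The Sylvester equation for F12 is well conditioned only when the spectra of T11 and T22 are well separated, which is why the quoted statement requires the eigenvalue clusters to be separated from each other.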
“…This matrix representation is diagonal, but it can be 'bad' in the sense that the corresponding projectors have large norms; in such a case it may be convenient to replace one of the subspaces by the orthogonal (or close to orthogonal) complement of the other; as a result one will arrive at a triangular matrix representation of A. Similarly, the spectrum of A may be divided into clusters; so, it is again natural to use a diagonal or triangular matrix representation; the phenomenon of clusterization is discussed, e.g., in [21, lecture 12], [11,37]. Representation by triangular operator matrices is also natural for causal operators; in their turn, causal operators are widely used in control theory [13,16,53] and functional differential equations [34,35,36].…”
Section: Introduction (mentioning)
confidence: 99%
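
A small numerical illustration of the point about spectral projectors with large norms; the matrix and the value of eps are arbitrary choices for this sketch.

```python
import numpy as np
from scipy.linalg import schur

eps = 1e-8
A = np.array([[1.0, 1.0],
              [0.0, 1.0 + eps]])       # non-normal, two close eigenvalues

# Diagonal representation A = V diag(w) V^{-1}: the projector onto the i-th
# eigenspace is P_i = v_i (row i of V^{-1}), and its norm grows like 1/eps here.
w, V = np.linalg.eig(A)
Vinv = np.linalg.inv(V)
for i in range(2):
    P = np.outer(V[:, i], Vinv[i, :])
    print("projector norm:", np.linalg.norm(P, 2))

# Triangular (Schur) representation A = Q T Q^*: the change of basis is unitary,
# so no large norms appear in the representation itself.
T, Q = schur(A, output="complex")
print("cond(Q) =", np.linalg.cond(Q))
```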