2018
DOI: 10.1137/17m1129866
Multiprecision Algorithms for Computing the Matrix Logarithm

Abstract. Two algorithms are developed for computing the matrix logarithm in floating point arithmetic of any specified precision. The backward error-based approach used in the state-of-the-art inverse scaling and squaring algorithms does not extend conveniently to a multiprecision environment, so instead we choose algorithmic parameters based on a forward error bound. We derive a new forward error bound for Padé approximants that for highly nonnormal matrices can be much smaller than the classical bound of Ke…
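The inverse scaling and squaring approach named in the abstract can be illustrated with a minimal sketch: repeatedly take square roots of A until it is close to the identity, approximate log(I + X) there, then undo the scaling. The sketch below substitutes a truncated Taylor series for the paper's Padé approximants, and `logm_iss`, `tol`, and `m` are illustrative names and parameters, not the paper's.

```python
import numpy as np
from scipy.linalg import sqrtm

def logm_iss(A, tol=0.25, m=16, s_max=60):
    """Sketch of inverse scaling and squaring for the matrix logarithm."""
    n = A.shape[0]
    I = np.eye(n)
    s = 0
    # inverse scaling: square-root until A is close to the identity
    while np.linalg.norm(A - I, 1) > tol and s < s_max:
        A = sqrtm(A)              # A <- A^(1/2), which halves log(A)
        s += 1
    X = A - I
    # truncated Taylor series: log(I + X) ~ sum_{k=1}^m (-1)^(k+1) X^k / k
    L = np.zeros_like(X)
    P = I
    for k in range(1, m + 1):
        P = P @ X
        L += ((-1) ** (k + 1) / k) * P
    return (2.0 ** s) * L         # undo scaling: log(A) = 2^s log(A^(1/2^s))

# symmetric positive definite example with a known logarithm:
# eigenvalues 1 and 3, so log(A) = (log 3)/2 * ones(2, 2)
A = np.array([[2.0, 1.0], [1.0, 2.0]])
L = logm_iss(A)
```

In a production algorithm the number of square roots and the approximant degree are chosen jointly from an error bound, which is exactly the parameter-selection question the paper addresses.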

Cited by 26 publications (23 citation statements); references 32 publications.
“…Finally, though we derive the values of θ_m in (3.5) for half, single, and double precisions, θ_m can be evaluated for any precision. Algorithm 5.1 can be extended to a multiprecision algorithm as in [6], since the function ρ_m (3.2) has an explicit expression that is easy to handle with optimization software.…”
Section: Numerical Experiments
confidence: 99%
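The quoted point, that the error expression is explicit enough for θ_m to be recomputed at any unit roundoff u, can be sketched with simple bisection. This sketch uses the truncated-Taylor error for log(1 + x) in place of the cited paper's ρ_m, and `trunc_err`/`theta_m` are illustrative names, not the paper's functions.

```python
def trunc_err(theta, m, tail=300):
    # bound on the truncation error of the degree-m Taylor approximation
    # of log(1 + x) for |x| <= theta: the tail sum_{k > m} theta^k / k
    return sum(theta ** k / k for k in range(m + 1, m + 1 + tail))

def theta_m(m, u=2.0 ** -53):
    # largest theta with truncation error below the unit roundoff u,
    # located by bisection on the monotone function trunc_err(., m)
    lo, hi = 0.0, 0.99
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if trunc_err(mid, m) <= u:
            lo = mid
        else:
            hi = mid
    return lo

th8 = theta_m(8)                     # double precision, u = 2^-53
th8_single = theta_m(8, 2.0 ** -24)  # single precision, u = 2^-24
```

Lowering the precision (larger u) enlarges θ_m, so fewer square roots or a lower approximant degree suffice, which is why precision-dependent tuning pays off.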
“…Later, most of the developed algorithms used the inverse scaling and squaring method with Padé approximants applied to dense or triangular matrices; see [29][30][31][32][33][34][35]. Nevertheless, some new algorithms are based on other methods, among which the following can be highlighted:…”
Section: Introduction and Notation
confidence: 99%
“…This occurs, for instance, in discretized applications, possibly in a multiresolution framework, or in inexactly weighted linear and nonlinear least-squares, where the product itself is obtained by applying an iterative procedure. The second is the increasing importance of computations in multiprecision arithmetic on the new generations of high-performance computers (see References 2-8 and the many references therein), in which the use of varying levels of floating point precision is a key ingredient for obtaining state-of-the-art energy-efficient computer architectures. In both cases, using inexact matrix-vector products (while controlling their inexactness) within the method of conjugate gradients (CG) of Hestenes and Stiefel 9 is a natural option.…”
Section: Introduction
confidence: 99%
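The idea in this excerpt, CG driven by an inexact matrix-vector product, can be sketched by writing CG against a matvec callback and simulating the inexactness by rounding each product through single precision. `inexact_cg` and the rounding scheme are illustrative assumptions here, not the cited paper's controlled-inexactness strategy.

```python
import numpy as np

def inexact_cg(matvec, b, tol=1e-4, maxit=500):
    # Hestenes-Stiefel conjugate gradients written against a callback,
    # so each product A @ p may be computed inexactly
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(maxit):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * np.linalg.norm(b):
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# simulate inexact products by rounding each one through float32
rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)       # symmetric positive definite
b = rng.standard_normal(n)
x = inexact_cg(lambda v: (A @ v).astype(np.float32).astype(np.float64), b)
```

The achievable residual is limited by the accuracy of the products, which is why the excerpt stresses controlling the inexactness rather than eliminating it.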