2009
DOI: 10.1016/j.cpc.2008.11.005

Accelerating scientific computations with mixed precision algorithms

Abstract: On modern architectures, the performance of 32-bit operations is often at least twice as fast as the performance of 64-bit operations. By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution. The approach presented here can apply not only to conventional processors but also to other technologies such as Field Programmable Gate Arrays (FP…


Cited by 173 publications (120 citation statements)
References 46 publications
“…The underlying idea of mixed precision error correction methods is to use different precision formats within the algorithm of the error correction method: the solution approximation is updated in high precision, but the error correction term is computed in lower precision, as has been suggested before [15,14,6,11]. Hence, one regards the inner correction solver as a black box that computes a solution update in lower precision.…”
Section: Algorithm 1: Error Correction Methods
confidence: 99%
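As a rough illustration of this black-box structure, here is a minimal NumPy sketch (not code from the cited works; the function names and tolerances are invented here). The outer loop keeps the approximation and the residual in float64, while the inner solver it is handed works entirely in float32:

```python
import numpy as np

def error_correction(A, b, inner_solve, tol=1e-12, max_iter=20):
    """High-precision outer loop around a black-box low-precision inner solver."""
    x = inner_solve(b).astype(np.float64)        # initial approximation
    for _ in range(max_iter):
        r = b - A @ x                            # residual in float64
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        x += inner_solve(r).astype(np.float64)   # correction from the black box
    return x

# Example black box: a direct solve carried out in single precision.
A = np.diag(np.arange(1.0, 6.0)) + 0.01 * np.ones((5, 5))
b = np.ones(5)
A32 = A.astype(np.float32)
x = error_correction(A, b, lambda v: np.linalg.solve(A32, v.astype(np.float32)))
```

Because the inner solver is only asked for an update, it can be swapped for any low-precision method (direct or iterative) without changing the outer loop.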
“…In this way, all operations with the highest computational complexity are performed in the lower precision. In [8], Baboulin et al. also demonstrate experimentally that no more than 5 iterations are needed when the condition number of the coefficient matrix A is smaller than 10^6. The limitation of this algorithm is that the condition number of the coefficient matrix should not exceed the reciprocal of the single precision accuracy; otherwise the double precision algorithm should be used.…”
Section: Related Work
confidence: 99%
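The limitation quoted above suggests a simple feasibility check before choosing the mixed precision path. A minimal sketch, assuming the threshold is the reciprocal of the single precision unit roundoff (the variable names are invented here):

```python
import numpy as np

# Assumed threshold: reciprocal of the single precision unit roundoff,
# per the quoted limitation on the condition number of A.
eps_single = np.finfo(np.float32).eps      # about 1.19e-7
kappa_max = 1.0 / eps_single               # about 8.4e6

A = np.diag(np.linspace(1.0, 100.0, 50))   # cond(A) = 100, well within range
use_mixed_precision = np.linalg.cond(A) < kappa_max
```

Note that 1/eps_single is roughly 8.4e6, consistent with the 10^6 condition-number regime in which the cited experiments observed convergence within 5 iterations.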
“…It can be applied easily to various problems in linear algebra. Algorithm 1 is the mixed precision algorithm for solving a linear system of equations presented in [7,8]. From this algorithm we can see that the most computationally expensive operation, the factorization of the coefficient matrix A, is performed using single precision arithmetic to take advantage of its higher performance.…”
Section: Related Work
confidence: 99%
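The point of the quoted scheme is that the O(n^3) factorization happens once in single precision, while each refinement step costs only O(n^2). A minimal sketch of that structure using SciPy's LU routines (an assumption for illustration; not the reference implementation from [7,8]):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def mixed_precision_solve(A, b, tol=1e-12, max_iter=30):
    """Factor once in float32 (the O(n^3) cost), then refine in float64."""
    lu32 = lu_factor(A.astype(np.float32))           # single precision LU
    x = lu_solve(lu32, b.astype(np.float32)).astype(np.float64)
    for _ in range(max_iter):
        r = b - A @ x                                # double precision residual
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = lu_solve(lu32, r.astype(np.float32))     # O(n^2) triangular solves
        x += z.astype(np.float64)                    # update in double precision
    return x
```

Reusing the single precision factors for every correction step is what lets the method approach single precision speed while delivering a double precision result.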
“…As noted in [1], any refinement process is a candidate to benefit from mixed precision computations, since often only the refinement itself needs to be in double precision arithmetic. Rewriting the defect correction scheme from Algorithm 1 into a single expression for iterative refinement of x at iteration k + 1 gives…”
Section: Improving Defect Correction With Mixed Precision
confidence: 99%
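The quoted expression is truncated. The standard single-expression form of such an iterative refinement (a reconstruction of the general pattern, not a quote from the citing work) is:

```latex
x^{(k+1)} = x^{(k)} + \hat{A}^{-1}\left(b - A\,x^{(k)}\right),
```

where \hat{A}^{-1} denotes application of the low-precision approximate solve (e.g. the single precision factorization), and the residual b - A x^{(k)} is evaluated in double precision.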