2022 IEEE 29th Symposium on Computer Arithmetic (ARITH)
DOI: 10.1109/arith54963.2022.00017
Accelerating Variants of the Conjugate Gradient with the Variable Precision Processor

Cited by 7 publications (4 citation statements)
References 23 publications
“…11: Normalized iteration count and execution time for kernel CG, PCG and BiCG, using random matrices and matrices from SuiteSparse collection of matrices generated according to Ransvd method, using "cliff" profile. Diagonal preconditioning does not work with random matrices, therefore the evaluation runs the original CG algorithm (as given in [39]). The first three curves show execution of three different random matrices of 503, 907 and 1511 diagonal sizes with vectors of different precisions, corresponding to different bit-sizes in memory.…”
Section: Conjugate Gradient (CG) Solver on Pseudo-random Matrices
Mentioning confidence: 99%
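For context on the excerpt above: the cited evaluation runs the plain (unpreconditioned) CG algorithm on random matrices. The snippet below is a minimal NumPy sketch of textbook CG, intended only as a reference point; it is not the paper's variable-precision (VRP) implementation, and the test matrix and tolerance are made up for illustration.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Textbook (unpreconditioned) CG for a symmetric positive definite A.

    Returns the approximate solution and the number of iterations used.
    """
    x = np.zeros_like(b)
    r = b - A @ x            # initial residual
    p = r.copy()             # initial search direction
    rs_old = r @ r
    for k in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            return x, k + 1
        p = r + (rs_new / rs_old) * p  # next conjugate direction
        rs_old = rs_new
    return x, max_iter

# Small self-check on a random SPD system (illustrative only).
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)   # SPD by construction
b = rng.standard_normal(50)
x, iters = conjugate_gradient(A, b)
print(iters, np.linalg.norm(A @ x - b))
```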
“…the Jacobi preconditioning, which only involves the diagonal of the system matrix A. The method is exact in theory, but roundoff errors slow down or even prevent convergence. (A more detailed description of the implementation of these algorithms on VRP can be found in [39].)…”
Section: Solvers on Benchmark Matrices
Mentioning confidence: 99%
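The excerpt notes that Jacobi preconditioning only involves the diagonal of the system matrix A. A minimal sketch of diagonally preconditioned CG (a generic textbook formulation, not the VRP implementation described in [39]) shows why: applying the preconditioner reduces to an elementwise division by diag(A).

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-10, max_iter=1000):
    """CG with Jacobi (diagonal) preconditioning: M = diag(A).

    Applying M^{-1} is just an elementwise division by the diagonal,
    which is why the preconditioner only needs diag(A).
    """
    d = np.diag(A)              # the only part of A the preconditioner uses
    x = np.zeros_like(b)
    r = b - A @ x
    z = r / d                   # z = M^{-1} r
    p = z.copy()
    rz_old = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, k + 1
        z = r / d
        rz_new = r @ z
        p = z + (rz_new / rz_old) * p
        rz_old = rz_new
    return x, max_iter
```

Compared with the unpreconditioned sketch above, the only additional per-iteration cost is the elementwise division r / d.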
“…Such arbitrary precision is used where the speed of calculation is not a concern and more precise results are required. Hardware support for multiple-precision arithmetic is not yet widespread, but it can be found in both off-the-shelf systems [12] and ad-hoc devices [13,14].…”
Section: Accuracy and Precision
Mentioning confidence: 99%
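As a software-only counterpart to the hardware support mentioned in this excerpt, Python's standard decimal module illustrates what user-selectable working precision looks like; this is purely illustrative and unrelated to the specific systems cited as [12]-[14].

```python
from decimal import Decimal, getcontext

# Working precision is a runtime parameter rather than a fixed format.
getcontext().prec = 50          # 50 significant decimal digits

print(Decimal(1) / Decimal(3))  # 1/3 to 50 digits
print(Decimal(2).sqrt())        # sqrt(2) to 50 digits

# The same computation at a much lower precision, for comparison.
getcontext().prec = 8
print(Decimal(1) / Decimal(3))
```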
“…In addition, we have also studied the accuracy of iterative linear solvers (conjugate gradient and biconjugate gradient) as well-known examples of widely used applications from science and engineering that can benefit from higher accuracy [6]. Results were measured on Big-PERCIVAL running on the Genesys II FPGA board.…”
Section: Introduction
Mentioning confidence: 99%
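The accuracy study mentioned here was measured on Big-PERCIVAL hardware; purely as a software illustration of the underlying idea (the attainable accuracy of an iterative solver depends on the working precision), the sketch below runs the same fixed-iteration CG solve in float32 and float64 and compares the final relative residuals. The matrix, its size, and the iteration count are arbitrary choices for the example, not the authors' setup.

```python
import numpy as np

def cg_residual(A, b, dtype, iters=200):
    """Run fixed-iteration CG entirely in the given dtype and
    report the final relative residual ||b - A x|| / ||b||."""
    A = A.astype(dtype)
    b = b.astype(dtype)
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new == 0.0:       # converged to machine zero
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return float(np.linalg.norm(b - A @ x) / np.linalg.norm(b))

rng = np.random.default_rng(1)
M = rng.standard_normal((200, 200))
A = M @ M.T + 200 * np.eye(200)      # well-conditioned SPD matrix
b = rng.standard_normal(200)

print("float32:", cg_residual(A, b, np.float32))  # typically stalls near single-precision roundoff
print("float64:", cg_residual(A, b, np.float64))  # continues down toward double-precision roundoff
```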