2022
DOI: 10.1016/j.parco.2021.102870
Linear solvers for power grid optimization problems: A review of GPU-accelerated linear solvers

Cited by 30 publications (13 citation statements). References 37 publications.
“…LDLᵀ factorization via MA57 [14] has been used effectively for extremely sparse problems on traditional CPU-based platforms, but is not suitable for fine grain parallelization required for GPU acceleration. Parallel and GPU accelerated direct solve implementations such as SuperLU [1,25], STRUMPACK [39,45], and PaStiX [20,32] exist for general symmetric indefinite systems (although the first two are designed for general systems), but these software packages are designed to take advantage of dense blocks of the matrices in denser problems and do not perform well on our systems of interest, which do not yield these dense blocks [19,46].…”
Section: Solving KKT Linear Systems
confidence: 99%
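The excerpt above names SuperLU among the parallel direct solvers designed for general (not symmetric-specific) systems. SuperLU is exposed in SciPy as scipy.sparse.linalg.splu; the following is a minimal CPU-side sketch, with a small hypothetical KKT-style saddle-point matrix (not taken from the paper), showing the kind of sparse symmetric indefinite system these solvers are applied to:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Hypothetical KKT-style saddle-point system:
# [ H  A^T ] [x]   [b1]
# [ A  0   ] [y] = [b2]
H = sp.csc_matrix([[4.0, 1.0], [1.0, 3.0]])       # Hessian block (SPD)
A = sp.csc_matrix([[1.0, 1.0]])                   # constraint Jacobian
K = sp.bmat([[H, A.T], [A, None]], format="csc")  # symmetric indefinite

rhs = np.array([1.0, 2.0, 1.0])
lu = splu(K)          # SuperLU: general sparse LU, not a symmetric LDL^T
sol = lu.solve(rhs)
assert np.allclose(K @ sol, rhs)
```

Note that splu performs a general LU factorization, so it does not exploit the symmetry that an LDLᵀ code such as MA57 would.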
“…literature showing that multifrontal or supernodal approaches are not suitable for very sparse and irregular systems, where the dense blocks become too small, leading to an unfavorable ratio of communication versus computation [7,12,19]. This issue is exacerbated when supernodal or multifrontal approaches are used for fine-grain parallelization on GPUs [46]. Our method becomes better when the ratio of LDLᵀ to Cholesky factorization time grows, because factorization is the most costly part of linear solvers and our method has more (but smaller and less costly) system solves.…”
Section: Comparison with LDLᵀ
confidence: 99%
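The excerpt above weighs LDLᵀ factorization time against Cholesky factorization time. As a rough dense, CPU-only illustration of the two factorizations being compared (not the cited method), SciPy provides scipy.linalg.cholesky for SPD matrices and scipy.linalg.ldl (LAPACK's Bunch-Kaufman routine) for symmetric indefinite ones:

```python
import numpy as np
from scipy.linalg import cholesky, ldl

rng = np.random.default_rng(0)
n = 50
B = rng.standard_normal((n, n))
spd = B @ B.T + n * np.eye(n)     # symmetric positive definite

# Cholesky: spd = L @ L.T (valid only for SPD matrices)
L = cholesky(spd, lower=True)
assert np.allclose(L @ L.T, spd)

# Bordering the SPD block yields a symmetric *indefinite* saddle-point
# matrix, which Cholesky cannot factor but a pivoted LDL^T can.
c = B[:, :1]
indef = np.block([[spd, c], [c.T, np.zeros((1, 1))]])
lu, d, perm = ldl(indef, lower=True)   # pivoted LDL^T via LAPACK sytrf
assert np.allclose(lu @ d @ lu.T, indef)
```

The indefinite factorization must pivot for stability, which is part of why it parallelizes less readily than Cholesky on GPUs.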
“…Unfortunately, factorization of unstructured sparse indefinite matrix is one of these edge cases: unlike dense matrices, sparse matrices have unstructured sparsity, rendering most sparse algorithms difficult to parallelize. Thus, implementing a sparse direct solver on the GPU is nontrivial, and the performance of current GPU-based sparse linear solvers lags far behind that of their CPU equivalents [36,37]. Previous attempts to solve nonlinear problems on the GPU have circumvented this problem by relying on iterative solvers [7,33] or on decomposition methods [23].…”
Section: Introduction
confidence: 99%
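The excerpt above notes that GPU work has circumvented sparse factorization by relying on iterative solvers. One standard factorization-free choice for symmetric indefinite systems is MINRES, which needs only matrix-vector products (the operation that maps well to GPUs); a minimal CPU-side sketch using SciPy on a small hypothetical saddle-point matrix:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import minres

# Factorization-free Krylov solve of a symmetric indefinite system.
# MINRES touches the matrix only through mat-vec products, so no
# sparse triangular factors are ever formed.
K = sp.csc_matrix([[4.0, 1.0, 1.0],
                   [1.0, 3.0, 1.0],
                   [1.0, 1.0, 0.0]])
rhs = np.array([1.0, 2.0, 1.0])

x, info = minres(K, rhs)
assert info == 0                               # converged
assert np.linalg.norm(K @ x - rhs) < 1e-3      # small residual
```

The trade-off, as the surrounding literature discusses, is that Krylov methods for indefinite systems typically need good preconditioning to converge on ill-conditioned optimization problems.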