1988
DOI: 10.21236/ada206388

Preconditioned Krylov Solvers and Methods for Runtime Loop Parallelization

Abstract: A detailed examination was made of the performance achieved by a Krylov space sparse linear system solver that uses incompletely factored matrices as preconditioners. We compared two related mechanisms for parallelizing the computationally critical sparse triangular solves and sparse numeric incomplete factorizations on a range of test problems. From these comparisons we drew several interesting conclusions about methods that can be used to parallelize loops of the type found here. The performance w…

Cited by 4 publications (5 citation statements) · References 7 publications
“…In order to satisfy data dependences, processors busy-wait for rows on which they are dependent. The Wavefront algorithm, studied by Baxter et al. [1988], Sadayappan and Visvanathan [1988], and Liu [1986], avoids this excess busy-waiting by sorting the rows that can be executed in parallel into groups (wavefronts). The processors are assigned rows from the wavefronts in a round-robin fashion.…”
Section: Parallel Sparse Solvers (mentioning, confidence: 99%)
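The wavefront grouping described above can be sketched as a level-scheduling pass over a sparse lower-triangular system: each row's level is the length of its longest dependence chain, and rows sharing a level are mutually independent. This is a minimal illustrative sketch, not code from the cited papers; the list-of-dependencies input format and function names are assumptions.

```python
def compute_levels(rows):
    """rows[i] lists the column indices j < i with a nonzero L[i][j].
    Returns level[i] = length of the longest dependence chain ending at
    row i; rows with equal level are independent of one another."""
    level = [0] * len(rows)
    for i in range(len(rows)):
        for j in rows[i]:
            level[i] = max(level[i], level[j] + 1)
    return level

def group_wavefronts(level):
    """Sort rows into wavefronts: one group per level, in level order.
    All rows inside a group can be solved in parallel."""
    waves = {}
    for i, l in enumerate(level):
        waves.setdefault(l, []).append(i)
    return [waves[l] for l in sorted(waves)]

# Rows 1 and 2 both depend only on row 0, so they share a wavefront.
rows = [[], [0], [0], [1, 2]]
print(group_wavefronts(compute_levels(rows)))  # → [[0], [1, 2], [3]]
```

Processors would then be assigned rows from each wavefront round-robin, with a barrier between wavefronts replacing per-row busy-waiting.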
“…We have investigated two shared-memory algorithms that solve sparse triangular systems, the Busy-Wait algorithm and the Wavefront algorithm, the latter previously studied by Baxter et al. [1988] and Anderson [1988]. Both algorithms achieved good performance (efficiency > 70%) for large (> 2500 rows) sparse matrices.…”
Section: Introduction (mentioning, confidence: 99%)
“…On the other hand, for those shared-memory machines in which hardware synchronization is available and inexpensive, such as the Alliant FX-8, dynamic scheduling would have some disadvantages since it requires managing queues and explicitly generating busy-waits. Both approaches have been tested and compared in [21], where it was concluded that on the Encore Multimax dynamic scheduling is usually preferable except for problems with few synchronization points and a large amount of parallelism. In [54] a combination of prescheduling and dynamic scheduling was found to be the best approach on a Sequent Balance 21000.…”
Section: Algorithm: Forward Elimination with Level Scheduling (mentioning, confidence: 99%)
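The prescheduling-versus-dynamic-scheduling trade-off above can be illustrated with a small sketch: prescheduling fixes each processor's rows in advance (no runtime synchronization, but no load balancing), while dynamic scheduling lets processors claim rows as they become free (better balance, at the cost of synchronizing on a shared counter or queue). This is a hypothetical simulation for illustration only; the function names, and the greedy least-loaded stand-in for a shared work counter, are assumptions, not the mechanisms of the cited papers.

```python
def preschedule(rows, nprocs):
    """Static round-robin prescheduling: processor p's share is fixed
    up front, so claiming work needs no runtime synchronization."""
    return [rows[p::nprocs] for p in range(nprocs)]

def dynamic_schedule(rows, nprocs, cost):
    """Simulated dynamic scheduling: each row goes to whichever
    processor is currently least loaded, mimicking processors that
    grab the next row from a shared queue as they finish."""
    loads = [0] * nprocs
    assign = [[] for _ in range(nprocs)]
    for r in rows:
        p = loads.index(min(loads))  # the idle-soonest processor claims r
        assign[p].append(r)
        loads[p] += cost[r]
    return assign

# With one expensive row, prescheduling leaves processor 0 overloaded,
# while dynamic scheduling shifts the cheap rows to processor 1.
cost = {0: 5, 1: 1, 2: 1, 3: 1, 4: 1}
print(preschedule([0, 1, 2, 3, 4], 2))            # → [[0, 2, 4], [1, 3]]
print(dynamic_schedule([0, 1, 2, 3, 4], 2, cost)) # → [[0], [1, 2, 3, 4]]
```

The example shows why dynamic scheduling wins when row costs are uneven and synchronization is cheap, and why prescheduling wins when synchronization itself dominates.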
“…In [22,21] and [5,6] a number of additional experiments are presented to study the performance of level scheduling within the context of preconditioned conjugate gradient methods.…”
Section: Eiki-1 In (16) (mentioning, confidence: 99%)
“…For example, in the SPE5 problem, the self-executing solve requires 23.4 milliseconds, the prescheduled solve (in Table 3) required 29.0 milliseconds, and the doacross version of the solve took 45.0 milliseconds.…”
Section: Where Does the Time Go (mentioning, confidence: 99%)