Proceedings 16th Annual International Symposium on High Performance Computing Systems and Applications
DOI: 10.1109/hpcsa.2002.1019151
Parallel Gaussian elimination using OpenMP and MPI

Abstract: In this paper, we present a parallel algorithm for Gaussian elimination in both a shared-memory environment, using OpenMP, and a distributed-memory environment, using MPI. Parallel LU and Gaussian algorithms for linear systems have been studied extensively; the point of this paper is to present the results of examining various load-balancing schemes on both platforms. The results show an improvement in many cases over the default implementation.
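This page does not reproduce the paper's code, but as a rough illustration of the algorithm being parallelized, here is a serial Gaussian elimination with partial pivoting in Python. The row-update loop marked below is the part that OpenMP and MPI implementations typically distribute across threads or processes; the function name and structure are illustrative, not the authors' implementation.

```python
def gaussian_elimination(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting.

    The updates to rows k+1..n-1 at each step k are independent of one
    another, which is what shared-memory (OpenMP) and distributed-memory
    (MPI) parallelizations of this algorithm exploit.
    """
    n = len(A)
    # Work on copies so the caller's data is untouched.
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n):
        # Partial pivoting: bring the largest |A[i][k]| up to row k.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        # Eliminate column k from rows k+1..n-1.
        # (This loop is the parallel region in OpenMP/MPI versions.)
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Back substitution on the resulting upper triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x
```

For example, solving 2x + y = 3, x + 3y = 4 returns x = y = 1.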

Cited by 19 publications (10 citation statements)
References 4 publications
“…Let us say that we need to invert matrix A. If B is the inverse of A, then AB = I (13), where I is the identity matrix. If we can factor A into two matrices such that one is a lower triangular matrix and the other is an upper triangular matrix, then A = LU (14), where L is the lower triangular and U is the upper triangular matrix.…”
Section: B. Decomposition of the Matrix Problem
confidence: 99%
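The quoted derivation (AB = I with A = LU) can be sketched directly: factor A by Doolittle's method, then recover each column of the inverse B by one forward and one backward substitution against the corresponding column of I. A minimal pure-Python illustration, assuming A admits an LU factorization without pivoting (nonzero leading minors), as the quoted derivation implicitly does:

```python
def lu_decompose(A):
    """Doolittle LU factorization: A = L U, with L unit lower triangular
    and U upper triangular. Assumes no zero pivot is encountered."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def invert(A):
    """Compute B = A^{-1} column by column: for each unit vector e,
    solve L y = e (forward), then U x = y (backward), so that AB = I."""
    n = len(A)
    L, U = lu_decompose(A)
    B = [[0.0] * n for _ in range(n)]
    for col in range(n):
        e = [1.0 if i == col else 0.0 for i in range(n)]
        # Forward substitution: L y = e.
        y = [0.0] * n
        for i in range(n):
            y[i] = e[i] - sum(L[i][k] * y[k] for k in range(i))
        # Back substitution: U x = y.
        x = [0.0] * n
        for i in range(n - 1, -1, -1):
            x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
        for i in range(n):
            B[i][col] = x[i]
    return B
```

Multiplying A by the returned B reproduces the identity matrix to rounding error, which is exactly condition (13).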
“…In (15) and (16) [20,21]. Besides, a one-dimensional (1d) element used for links and prestressed anchors is defined in the software.…”
Section: Creep Model Zhu
confidence: 99%
“…The situation has changed since the arrival of a variety of high-performance computers and the advances in parallel computing techniques, that is, parallel algorithms and parallel platforms. Parallel algorithms on different platforms, that is, algorithms utilizing OpenMP and MPI, are studied in [13][14][15][16]. Recently, GPU high-performance computing has become popular.…”
Section: Introduction
confidence: 99%
“…Based on the results of examining various load-balancing schemes on both platforms, the authors show that in OpenMP, data can be made available to all processors at all times; in the MPI case, as the problem size n increases, the program displays an improvement in performance [1]. The Gauss elimination method without pivoting was introduced to explain the concepts in both OpenMP and MPI, where in MPI the communication capability between the nodes increases along with the floating-point speed [2]. Finite algorithms for solving systems of linear equations using the Gauss elimination method were designed for shared memory, for distributed memory, and for their combination.…”
Section: Literature Review
confidence: 99%
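The load-balancing problem these reviews allude to comes from the shrinking active submatrix: as elimination proceeds past row k, any process that owns only low-numbered rows runs out of work. A common remedy (sketched here as an illustration; this page does not reproduce the paper's exact schemes) is cyclic row distribution, where row i is assigned to process i mod p instead of a contiguous block:

```python
def block_rows(n, p, rank):
    """Contiguous block distribution: rank owns rows [rank*n/p, (rank+1)*n/p).
    Early ranks go idle once their rows have been eliminated."""
    return list(range(rank * n // p, (rank + 1) * n // p))

def cyclic_rows(n, p, rank):
    """Round-robin (cyclic) distribution: row i belongs to rank i % p,
    so every rank keeps roughly equal work at every elimination step."""
    return [i for i in range(n) if i % p == rank]

def active_work(rows, k):
    """Number of rows a rank still updates at elimination step k
    (only rows below the pivot row k are touched)."""
    return sum(1 for i in rows if i > k)
```

With n = 8 rows on p = 2 processes at step k = 3, the block scheme leaves rank 0 with no remaining rows while rank 1 updates four, whereas the cyclic scheme gives each rank two; that evening-out is the kind of improvement over a default distribution that the abstract reports.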