2004
DOI: 10.1007/978-3-540-30218-6_15
Minimizing Synchronization Overhead in the Implementation of MPI One-Sided Communication

Cited by 19 publications (8 citation statements)
References 9 publications
“…That is, the number of embedded MPI RMA synchronization calls is directly proportional to the number of MPI RMA communication calls. In fact, the MPI RMA synchronization operation adds substantial overhead [11]. Hence, this method will lead to significant performance loss for applications with a large number of RMA communication requests.…”
Section: Heat Conduction
confidence: 99%
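To make the quoted concern concrete, the sketch below is an illustrative example, not code from the cited paper; the window size, iteration count, and ring-neighbour target are assumptions. It contrasts an active-target pattern that issues an MPI_Win_fence pair around every MPI_Put with one that keeps all puts inside a single fence epoch, so the synchronization cost no longer grows with the number of communication calls.

```c
/* Illustrative sketch (assumed sizes and neighbour choice): per-operation
 * RMA synchronization versus one fence epoch covering many operations. */
#include <mpi.h>

#define N_UPDATES 1000

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double *buf;
    MPI_Win win;
    MPI_Win_allocate(N_UPDATES * sizeof(double), sizeof(double),
                     MPI_INFO_NULL, MPI_COMM_WORLD, &buf, &win);

    int target = (rank + 1) % size;
    double value = (double)rank;

    /* Costly pattern: one synchronization epoch per communication call,
     * so the number of fences grows with the number of puts. */
    for (int i = 0; i < N_UPDATES; i++) {
        MPI_Win_fence(0, win);
        MPI_Put(&value, 1, MPI_DOUBLE, target, i, 1, MPI_DOUBLE, win);
        MPI_Win_fence(0, win);
    }

    /* Cheaper pattern: amortize synchronization over many puts by
     * keeping all of them inside a single access epoch. */
    MPI_Win_fence(0, win);
    for (int i = 0; i < N_UPDATES; i++) {
        MPI_Put(&value, 1, MPI_DOUBLE, target, i, 1, MPI_DOUBLE, win);
    }
    MPI_Win_fence(0, win);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```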
“…Synchronous updates are extremely efficient in terms of execution speed, especially as they can utilise two-sided communication using MPI "send" and "receive" primitives. Asynchronous parallelisation schemes have to utilise one-sided communication primitives such as MPI "put" and "get", using the remote memory access mechanism, which, although slower, allows the cell state to be asked for or provided on demand, without the need to wait on some eventual update [19].…”
Section: Equivalence Of Sequential and Parallel Implementations
confidence: 99%
“…Synchronous updates are extremely efficient in terms of execution speed especially as they can utilise two-sided communication using MPI "send" and "receive" primitives. Asynchronous parallelisation schemes have to use one-sided communication primitives such as MPI "put" and "get", utilising the remote memory access mechanism, which, although slower, allows for the cell state to be asked for or provided on demand, without the need to wait on some eventual update [16].…”
Section: Equivalence Of Sequential and Parallel Implementations
confidence: 99%
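The on-demand access pattern described in these excerpts can be sketched with passive-target RMA. The example below is illustrative; the single-integer "cell state", the ring neighbour, and the use of a shared lock are assumptions, not details taken from the citing papers. A process reads a neighbour's state with MPI_Get inside an MPI_Win_lock/MPI_Win_unlock epoch, so the target process never has to post a matching send or otherwise participate in the exchange.

```c
/* Illustrative sketch (assumed single-cell state and neighbour choice):
 * passive-target one-sided access, where a process fetches a remote
 * cell state on demand instead of waiting for a matching send. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process exposes one integer of "cell state" in an RMA window. */
    int cell_state = rank * 10;
    MPI_Win win;
    MPI_Win_create(&cell_state, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    int neighbor = (rank + 1) % size;
    int remote_state = -1;

    /* Passive-target epoch: the target does not participate, so the reader
     * never waits on an explicit update from its neighbour. */
    MPI_Win_lock(MPI_LOCK_SHARED, neighbor, 0, win);
    MPI_Get(&remote_state, 1, MPI_INT, neighbor, 0, 1, MPI_INT, win);
    MPI_Win_unlock(neighbor, win);

    printf("rank %d read state %d from rank %d\n", rank, remote_state, neighbor);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```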