2013 International Conference on Parallel and Distributed Systems
DOI: 10.1109/icpads.2013.39

Real Asynchronous MPI Communication in Hybrid Codes through OpenMP Communication Tasks

Cited by 7 publications (10 citation statements). References 11 publications.
“…We therefore implemented a progress task (similar to Reference 32) which uses MPI_Testsome on the request manager's request queue to make progress on outstanding MPI requests. In line with the polling, the task is started prior to the first time step and reschedules itself.…”
Section: Methods (mentioning)
confidence: 99%
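The self-rescheduling progress task quoted above can be illustrated with standard MPI and OpenMP primitives. The sketch below is a minimal reconstruction, not the cited implementation; the request array, its size, and the stop flag are assumptions introduced for illustration.

```c
#include <mpi.h>
#include <omp.h>
#include <stdbool.h>

#define MAX_REQUESTS 128

/* Hypothetical request queue: outstanding nonblocking MPI requests
 * registered elsewhere in the application (assumption, not the cited code). */
static MPI_Request pending[MAX_REQUESTS];
static int num_pending = 0;
static volatile bool done = false;

/* Progress task: tests all outstanding requests without blocking,
 * then reschedules itself so MPI progress is made while worker
 * threads execute compute tasks. */
static void progress_task(void)
{
    int outcount;
    int indices[MAX_REQUESTS];

    if (num_pending > 0) {
        /* MPI_Testsome completes whatever requests it can and returns immediately. */
        MPI_Testsome(num_pending, pending, &outcount, indices,
                     MPI_STATUSES_IGNORE);
        /* Completed entries become MPI_REQUEST_NULL; queue compaction omitted. */
    }

    if (!done) {
        /* Re-enqueue the progress task so polling continues. */
        #pragma omp task
        progress_task();
    }
}

int main(int argc, char **argv)
{
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    #pragma omp parallel
    #pragma omp single
    {
        /* Started once, prior to the first time step, as in the excerpt. */
        #pragma omp task
        progress_task();

        /* ... create compute and communication tasks here ... */

        done = true;   /* simplified: normally set after the last time step */
    }

    MPI_Finalize();
    return 0;
}
```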
“…Bamboo [24] is a source-to-source translator that translates an MPI C program into a data-driven form that overlaps communication with computation. Buettner et al. [7] tried to address the issue of communication-computation overlap by extending the OpenMP runtime to include communication tasks. HCMPI (Habanero-C MPI) [9] integrates the Habanero-C dynamic task-parallel programming model with the MPI message-passing interface.…”
Section: Related Work (mentioning)
confidence: 99%
“…Asynchronous communication progression in hybrid MPI+OpenMP task programming is discussed by David Buettner et al. in [2]. They proposed an OpenMP extension to mark tasks that contain MPI communications.…”
Section: Related Work (mentioning)
confidence: 99%
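The exact marking syntax proposed by Buettner et al. is not reproduced in the excerpt. The sketch below only illustrates, with standard OpenMP, the baseline pattern such a marking targets: a plain task that carries a blocking MPI exchange and therefore occupies a worker thread for the whole wait. The buffer, tag, and partner rank are assumptions for illustration.

```c
#include <mpi.h>
#include <omp.h>

/* Baseline "communication task" without runtime support: the MPI_Wait
 * below blocks an OpenMP worker thread until the message arrives, which
 * is the overlap problem a dedicated communication-task marking lets
 * the runtime avoid. */
void post_halo_exchange(double *recv_buf, int count, int partner, MPI_Comm comm)
{
    #pragma omp task firstprivate(recv_buf, count, partner, comm)
    {
        MPI_Request req;
        MPI_Irecv(recv_buf, count, MPI_DOUBLE, partner, /*tag=*/0, comm, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);   /* blocks this worker thread */
        /* ... recv_buf is then consumed by dependent compute tasks ... */
    }
}
```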
“…Each MPI process has its own OpenMP scheduler, which executes tasks according to their precedence constraints (expressed through the depend clause) and their priorities. To address the loss-of-cores issue introduced by blocking MPI communication calls, and to keep asynchronous communication progressing, we propose a mix of User Level Threads (ULT), TAMPI, and MPI_Detach [2,9,16,18]. On the MPI runtime side, whenever a thread is about to block, it injects a communication progression polling function inside OpenMP, to be called at every scheduling point, as proposed by D. Buettner et al.…”
Section: Interoperation Between MPI and OpenMP Runtimes (mentioning)
confidence: 99%
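The progression hook described in this excerpt lives inside the MPI and OpenMP runtimes and is not shown here. As a point of comparison only, the sketch below uses standard MPI and OpenMP to avoid losing a core to a blocking wait: it replaces MPI_Wait with an MPI_Test/taskyield loop so the worker thread can pick up other ready tasks while the communication completes. This is an illustrative alternative, not the cited mechanism.

```c
#include <mpi.h>
#include <omp.h>

/* Illustrative alternative to a blocking MPI_Wait inside a task:
 * poll with MPI_Test and yield, so the worker thread can be switched
 * to another runnable task at each scheduling point instead of idling. */
void wait_without_blocking_core(MPI_Request *req)
{
    int completed = 0;
    while (!completed) {
        MPI_Test(req, &completed, MPI_STATUS_IGNORE);
        if (!completed) {
            /* taskyield is a task scheduling point: the runtime may
             * run other tasks on this thread before resuming the loop. */
            #pragma omp taskyield
        }
    }
}
```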