2021
DOI: 10.1007/978-3-030-85262-7_14
Communication-Aware Task Scheduling Strategy in Hybrid MPI+OpenMP Applications

Abstract: While task-based programming, such as OpenMP, is a promising solution for exploiting large HPC compute nodes, it has to be mixed with data communications such as MPI. However, performance, and even thread progression, may depend on the underlying runtime implementations. In this paper, we focus on enhancing application performance when an OpenMP task blocks inside MPI communications. This technique requires no additional effort from application developers. It relies on an online task re-ordering strategy that…

Cited by 6 publications (4 citation statements)
References 20 publications
“…We assemble applications (LULESH, HPCG, Cholesky) with different parallel characteristics (computation, communication) that will lead to three conclusions. On considered applications, MPI communications are nested into OpenMP task region as permitted by recent published results on MPI interoperability [2,4,10,25,26]. We performed distributed performance evaluations on AMD EPYC 7763 64-core CPU, interconnected with Atos BXI V2 running Open MPI 4.1.4.…”
Section: Impact On Distributed Executionmentioning
confidence: 99%
“…It enables the addition of asynchronous callbacks on communication completion but implies heavy code restructuration. TAMPI [22] and MPC [20] propose to transform synchronous to asynchronous MPI calls finely nested into OpenMP tasks. These works tackle the interoperability issue between MPI and OpenMP tasks but do not consider heterogeneous applications.…”
Section: Related Workmentioning
confidence: 99%
“…It also improves standard OpenMP task capabilities with fibers through Linux <ucontext.h> interface for cooperative scheduling. Previous work has been done to finely nest MPI communications into OpenMP(task) [20]. This suspend/resume task mechanism allows generating asynchronism that can be usefully exploited by an application to overlap MPI communications with computations.…”
Section: Openmp Target In Mpcmentioning
confidence: 99%