2022
DOI: 10.1016/j.parco.2022.102969

Accelerating communication for parallel programming models on GPU systems

Cited by 1 publication (1 citation statement)
References 2 publications
“…Explicit multi-node GPU programming using CUDA and MPI can benefit from CUDA-aware MPI implementations, which allow GPU memory pointers to be passed directly for GPU-GPU communication across nodes. CUDA inter-process communication (IPC) enables GPU-GPU communication that crosses process boundaries [14], with optimizations for intra-node MPI communication on multi-GPU nodes shown in [15].…”
Section: Related Work
Confidence: 99%