2022
DOI: 10.48550/arxiv.2207.12127
Preprint

Quantifying Overheads in Charm++ and HPX using Task Bench

Cited by 2 publications (2 citation statements). References 0 publications.
“…Like HPX, Charm++ also provides facilities for distributed programming (for which, at present, C++ provides no standard). For a comparison of Charm++ and HPX with OpenMP and MPI (a widely accepted standard for distributed parallel programming) using Task Bench, we refer to [10]. Other notable AMTs are: Chapel [1], X10 [2], and UPC++ [12].…”
Section: Related Work
confidence: 99%
“…Here, the overhead of using HPX is negligible. For more details on the overheads of HPX and Charm++, we refer to [10]. For HPX's parallel algorithms using hpx::for_each (b), AMD performed better, while Intel and Arm were around one order of magnitude lower.…”
Section: Performance Comparison
confidence: 99%
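
The statement above refers to HPX's parallel algorithms. For reference, a minimal sketch of the hpx::for_each call pattern is shown below, assuming a recent HPX release with its documented headers and the hpx_main.hpp convenience entry point; the data size and the per-element kernel are illustrative placeholders, not taken from the cited measurements.

```cpp
// Minimal sketch of HPX's parallel for_each, assuming a recent HPX release.
// The data size and kernel are illustrative placeholders only.
#include <hpx/algorithm.hpp>   // hpx::for_each
#include <hpx/execution.hpp>   // hpx::execution::par
#include <hpx/hpx_main.hpp>    // runs plain main() on the HPX runtime
#include <vector>

int main()
{
    std::vector<double> data(1'000'000, 1.0);

    // Apply a small per-element kernel with the parallel execution policy;
    // HPX splits the range into lightweight tasks scheduled across cores.
    hpx::for_each(hpx::execution::par, data.begin(), data.end(),
                  [](double& x) { x = 2.0 * x + 1.0; });

    return 0;
}
```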