2013
DOI: 10.1002/cpe.3186
A comparative performance study of common and popular task‐centric programming frameworks

Abstract: Programmers today face a bewildering array of parallel programming models and tools, making it difficult to choose an appropriate one for each application. An increasingly popular programming model supporting structured parallel programming patterns in a portable and composable manner is the task-centric programming model. In this study, we compare several popular task-centric programming frameworks, including Cilk Plus, Threading Building Blocks, and various implementations of OpenMP 3.0. We have analyzed the…

Cited by 11 publications (6 citation statements)
References 38 publications
“…Also, other computer platforms may not exhibit the pronounced non-uniform memory access (NUMA) effects common to the AMD Magny-Cours architecture, and we may expect the parallel performance of SpAMM omp on those platforms to show improved scaling. Finally, it is known that the runtime has significant impact on the performance of SpAMM-like workloads [117], and other programming frameworks might lead to improved parallel scaling. These satisfactory results follow from over-decomposition of the three-dimensional convolution space, relative to conventional methods that involve decomposition in one or two dimensions, and from runtime systems that support the irregular task parallelism inherent in the generalized N-Body solvers framework.…”
Section: Discussion
confidence: 99%
“…A benchmarking study of several versions of the three shared memory systems mentioned above, which includes an investigation of task granularity, is presented in [6]. The authors find that some parallel systems tolerate finer-grained tasks better than others, and that this also depends on the type of workload being benchmarked.…”
Section: Tiny Tasks In Shared Memory Clusters
confidence: 99%
“…Related measurements have been done before, either to develop practical guidance for improving cluster performance [2], [3], or in evaluating distributed schedulers that could support larger degrees of parallelism [5]. In [6] the effects of task granularity, and flat vs recursive task spawning, on performance are investigated for several shared-memory task-centric parallel systems. In our case we focus on statistically principled experiments that use tasks with service times drawn from controlled distributions.…”
Section: Introduction
confidence: 99%