2018
DOI: 10.1007/s11265-018-1356-9
Static Compiler Analyses for Application-specific Optimization of Task-Parallel Runtime Systems

Abstract: Achieving high performance in task-parallel runtime systems, especially with high degrees of parallelism and fine-grained tasks, requires tuning a large variety of behavioral parameters according to program characteristics. In the current state of the art, this tuning is generally performed in one of two ways: either by a group of experts who derive a single setup which achieves good-but not optimal-performance across a wide variety of use cases, or by monitoring a system's behavior at runtime and responding t…

Cited by 3 publications (5 citation statements)
References 23 publications
“…Peter et al. [3] offered a series of new static compiler analyses aimed at identifying program properties that influence the optimal settings for a task-parallel runtime system. Examples of such properties include the degree of parallelism in task spawning, the granularity of individual tasks, the memory footprint of the closures required for task parameters, and an estimate of the stack size required by each task.…”
Section: Related Work
confidence: 99%
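The closure-memory analysis mentioned in the statement above can be sketched roughly as follows; the type names, sizes, and the `closure_size` function are illustrative assumptions for this sketch, not taken from the cited compiler.

```python
# Hypothetical sketch: a static analysis could estimate the closure
# memory a task needs by summing the sizes of its captured parameters.
# The size table and alignment are illustrative assumptions.

SIZEOF = {"int32": 4, "int64": 8, "double": 8, "ptr": 8}

def closure_size(captured_params, align=8):
    """Sum the sizes of the captured parameter types, then round the
    total up to the closure allocator's alignment boundary."""
    total = sum(SIZEOF[t] for t in captured_params)
    # Ceiling division to the next multiple of `align`.
    return -(-total // align) * align

# A task capturing two 64-bit integers and a pointer needs 24 bytes.
print(closure_size(["int64", "int64", "ptr"]))  # → 24
```

With such an estimate available at compile time, a runtime system could pre-size its closure allocator per task type instead of using one conservative default.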
“…As can be observed in Table 4, several works predict execution time [13]- [15], [17], [31], [33], [63], while others estimate memory consumption [16], [18], [29]. In [13]- [18], [63], the prediction is done for big data workloads, whereas in [29], [31], [33], the considered applications are routine ones. In [14], a performance prediction framework, Ernest, is proposed, which can predict the execution time on a given hardware configuration for a job and its input.…”
Section: Related Work
confidence: 99%
“…Moreover, our instrumentation technique is simpler than those of [31], [33], as only the list of reachable functions and basic blocks is generated. For predicting memory consumption, [29] estimates the stack frame size of a given task using static compiler analysis similar to our approach. As Table 4 shows, no existing work is found for estimating the heterogeneity ratio.…”
Section: Related Work
confidence: 99%