2019
DOI: 10.48550/arXiv.1908.05790
Preprint

Task Bench: A Parameterized Benchmark for Evaluating Parallel Runtime Performance

Elliott Slaughter, Wei Wu, Yuankun Fu, et al.

Abstract: We present Task Bench, a parameterized benchmark designed to explore the performance of parallel and distributed programming systems under a variety of application scenarios. Task Bench lowers the barrier to benchmarking multiple programming systems by making the implementation for a given system orthogonal to the benchmarks themselves: every benchmark constructed with Task Bench runs on every Task Bench implementation. Furthermore, Task Bench's parameterization enables a wide variety of benchmark scenarios th…


Cited by 1 publication (1 citation statement), published in 2020. References 20 publications.
“…The author of DASK has performed a series of DASK scaling microbenchmarks [17]. In [18], the authors benchmark several distributed task systems on a set of fundamental task graph shapes to compare their relative overhead and scaling properties. They find that DASK stops scaling relatively quickly if the task granularity is smaller than ca.…”
Section: Related Work (mentioning, confidence: 99%)