2017 IEEE International Conference on Big Data (Big Data)
DOI: 10.1109/bigdata.2017.8257921

Jointly optimizing task granularity and concurrency for in-memory MapReduce frameworks

Cited by 7 publications (5 citation statements)
References 19 publications
“…Meanwhile, Spark continues to generate small heap objects, which occasionally get promoted to the old-generation heap, and requests extra space there. This triggers frequent major GCs [5].…”
Section: Background and Overview
confidence: 99%
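The excerpt above refers to JVM heap behavior in Spark executors: small objects that survive long enough get promoted to the old generation, and once that region runs short of space, major GCs fire repeatedly. The sketch below is a minimal, hypothetical way to observe this on stock Spark; the memory size and GC flags are assumptions, not settings taken from the cited papers.

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch (not from the cited papers): enable executor GC logging so
// old-generation promotions and major GC frequency can be inspected.
// The flags below are the classic JDK 8 options; newer JDKs use -Xlog:gc*.
val spark = SparkSession.builder()
  .appName("gc-observation-sketch")
  .config("spark.executor.memory", "4g") // assumed executor heap size
  .config("spark.executor.extraJavaOptions",
    "-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:NewRatio=2")
  .getOrCreate()
```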
“…Comparison with WASP scheduler. We compare espill with WASP, a state-of-the-art Spark task scheduler [5]. WASP jointly optimizes both task granularity and parallelism based on workload characteristics.…”
Section: Performance Analysis
confidence: 99%
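For context on what “task granularity and parallelism” mean in plain Spark terms (independent of WASP or espill, whose internals are not shown here), the sketch below uses only stock Spark knobs; the partition counts, core settings, and input path are illustrative assumptions.

```scala
import org.apache.spark.sql.SparkSession

// Illustrative sketch only: in stock Spark, task granularity is governed by
// the number of partitions and concurrency by the cores per executor.
// A scheduler such as WASP tunes both jointly; this code just shows the knobs.
val spark = SparkSession.builder()
  .appName("granularity-and-concurrency-sketch")
  .config("spark.executor.cores", "4")        // tasks that can run concurrently per executor
  .config("spark.default.parallelism", "64")  // default number of tasks (granularity)
  .getOrCreate()

val lines = spark.sparkContext.textFile("hdfs:///data/input") // hypothetical input path
val coarse = lines.coalesce(16)     // fewer, larger tasks
val fine   = lines.repartition(256) // more, smaller tasks
```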
“…Spark provides rich operators and uses them to organize computational logic, but research on Spark operators is still relatively rare [6]. One prior work studied the input-output ratio of different operators to estimate the size of intermediate data in the computing process [7].…”
confidence: 99%
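The idea cited there, estimating intermediate data size from a per-operator input-output ratio, amounts to a simple multiplication. A hypothetical sketch follows; the operator names and ratio values are made-up placeholders, not measurements from the cited work.

```scala
// Hypothetical sketch of intermediate-size estimation from input-output ratios.
// The ratios below are placeholders, not measured values from [7].
val ioRatio: Map[String, Double] = Map(
  "map"    -> 1.0, // output roughly the same size as input
  "filter" -> 0.4, // output shrinks
  "join"   -> 1.8  // output grows
)

def estimateOutputBytes(operator: String, inputBytes: Long): Long =
  (inputBytes * ioRatio.getOrElse(operator, 1.0)).toLong

// Example: a 10 GiB input to a filter is estimated at about 4 GiB of output.
val estimated = estimateOutputBytes("filter", 10L * 1024 * 1024 * 1024)
```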