2019 15th International Conference on Network and Service Management (CNSM)
DOI: 10.23919/cnsm46954.2019.9012752

Quick Execution Time Predictions for Spark Applications

Cited by 22 publications (17 citation statements) · References 9 publications

“…There have been several efforts on modeling the performance of Spark applications [12], [16]–[22] in the absence of interference. These approaches use historical executions of an application to derive a model that can predict the application's execution time under various configurations, e.g., different input data sizes and core allocations.…”
Section: Related Work
confidence: 99%
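The approach these works describe lends itself to a simple regression formulation. Below is a minimal sketch, assuming a scikit-learn linear model over two features (input data size and allocated cores); the model form and the historical measurements are hypothetical, invented for illustration, and are not taken from the cited papers.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical historical executions of one application:
# columns are (input size in GB, allocated cores); y is runtime in seconds.
X = np.array([[10, 4], [10, 8], [20, 4], [20, 8], [40, 8], [40, 16]])
y = np.array([120.0, 70.0, 230.0, 125.0, 240.0, 135.0])

# Assumed model form: time ~ a * (size / cores) + b * size + c,
# capturing parallelizable work plus a size-dependent serial component.
features = np.column_stack([X[:, 0] / X[:, 1], X[:, 0]])
model = LinearRegression().fit(features, y)

# Predict execution time for an unseen configuration: 30 GB on 12 cores.
query = np.array([[30 / 12, 30]])
print(f"predicted runtime: {model.predict(query)[0]:.0f}s")
```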
“…A stage consists of a set of tasks, with each task applying the operations of the stage to one partition of the data. Spark stages may be variable, i.e., exhibiting an increase in the number of tasks as input data grows, or they may be constant, with the same number of tasks irrespective of input data size [12]. A task is executed by one of the executors allocated by Spark to that stage.…”
Section: Introduction
confidence: 99%
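The variable/constant stage distinction can be observed directly in Spark. A minimal PySpark sketch, with made-up data and partition counts: the first stage's task count tracks the number of input partitions, while an explicit repartition fixes the task count of the following stage regardless of input size.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("stage-demo").getOrCreate()
sc = spark.sparkContext

# "Variable" stage: one task per input partition, so the task count
# grows as the input (and hence the number of partitions) grows.
rdd = sc.parallelize(range(1_000_000), numSlices=100)  # 100 tasks here
pairs = rdd.map(lambda x: (x % 10, 1)).reduceByKey(lambda a, b: a + b)

# "Constant" stage: repartitioning to a fixed partition count means the
# next stage always runs exactly 8 tasks, irrespective of input size.
fixed = pairs.repartition(8)
print(fixed.count())

spark.stop()
```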
“…These workload and system parameters are discussed in Section 4.1.2 and have been observed by other researchers to have a significant effect on computation time, speedup, and efficiency [58]. The selection of the appropriate number of worker nodes and the number of executor cores per worker node heavily influences the performance of the Filter method.…”
Section: Performance Evaluation of the Filter Methods for Use Case-1
confidence: 99%
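For concreteness, worker-node and executor-core choices map onto standard Spark configuration properties. A hedged sketch, assuming a YARN-style deployment with one executor per worker node; the application name and the specific values are illustrative, not taken from the cited evaluation.

```python
from pyspark.sql import SparkSession

# Hypothetical allocation: 8 executors with 5 cores each (40 cores total).
spark = (
    SparkSession.builder
    .appName("filter-use-case")
    .config("spark.executor.instances", "8")  # one executor per worker node
    .config("spark.executor.cores", "5")      # cores per executor
    .config("spark.executor.memory", "8g")    # illustrative memory setting
    .getOrCreate()
)
```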
“…The numbers of executor cores used in the experiments are 1, 2, 4, 6, 8, and 12, with the default value set to 8. The research presented in [58] used up to 40 executor cores, with 5 executor cores per worker node. The master node of Spark uses 1 core.…”
confidence: 99%
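A sweep over those core counts is straightforward to script. A sketch, assuming a hypothetical application script filter_job.py and timing each run by wall clock; the cited experiments may have measured runtime differently.

```python
import subprocess
import time

CORE_COUNTS = [1, 2, 4, 6, 8, 12]  # values used in the cited experiments

# Submit the same job once per executor-core setting and record runtime,
# so speedup and efficiency can be computed afterwards.
for cores in CORE_COUNTS:
    start = time.time()
    subprocess.run(
        ["spark-submit",
         "--conf", f"spark.executor.cores={cores}",
         "filter_job.py"],  # hypothetical application script
        check=True,
    )
    print(f"{cores} cores -> {time.time() - start:.1f}s")
```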