2018
DOI: 10.1007/978-3-319-77398-8_10
DJSB: Dynamic Job Scheduling Benchmark

Abstract: High-performance computing (HPC) systems are very large and powerful systems whose main goal is to achieve maximum performance for parallel jobs. Many dynamic factors influence performance, which makes this goal a non-trivial task. To our knowledge, there is no standard tool to automate performance evaluation by comparing different configurations and helping system administrators select the best scheduling policy or the best job scheduler. This paper presents the Dynamic Job Sc…

Cited by 5 publications (3 citation statements)
References 12 publications
“…Cera [9] implements malleability based on dynamic CPUSETs using MPI and a production resource manager. This approach is similar to how we use malleability, but in our case we do not oversubscribe MPI processes, because we demonstrated that it can degrade applications' performance [23], and we integrate with shared-memory programming models for better performance. While supporting MPI for multi-node applications, our approach uses the DROM interface [5], which allows malleability within compute nodes by changing the number of threads that OpenMP [30] or OmpSs [6] applications use.…”
Section: Related Work
confidence: 99%
“…A similar approach was presented by [14], based on dynamically changing the operating system CPUSETs for MPI processes, but in that case there was no integration with the programming model. This approach is equivalent to oversubscription of resources, i.e., more than one process running on the same core, which in general has a negative impact on applications' performance, as demonstrated in [26]. In our integration, we used the OpenMP/OmpSs programming models to adapt the number of threads to the change in the number of computing resources.…”
Section: Related Work
confidence: 99%
“…methodologies for evaluating the efficiency and effectiveness of a job scheduler, which we can separate into benchmarks and simulations. Benchmarks assume a real run of workloads on a cluster, with the purpose of evaluating well-known system metrics [10] or specific aspects of the system that the administrator needs to optimize [11], such as the effect of dynamic job scheduling in the context of malleable jobs. However, it is not always possible to stop a production machine to perform this type of evaluation, so simulations are usually more convenient and practical.…”
Section: Related Work
confidence: 99%