2016 IEEE International Parallel and Distributed Processing Symposium (IPDPS) 2016
DOI: 10.1109/ipdps.2016.73
Mystic: Predictive Scheduling for GPU Based Cloud Servers Using Machine Learning

Cited by 48 publications (17 citation statements)
References 23 publications
“…Some works use collaborative filtering to colocate tasks in clouds by estimating application interference [30]. Others are closer to the application level and use binary classification to distinguish benign memory faults from application errors in order to execute recovery algorithms (see [31] for instance).…”
Section: Data-aware Resource Management (citation type: mentioning)
confidence: 99%
“…Chen et al [7] classify and predict the duration of different GPU tasks and provide QoS support to the concurrent GPU applications by resource reservation. Ukidave et al [40] exploit machine learning to identify the similarities between the arriving kernels and the running kernels and use this technique to avoid QoS violations in a GPU-equipped cluster. Zhang et al [8] propose a runtime system that exploits the newly added spatial multitasking feature in a GPU and raises the accelerator utilization while achieving the latency targets for user-facing services.…”
Section: Related Work (citation type: mentioning)
confidence: 99%
“…Chen et al [19] propose a task duration predictor and a task reordering mechanism based on the predictions to guarantee QoS. Ukidave et al [12] present an interference-aware mechanism for co-scheduling on GPUs based on machine learning to predict whether kernels can share a GPU efficiently. Wen et al [20] propose a graph-based algorithm to schedule kernels in pairs.…”
Section: Related Work (citation type: mentioning)
confidence: 99%
“…We propose a novel graph-based preemptive co-scheduling solution that defines the kernel submission order with the aim of reducing the number of preemptions. The idea is to exploit the kernel interference profiles provided by previous works' analyses of how the kernels' resource usage impacts the interference in their co-execution [11], [12]. More formally, we deal with the following problem: given a co-scheduling speedup matrix S, where S_{i,j} is the speed at which kernel i makes progress when running with kernel j, and the duration of each kernel, what is the best way to co-schedule them in order to minimize their makespan?…”
Section: Introduction (citation type: mentioning)
confidence: 99%
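The makespan-minimization problem quoted above can be illustrated with a small brute-force search. The sketch below is not the cited graph-based algorithm; it is a minimal illustration that assumes kernels are grouped into disjoint pairs that run one pair at a time on a single GPU, so a pair (i, j) finishes after max(d_i / S_{i,j}, d_j / S_{j,i}) and the makespan is the sum of pair runtimes. All function names and the example matrix are hypothetical.

```python
def pair_runtime(i, j, dur, S):
    # Time for kernels i and j to both finish when co-scheduled:
    # kernel i progresses at fraction S[i][j] of its solo speed
    # (and j at S[j][i]), so each needs dur/speed time units.
    return max(dur[i] / S[i][j], dur[j] / S[j][i])

def best_pairing(dur, S):
    # Brute-force search over all ways to split an even number of
    # kernels into disjoint co-running pairs, with pairs executing
    # one after another on a single GPU (makespan = sum of pair times).
    # Exponential, so feasible only for small n; returns (makespan, pairs).
    best = (float("inf"), None)

    def split(remaining, pairs, total):
        nonlocal best
        if not remaining:
            if total < best[0]:
                best = (total, pairs)
            return
        i = remaining[0]
        for j in remaining[1:]:
            rest = [k for k in remaining[1:] if k != j]
            split(rest, pairs + [(i, j)], total + pair_runtime(i, j, dur, S))

    split(list(range(len(dur))), [], 0.0)
    return best
```

For example, with durations [4, 2, 6, 3] and a 4x4 speedup matrix, the search compares the three possible pairings and keeps the one whose summed pair runtimes is smallest. A real scheduler would replace the exhaustive search with the graph-based heuristic the authors describe, since the number of pairings grows factorially.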