2022 IEEE International Parallel and Distributed Processing Symposium (IPDPS)
DOI: 10.1109/ipdps53621.2022.00078

Compiler-Directed Incremental Checkpointing for Low Latency GPU Preemption

Cited by 1 publication (2 citation statements)
References 27 publications
“…This request is first received by the VM's GPU driver, which places it in a waiting queue (1). As GPU requests often depend on others, these dependencies must be resolved before the request attains the "Submitted" state (2). Once all preceding requests in the queue have been executed, the guest OS's GPU driver removes the request from its waiting queue and forwards it to the vGPU for execution (3).…”
Section: Data Model
confidence: 99%
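
A minimal sketch of that three-step request lifecycle, written as a toy Python model. All names here (GuestGpuDriver, Request, pump, and so on) are invented for illustration and do not come from the cited paper or any real driver API; the point is only the state transitions Waiting -> Submitted -> Executing under FIFO order.

from collections import deque
from enum import Enum, auto

class State(Enum):
    WAITING = auto()    # (1) held in the guest driver's waiting queue
    SUBMITTED = auto()  # (2) all dependencies resolved
    EXECUTING = auto()  # (3) forwarded to the vGPU

class Request:
    def __init__(self, rid, deps=()):
        self.rid = rid
        self.deps = set(deps)      # IDs of requests this one depends on
        self.state = State.WAITING

class VGpu:
    def execute(self, req):
        print(f"vGPU executing {req.rid}")

class GuestGpuDriver:
    def __init__(self, vgpu):
        self.vgpu = vgpu
        self.queue = deque()       # FIFO waiting queue
        self.done = set()          # IDs of completed requests

    def receive(self, req):
        # (1) A new request is first placed in the waiting queue.
        self.queue.append(req)

    def pump(self):
        # (2) Promote requests whose dependencies have all completed.
        for req in self.queue:
            if req.state is State.WAITING and req.deps <= self.done:
                req.state = State.SUBMITTED
        # (3) Dequeue from the head only once every preceding request
        # has executed, then forward the request to the vGPU.
        while self.queue and self.queue[0].state is State.SUBMITTED:
            req = self.queue.popleft()
            req.state = State.EXECUTING
            self.vgpu.execute(req)
            self.done.add(req.rid)

driver = GuestGpuDriver(VGpu())
a, b = Request("a"), Request("b", deps={"a"})
driver.receive(a)
driver.receive(b)
driver.pump()  # runs "a"; "b" still waits on "a"
driver.pump()  # "a" is done, so "b" is submitted and runs
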
“…Third, most GPU designs lack inherent sharing mechanisms: a GPU process gains exclusive access to its resources, preventing other processes from preempting it. In addition, several studies have shown that the overhead of process preemption is substantially higher on GPUs than on CPUs [2,3]. This increased overhead stems largely from the far greater number of cores and the larger context state on GPUs.…”
Section: Introduction
confidence: 99%
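
To see why context state makes GPU preemption so much costlier than CPU preemption, a back-of-envelope estimate can help. Every figure below (SM count, per-SM register file and shared memory sizes, memory bandwidth) is an illustrative assumption chosen for a plausible modern GPU, not a value taken from the cited papers.

# Rough estimate of the time to save a full GPU context to
# device memory. All constants are illustrative assumptions.
NUM_SMS = 80                    # streaming multiprocessors
REG_FILE_PER_SM = 256 * 1024    # bytes of register file per SM
SHMEM_PER_SM = 96 * 1024        # bytes of shared memory per SM
BANDWIDTH = 500e9               # sustained bytes/s to device memory

context_bytes = NUM_SMS * (REG_FILE_PER_SM + SHMEM_PER_SM)
save_time_us = context_bytes / BANDWIDTH * 1e6
print(f"context ~ {context_bytes / 2**20:.1f} MiB, "
      f"save ~ {save_time_us:.0f} us")

Under these assumptions the context is tens of MiB and takes tens of microseconds just to copy out, versus the few KiB of register state in a CPU context switch; this gap is what motivates techniques such as the incremental checkpointing proposed in the paper above.
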