2012 24th Euromicro Conference on Real-Time Systems
DOI: 10.1109/ecrts.2012.15

Supporting Preemptive Task Executions and Memory Copies in GPGPUs

Abstract: GPGPUs (General Purpose Graphic Processing Units) provide massive computational power. However, applying GPGPU technology to real-time computing is challenging due to the non-preemptive nature of GPGPUs. In particular, a job running on a GPGPU, or a data copy between a GPGPU and the CPU, is non-preemptive. As a result, a high-priority job arriving in the middle of a low-priority job execution or memory copy suffers from priority inversion. To address the problem, we present a new lightweight approach to supporting pree…
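The abstract's core idea of making long, otherwise non-preemptive memory copies interruptible can be illustrated with a small sketch: split the copy into fixed-size chunks and insert a preemption point between chunks. This is an illustrative stand-in (the chunked copy would correspond to successive asynchronous GPU copy calls in a real runtime), not the paper's actual implementation; the function and parameter names are hypothetical.

```python
def chunked_copy(dst, src, chunk_size, pending_high_prio):
    """Copy src into dst in chunk_size pieces.

    Each chunk is copied non-preemptively, but between chunks the
    scheduler may divert to a pending higher-priority request.
    Returns the number of preemption points reached.
    (Hypothetical sketch; in a real GPU runtime each chunk would be
    one asynchronous device copy.)
    """
    n = len(src)
    offset = 0
    preemption_points = 0
    while offset < n:
        end = min(offset + chunk_size, n)
        dst[offset:end] = src[offset:end]  # one non-preemptive chunk
        offset = end
        if pending_high_prio():            # preemption point
            preemption_points += 1
    return preemption_points

src = bytearray(range(256))
dst = bytearray(256)
points = chunked_copy(dst, src, 64, lambda: False)
```

The smaller the chunk size, the shorter the worst-case blocking a high-priority request can suffer, at the cost of more per-chunk overhead.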


Cited by 78 publications (39 citation statements)
References 17 publications
“…The ways in which GPUs are managed and scheduled differ greatly from CPUs. This has spurred research on supporting GPUs in real-time systems [1,2,3,4,5,6,7,8,9,10]. Still, few have explored multiprocessor, multi-GPU real-time systems.…”
Section: Introduction
confidence: 99%
“…In its absence, tricks, like cutting longer requests into smaller pieces, have been shown to enhance GPU interactivity, at least for mutually cooperative applications [6]. True hardware preemption support would save state and safely context-switch from an ongoing request to the next in the GPU queue.…”
Section: Hardware Preemption Support
confidence: 99%
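The "cutting longer requests into smaller pieces" trick quoted above can be sketched as a slice-granularity scheduler: each slice runs non-preemptively, but the highest-priority ready job is re-selected at every slice boundary, so a newly arrived high-priority job waits at most one slice. This is a hypothetical illustration of the kernel-splitting idea under assumed names, not the cited papers' actual algorithms.

```python
import heapq

def schedule(initial, arrivals, horizon=100):
    """Slice-granularity scheduling sketch.

    initial:  list of (priority, name, n_slices) ready at time 0.
    arrivals: dict mapping arrival time -> list of such jobs.
    Lower priority number = higher priority. Each slice is
    non-preemptive; preemption happens only at slice boundaries.
    Returns the per-slice execution trace (job names).
    """
    ready = list(initial)
    heapq.heapify(ready)
    trace, t = [], 0
    while (ready or arrivals) and t < horizon:
        for job in arrivals.pop(t, []):   # admit new jobs at boundary
            heapq.heappush(ready, job)
        if not ready:
            t += 1
            continue
        prio, name, rem = heapq.heappop(ready)
        trace.append(name)                # run one slice non-preemptively
        t += 1
        if rem > 1:                       # requeue unfinished job
            heapq.heappush(ready, (prio, name, rem - 1))
    return trace

# Low-priority job L (4 slices) starts first; high-priority H
# (2 slices) arrives at t=1 and takes over at the next boundary.
trace = schedule([(5, "L", 4)], {1: [(1, "H", 2)]})
# trace == ["L", "H", "H", "L", "L", "L"]
```

The high-priority job's blocking is bounded by one slice length, which is exactly the trade-off the splitting approaches exploit: finer slices mean tighter response-time bounds but more launch overhead.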
“…This situation can cause unfairness between multiple kernels and significantly deteriorate the system responsiveness. Existing GPU scheduling methods address this issue by either killing a long-running kernel [Menychtas et al. 2014] or providing a kernel split tool [Basaran and Kang 2012; Zhou et al. 2015; Margiolas and O'Boyle 2016]. The Pascal architecture allows GPU kernels to be interrupted at instruction-level granularity by saving and restoring each GPU context to and from the GPU's DRAM.…”
Section: Algorithms for Scheduling a Single GPU
confidence: 99%