2011 IEEE 32nd Real-Time Systems Symposium
DOI: 10.1109/rtss.2011.13
RGEM: A Responsive GPGPU Execution Model for Runtime Engines

Cited by 98 publications (77 citation statements). References 14 publications.
“…RGEM [Kato et al 2011c] develops a responsive GPGPU execution model for GPGPU tasks in real-time multi-tasking environments, similarly to TimeGraph [Kato et al 2011d]. RGEM introduces two scheduling methods: Memory-Copy Transaction scheduling and Kernel Launch scheduling.…”
Section: Algorithms for Scheduling a Single GPU (mentioning, confidence: 99%)
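
To illustrate the memory-copy transaction idea in concrete terms, the sketch below splits one large host-to-device copy into fixed-size cudaMemcpyAsync chunks with a scheduling decision point between chunks. It is a minimal illustration under stated assumptions, not RGEM's actual interface: the chunk size, the copy_transaction name, and the higher_priority_copy_pending() hook are all hypothetical.

#include <cuda_runtime.h>
#include <stddef.h>
#include <stdbool.h>

/* Hypothetical hook: returns true while a higher-priority task is waiting
 * for the copy engine. In RGEM this decision belongs to the runtime; here
 * it is only a placeholder. */
extern bool higher_priority_copy_pending(void);

/* Split one large host-to-device copy into fixed-size chunks so the runtime
 * can yield the copy engine between chunks (preemption points). */
static void copy_transaction(void *dst, const void *src, size_t bytes,
                             size_t chunk, cudaStream_t stream)
{
    size_t off = 0;
    while (off < bytes) {
        size_t n = (bytes - off < chunk) ? (bytes - off) : chunk;
        cudaMemcpyAsync((char *)dst + off, (const char *)src + off,
                        n, cudaMemcpyHostToDevice, stream);
        cudaStreamSynchronize(stream);   /* chunk boundary = preemption point */
        off += n;
        while (higher_priority_copy_pending()) {
            /* yield the copy engine until the higher-priority transfer drains;
               a real runtime would block on a condition variable instead of spinning */
        }
    }
}
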
“…The execution on a modern GPU is shown in Figure 2a, where the kernel with a deadline (K3) does not get scheduled until all previously issued kernels (K1 and K2) have finished executing. A software implementation [16] or a modification to the GPU command scheduler could allow priorities to be assigned to processes, resulting in the timeline shown in Figure 2b.…”
Section: Arguments for Preemptive Execution (mentioning, confidence: 99%)
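
For concreteness, a software-side scheduler in the spirit of the Figure 2b timeline might hold pending kernel launches in a priority queue and release them one at a time, highest priority first, so a deadline-critical kernel is not stuck behind earlier low-priority launches. The sketch below is an assumption-laden illustration, not the implementation from [16]: the LaunchReq structure and the single-stream, run-to-completion dispatch loop are simplifications.

#include <cuda_runtime.h>
#include <functional>
#include <queue>
#include <vector>

// Hypothetical launch request: a priority plus a closure that performs the launch.
struct LaunchReq {
    int priority;                              // larger value = more urgent
    std::function<void(cudaStream_t)> launch;  // issues the actual kernel launch
};

struct ByPriority {
    bool operator()(const LaunchReq &a, const LaunchReq &b) const {
        return a.priority < b.priority;        // max-heap: highest priority on top
    }
};

// Release pending kernels highest-priority first, one at a time. A real
// scheduler would run this loop in its own thread and admit new requests
// concurrently; this single-threaded version only shows the ordering idea.
void dispatch_all(std::priority_queue<LaunchReq, std::vector<LaunchReq>, ByPriority> &pending,
                  cudaStream_t stream)
{
    while (!pending.empty()) {
        LaunchReq req = pending.top();
        pending.pop();
        req.launch(stream);                    // issue the kernel launch
        cudaStreamSynchronize(stream);         // finish before releasing the next launch
    }
}
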
“…GERM [7] and TimeGraph [17] focus on graphics applications and provide GPU command schedulers integrated in the device driver. RGEM [16] is a software runtime library targeted at providing responsiveness to prioritized CUDA applications by scheduling DMA transfers and kernel invocations. RGEM implements memory transfers as a series of smaller transfers, thus creating potential preemption points and lowering the stall time caused by competing memory transfers.…”
Section: Related Work (mentioning, confidence: 99%)
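
As a back-of-envelope illustration of why the split lowers stall time (the numbers are illustrative, not taken from the paper): a monolithic 256 MB host-to-device copy at roughly 6 GB/s holds the copy engine for about 42 ms, so a newly arrived high-priority transfer can be stalled that long; if the same copy is issued as 4 MB chunks, the wait until the next preemption point is bounded by about 4 MB / 6 GB/s ≈ 0.7 ms, at the cost of some per-chunk submission overhead.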
“…GPUs have received serious consideration in the real-time community only recently. Both theoretical work ([9], [10]) on partitioning and scheduling algorithms and applied work ([11], [12], [13]) on quality-of-service techniques and improved responsiveness have been done. Outside the real-time community, others have proposed operating system designs where GPUs are scheduled in much the same way as CPUs [14].…”
Section: Introduction (mentioning, confidence: 99%)