2016 IEEE Real-Time Systems Symposium (RTSS)
DOI: 10.1109/rtss.2016.026
MARACAS: A Real-Time Multicore VCPU Scheduling Framework

Abstract: This paper describes a multicore scheduling and load-balancing framework called MARACAS that addresses shared cache and memory bus contention. It builds upon prior work centered around the concept of virtual CPU (VCPU) scheduling. Threads are associated with VCPUs that have periodically replenished time budgets. VCPUs are guaranteed to receive their periodic budgets even if they are migrated between cores. A load balancing algorithm ensures VCPUs are mapped to cores to fairly distribute surplus CPU cycles, after …
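The abstract describes VCPUs as schedulable entities with periodically replenished time budgets that persist across core migrations. A minimal sketch of that replenishment logic might look like the following (class and field names are hypothetical, not taken from MARACAS itself):

```python
from dataclasses import dataclass

@dataclass
class VCPU:
    """A virtual CPU with a periodically replenished time budget.

    budget:         CPU time (in ticks) granted per period
    period:         replenishment interval (in ticks)
    remaining:      budget left in the current period
    next_replenish: absolute tick of the next replenishment
    """
    budget: int
    period: int
    remaining: int = 0
    next_replenish: int = 0

    def tick(self, now: int) -> None:
        # Replenish the budget at each period boundary; the budget is
        # guaranteed regardless of which physical core the VCPU runs on.
        if now >= self.next_replenish:
            self.remaining = self.budget
            self.next_replenish = now + self.period

    def consume(self, ticks: int) -> int:
        # Consume up to `ticks` of budget; return what was actually used.
        used = min(ticks, self.remaining)
        self.remaining -= used
        return used

# Example: a VCPU granted a 4-tick budget every 10 ticks.
v = VCPU(budget=4, period=10)
v.tick(0)                   # first replenishment
assert v.consume(3) == 3    # 3 ticks used, 1 left
assert v.consume(3) == 1    # only the remaining 1 tick can be used
v.tick(10)                  # next period: budget restored
assert v.remaining == 4
```

A surplus-distributing load balancer, as the abstract mentions, would then migrate such VCPU objects between per-core run queues without touching their budget state.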

Cited by 13 publications (15 citation statements)
References 50 publications
“…Cache resource. Several cache partitioning techniques have been proposed to reduce the shared cache interference [9,21,22,26,28,59,61,62,68,74]. The software-based approach reorganizes a task's memory layout to allocate a specific cache area to the task using, e.g., page coloring [28,39,67] or compiler-based [42] techniques.…”
Section: Related Work (confidence: 99%)
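The citation above mentions page coloring as a software cache-partitioning technique: the OS allocates physical pages so that each task's pages map to a disjoint set of cache sets. A sketch of the color computation, assuming a hypothetical 2 MiB, 16-way, 64-byte-line shared cache with 4 KiB pages:

```python
PAGE_SHIFT = 12                  # 4 KiB pages (assumed)
CACHE_SIZE = 2 * 1024 * 1024     # 2 MiB shared LLC (assumed)
WAYS = 16                        # set associativity (assumed)
LINE_SIZE = 64                   # cache line size in bytes (assumed)

SETS = CACHE_SIZE // (WAYS * LINE_SIZE)         # 2048 sets
SETS_PER_PAGE = (1 << PAGE_SHIFT) // LINE_SIZE  # 64 sets spanned by one page
NUM_COLORS = SETS // SETS_PER_PAGE              # 32 distinct page colors

def page_color(phys_addr: int) -> int:
    # The color comes from the physical-page-number bits that also index
    # the cache set: pages of the same color contend for the same sets,
    # so giving each task its own colors partitions the cache in software.
    return (phys_addr >> PAGE_SHIFT) % NUM_COLORS
```

An OS using this scheme would maintain per-color free lists and satisfy a task's page faults only from the colors assigned to that task.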
“…Without considering this behavior, the resulting mapping may result in poor timing performance, because cache- and memory-sensitive tasks may take much longer to execute if not given sufficient resources, whereas computation-intensive tasks may be given more resources than they strictly require. Prior work has considered cache and memory bandwidth in scheduling [68], but it focuses on soft real-time performance instead of schedulability.…”
Section: Introduction (confidence: 99%)
“…To improve worst-case real-time performance, recent research [15,30] has developed software-based techniques that can provide task-level cache isolation; however, it is limited to only static management, which can substantially under-utilize cache and CPU resources, especially in cases where tasks' timing behavior can change dynamically at run time. Kim et al [13] proposed vCache, a new hardware design for the last-level shared cache that allows a guest OS to control the cache allocation for tasks; however, vCache requires hardware modification and thus cannot be supported by current commodity hardware.…”
Section: Related Work (confidence: 99%)
“…Kim et al [13] proposed vCache, a new hardware design for the last-level shared cache that allows a guest OS to control the cache allocation for tasks; however, vCache requires hardware modification and thus cannot be supported by current commodity hardware. In contrast, vCAT introduces a new virtualization layer for cache partitions on top of Intel's CAT to provide support for dynamic cache allocation at the task level, which cannot be achieved by either Intel's CAT itself or the existing cache management for virtualization settings [15,30]. To the best of our knowledge, vCAT is the first to provide dynamic cache management for real-time virtualization systems on commodity multicore platforms that can deliver strong cache isolation among tasks and VMs, and it is also the first that uses Intel's CAT in a real-time virtualization setting to achieve task-level cache isolation.…”
Section: Related Work (confidence: 99%)
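The vCAT discussion above builds on Intel's Cache Allocation Technology (CAT), which partitions last-level cache ways by assigning each class of service a capacity bitmask in which each set bit grants access to one way, and the mask must be a contiguous run of bits. A sketch of building such masks (function name hypothetical; the 16-way cache is an assumption):

```python
def clos_bitmask(start_way: int, num_ways: int) -> int:
    # Intel CAT capacity bitmasks (CBMs) must be a contiguous run of set
    # bits; each bit grants the class of service access to one LLC way.
    assert num_ways > 0, "a CBM must have at least one bit set"
    return ((1 << num_ways) - 1) << start_way

# Partition a 16-way LLC between two classes of service:
vm_a = clos_bitmask(0, 10)    # ways 0-9
vm_b = clos_bitmask(10, 6)    # ways 10-15
assert vm_a & vm_b == 0       # no overlap: the partitions are isolated
assert vm_a | vm_b == 0xFFFF  # together they cover the whole cache
```

A virtualization layer such as the one vCAT describes would remap per-task or per-VM allocation requests onto a limited number of hardware classes of service, each programmed with one such bitmask.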