2013 25th Euromicro Conference on Real-Time Systems (ECRTS 2013)
DOI: 10.1109/ecrts.2013.19
A Coordinated Approach for Practical OS-Level Cache Management in Multi-core Real-Time Systems

Cited by 110 publications (105 citation statements). References 18 publications.
“…It is assumed that task preemption does not incur cache-related preemption delay (CRPD), so H_i does not change due to preemption. This assumption is easily satisfied in COTS systems by using cache coloring [16]. However, it is worth noting that our analysis can be easily combined with CRPD analyses such as in [4].…”
Section: System Model
confidence: 99%
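Cache coloring, as invoked in the excerpt above, exploits the physical-address bits shared by the page frame number and the cache set index: frames with the same "color" map to the same group of cache sets, so tasks given disjoint colors can never evict each other's lines. A minimal Python sketch of the mapping, using illustrative cache and page parameters that are assumptions, not values from the paper:

```python
# Illustrative parameters (NOT from the paper): a 256 KiB, 16-way
# set-associative shared cache with 64-byte lines, and 4 KiB pages.
CACHE_SIZE = 256 * 1024
LINE_SIZE = 64
ASSOC = 16
PAGE_SIZE = 4096

num_sets = CACHE_SIZE // (LINE_SIZE * ASSOC)   # 256 sets
sets_per_page = PAGE_SIZE // LINE_SIZE         # one page spans 64 sets
num_colors = num_sets // sets_per_page         # 4 colors

def page_color(phys_addr: int) -> int:
    # Frames whose frame numbers are congruent mod num_colors map to
    # the same group of cache sets, i.e. share a color.
    return (phys_addr // PAGE_SIZE) % num_colors
```

With this mapping, an OS allocator that hands each task frames of a private color guarantees the task's cached lines cannot be evicted by a preempting task, which is why the CRPD assumption in the excerpt holds.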
“…Software cache partitioning simultaneously partitions the entire physical memory space into the number of cache partitions. Therefore, the spatial memory requirement of a task determines the minimum number of cache partitions for that task [16]. … (combination of read and write) and the memory-non-intensive task generates up to 1K DRAM requests per msec.…”
Section: A. Experimental Setup
confidence: 99%
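The coupling this excerpt points at — coloring splits physical memory along with the cache, so a task's memory footprint alone can force it to hold several partitions — can be made concrete with a small sketch. All numbers below are hypothetical, chosen only to illustrate the relation:

```python
import math

# Hypothetical machine: 1 GiB of color-partitioned physical memory
# divided among 16 cache colors. These values are assumptions.
TOTAL_MEM = 1 << 30
NUM_COLORS = 16

mem_per_partition = TOTAL_MEM // NUM_COLORS   # 64 MiB backs each color

def min_partitions(task_footprint: int) -> int:
    # A task whose footprint exceeds the memory behind one color must
    # be granted additional colors, even if it needs little cache.
    return math.ceil(task_footprint / mem_per_partition)
```

So a 200 MiB task would need at least four of the sixteen colors here purely for memory capacity, regardless of its actual cache demand, which is the constraint the excerpt describes.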
“…Inter-core interference: inter-core interference is present when tasks running on different cores concurrently access a shared level of cache [Kim et al. 2013]. When this happens, if two lines in the two address spaces of the running tasks map to the same cache line, said tasks can repeatedly evict each other in cache, leading to complex timing interactions and thus unpredictability.…”
Section: Cache Interferences
confidence: 99%
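The conflict condition behind this excerpt is simply that two addresses, possibly from different address spaces, index the same set of the shared cache. A sketch of that check, with illustrative cache parameters that are assumptions rather than figures from the cited work:

```python
# Illustrative shared cache: 2 MiB, 16-way, 64-byte lines,
# giving 2 MiB / (64 B * 16) = 2048 sets. Values are assumptions.
LINE_SIZE = 64
NUM_SETS = 2048

def cache_set(addr: int) -> int:
    # Set index = line number modulo the number of sets.
    return (addr // LINE_SIZE) % NUM_SETS

def may_conflict(addr_a: int, addr_b: int) -> bool:
    # Lines in the same set compete for the same ways, so tasks on
    # different cores touching such lines can evict each other.
    return cache_set(addr_a) == cache_set(addr_b)
```

Addresses exactly one "cache stride" apart (NUM_SETS * LINE_SIZE bytes here) collide in the same set, which is how unrelated tasks end up repeatedly evicting one another.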
“…Software-based approaches have been used to provide both static [14,21] and dynamic [25,29] allocations. Previous work has also explored hardware-based approaches to dynamically allocate cache partitions to tasks, e.g., using the PL310 cache controller [27] or Intel's CAT [32].…”
Section: Related Work
confidence: 99%
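The hardware-based approaches mentioned here partition by cache ways rather than sets: Intel CAT, for instance, assigns each class of service a capacity bitmask over the ways, and disjoint masks isolate tasks. A hedged sketch of that idea in miniature, with illustrative mask arithmetic that is not taken from the cited works:

```python
# Way-based partitioning in the style of Intel CAT capacity bitmasks
# (CBMs). A 16-way cache is assumed for illustration.
NUM_WAYS = 16

def cbm(first_way: int, way_count: int) -> int:
    # Build a contiguous bitmask covering ways
    # [first_way, first_way + way_count).
    return ((1 << way_count) - 1) << first_way

def isolated(mask_a: int, mask_b: int) -> bool:
    # Two classes of service cannot evict each other's lines if their
    # capacity bitmasks share no ways.
    return (mask_a & mask_b) == 0
```

Unlike set-based coloring, way masks can be reprogrammed at run time without remapping physical pages, which is why the excerpt groups CAT and the PL310 lockdown controller under dynamic hardware allocation.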