A Survey of Techniques for Cache Locking

2016
DOI: 10.1145/2858792

Abstract: Cache memory, although important for boosting application performance, is also a source of execution-time variability, which makes it difficult to use in systems requiring worst-case execution time (WCET) guarantees. Cache locking is a promising approach for simplifying WCET estimation and providing predictability, and hence several commercial processors provide the ability to lock the cache. However, cache locking also has several disadvantages (e.g., extra misses for unlocked blocks, complex algorithms require…

Cited by 22 publications (12 citation statements)
References 41 publications

Citation statements (ordered by relevance):
“…In a multicore architecture, there may be unexpected cache misses due to execution occurring on other cores. Cache partitioning includes hardware-assisted cache lockdown (cache locking) [4] and software-based cache coloring [3].…”
Section: Related Work (mentioning)
confidence: 99%
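The software-based cache coloring mentioned in this excerpt partitions the cache by giving each core only physical pages whose set-index ("color") bits belong to its partition. A minimal sketch, assuming a physically indexed cache; the geometry constants are placeholders, not values taken from the cited papers:

```c
#include <stdint.h>

/* Hypothetical cache geometry; real values come from the target SoC manual. */
#define CACHE_SIZE  (512 * 1024)              /* total cache size in bytes */
#define NUM_WAYS    8                         /* associativity             */
#define PAGE_SIZE   4096

#define WAY_SPAN    (CACHE_SIZE / NUM_WAYS)   /* bytes of one way (sets * line size) */
#define NUM_COLORS  (WAY_SPAN / PAGE_SIZE)    /* distinct page colors                */

/* The color of a physical page comes from the set-index bits above the page
 * offset; pages of different colors can never map to the same cache sets,
 * so an allocator that hands each core its own colors gives it a private
 * cache partition without any hardware support. */
static inline unsigned page_color(uintptr_t phys_addr)
{
    return (unsigned)((phys_addr / PAGE_SIZE) % NUM_COLORS);
}
```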
“…Cache lockdown [4] divides the cache into way-sized units and allocates the available cache area to each core (or process). Cache lockdown schemes are categorized as static locking or dynamic locking [32][33][34].…”
Section: Related Work (mentioning)
confidence: 99%
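Lockdown by way, as described in this excerpt, is typically configured through per-master lockdown registers. A minimal sketch of the static variant; the register address and bit layout here are hypothetical placeholders, since neither the survey nor this excerpt prescribes a particular cache controller:

```c
#include <stdint.h>

/* Hypothetical lockdown-by-way register block: one 32-bit register per core,
 * in which a set bit means "this core may not allocate new lines into this
 * way". The base address is a placeholder, not taken from any datasheet. */
#define L2_LOCKDOWN_BASE   0x40002900u
#define L2_LOCKDOWN(core)  ((volatile uint32_t *)(uintptr_t)(L2_LOCKDOWN_BASE + 4u * (core)))

/* Statically partition an 8-way cache: each core keeps the ways whose bits
 * are clear in its mask and is locked out of the rest, so no core can evict
 * another core's lines and per-core WCET analysis stays independent. */
static void partition_cache(void)
{
    *L2_LOCKDOWN(0) = 0xF0u;   /* core 0 allocates only into ways 0-3 */
    *L2_LOCKDOWN(1) = 0x0Fu;   /* core 1 allocates only into ways 4-7 */
}
```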
“…For example, caches could be loaded every time a function is entered (as in [54]), or explicitly through statements in the source code (scratchpad memory). Cache locking [45] provides an alternative mechanism to reach the same effect (which, incidentally, can improve the WCET [16]), and potentially qualifies many more processors for our approach. With these cache requirements, timing effects due to caches can be annotated in the source code as part of our analysis, and do not have to be modeled on instruction granularity.…”
Section: WCET-Amenable Processors (mentioning)
confidence: 99%
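Loading and locking the cache on entry to a function, as in the first sentence of this excerpt, amounts to dynamic locking applied per code region. A minimal sketch, assuming hypothetical platform hooks cache_lock_range()/cache_unlock_range() that stand in for whatever lock-by-line instruction or lockdown register the processor provides, and assumed linker-defined bounds for the critical code:

```c
#include <stddef.h>

/* Assumed platform hooks; neither is a real API from the cited papers. */
void cache_lock_range(const void *start, size_t len);
void cache_unlock_range(const void *start, size_t len);

/* Assumed linker-script symbols bracketing the time-critical routine. */
extern const char critical_code_start[];
extern const char critical_code_end[];

void critical_task(void)
{
    size_t len = (size_t)(critical_code_end - critical_code_start);

    /* Preload and lock the routine's code so every access inside the
     * critical section is a guaranteed hit with a known latency. */
    cache_lock_range(critical_code_start, len);

    /* ... time-critical work whose WCET is now easy to bound ... */

    /* Dynamic locking: release the lines afterwards so other tasks can use
     * the cache; a static scheme would simply leave them locked for the
     * whole run. */
    cache_unlock_range(critical_code_start, len);
}
```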
“…To reduce the analysis time, many studies propose using fully-lockable caches [10], which are present in processors from most manufacturers, such as Motorola (ColdFire, PowerPC, MPC7451, MPC7400), MIPS32, ARM (904, 946E-S), Integrated Device Technology (79R4650, 79RC64574), Intel 960, etc. These caches, on a miss event, request the missed line from the next memory level, but on arrival they send it to a line buffer, without keeping any copy.…”
Section: Introduction (mentioning)
confidence: 99%
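The behaviour described in this excerpt, hit in the locked contents or else fetch into a single line buffer without allocating, can be written down as a small analysis model. A toy sketch only, with illustrative structures and hypothetical helper functions rather than any real cache interface:

```c
#include <stdbool.h>
#include <stdint.h>

#define LINE_SIZE 32u                      /* assumed line size */

struct line_buffer {
    uint32_t tag;                          /* address of the buffered line */
    bool     valid;
};

/* Hypothetical helpers standing in for the locked cache and the bus. */
bool locked_cache_hit(uint32_t addr);
void fetch_line(uint32_t line_addr);

void serve_access(uint32_t addr, struct line_buffer *lb)
{
    uint32_t line_addr = addr & ~(LINE_SIZE - 1u);

    if (locked_cache_hit(addr)) {
        return;                            /* hit: fixed, analysable latency */
    }
    if (lb->valid && lb->tag == line_addr) {
        return;                            /* served from the single line buffer */
    }
    /* Miss: the line is fetched into the buffer only; the locked cache keeps
     * no copy, so its contents (and any WCET analysis built on them) never
     * change at run time. */
    fetch_line(line_addr);
    lb->tag   = line_addr;
    lb->valid = true;
}
```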