2019 IEEE Real-Time Systems Symposium (RTSS)
DOI: 10.1109/rtss46320.2019.00049

Cache Persistence Analysis: Finally Exact

Abstract: Cache persistence analysis is an important part of worst-case execution time (WCET) analysis. It has been extensively studied in the past twenty years. Despite these efforts, all existing persistence analyses are approximative in the sense that they are not guaranteed to find all persistent memory blocks. In this paper, we close this gap by introducing the first exact persistence analysis for caches with least-recently-used (LRU) replacement. To this end, we first introduce an exact abstraction that exploits mo…

Cited by 9 publications (3 citation statements) · References 32 publications
“…Cache hits take a single cycle, and misses take one look-up cycle plus the cycles required for a memory access. We consider a memory access time of 13 cycles both for instructions and data, which is a realistic value for main memories such as the Automotive DRAM MT46V16M16 [22] clocked at 100 MHz, and has been used in previous studies [16], [23]. For the ACDC, accesses that replace a dirty cache line take a total of 27 cycles to look-up (1), write back the dirty line to memory (13), and bring in the new line from memory (13).…”
Section: Experimental Environment
confidence: 99%
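The cycle counts quoted above can be summarized in a small cost model. This is a minimal sketch with hypothetical helper and constant names; the cycle values (1-cycle look-up, 13-cycle memory access, 27 cycles for a dirty-line replacement) are taken from the citing study's experimental setup.

```python
LOOKUP_CYCLES = 1   # one look-up cycle per cache access
MEMORY_CYCLES = 13  # main-memory access time (e.g., Automotive DRAM at 100 MHz)

def access_cycles(hit: bool, replaces_dirty_line: bool = False) -> int:
    """Cycles for a single memory access under the model described above."""
    if hit:
        return LOOKUP_CYCLES                       # hit: 1 cycle
    if replaces_dirty_line:
        # look-up (1) + write back dirty line (13) + fetch new line (13)
        return LOOKUP_CYCLES + 2 * MEMORY_CYCLES   # 27 cycles
    return LOOKUP_CYCLES + MEMORY_CYCLES           # plain miss: 14 cycles
```

Under this model a hit costs 1 cycle, an ordinary miss 14 cycles, and a miss that evicts a dirty line 27 cycles, matching the figures in the statement above.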
“…The ability to predict such accesses correctly thus determines the accuracy of timing analyses. Especially data caches are, however, highly unpredictable [20], which gave rise to a wide field of research [21] still actively worked on today [22]. The problem is aggravated through inter-core interferences in multi-core shared-memory environments [23].…”
Section: Background and Related Work
confidence: 99%
“…This simplifies the analysis, but conventional caches usually follow the opposite approach, namely, write-allocate with fetch on write-miss and copyback, which results in fewer memory transfers in general [17]. A recent study on must/may analysis also improves its precision, but it does not consider copybacks either [28]. Moreover, all these previous approaches are based on tracking the specific value of memory addresses, whereas our proposal represents accesses as expressions and abstract relations, so that reuse is marked when it can be asserted that two data references access the same memory address, independently of whether the address is known or unknown.…”
Section: Related Work
confidence: 99%
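The write-allocate-with-copyback policy mentioned in the statement above can be illustrated with a toy simulator. This is a minimal sketch assuming a single fully associative LRU set; the class and attribute names are hypothetical. It counts line-sized memory transfers to show why copyback generally needs fewer of them: a dirty line is written back only once, on eviction, rather than on every write.

```python
class CopybackCache:
    """Toy fully associative LRU cache: write-allocate, fetch on write-miss, copyback."""

    def __init__(self, ways: int):
        self.ways = ways
        self.lines = []     # list of (block, dirty); front = most recently used
        self.transfers = 0  # line-sized transfers to/from main memory

    def access(self, block: int, is_write: bool) -> None:
        for i, (b, dirty) in enumerate(self.lines):
            if b == block:  # hit: move to MRU position, update dirty bit
                self.lines.pop(i)
                self.lines.insert(0, (b, dirty or is_write))
                return
        # miss: write-allocate fetches the block even on a write miss
        self.transfers += 1
        if len(self.lines) >= self.ways:
            _, evicted_dirty = self.lines.pop()  # evict LRU line
            if evicted_dirty:
                self.transfers += 1              # copyback: write only on eviction
        self.lines.insert(0, (block, is_write))
```

For example, four consecutive writes to the same block cost a single fetch (one transfer), not one memory write per store; the deferred write-back happens only when the dirty line is eventually evicted.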