Proceedings of the 50th Annual Design Automation Conference 2013
DOI: 10.1145/2463209.2488916
Integrated instruction cache analysis and locking in multitasking real-time systems

Abstract: Cache locking improves timing predictability at the cost of performance. We explore a novel approach that opportunistically employs both cache analysis and locking to enhance schedulability in preemptive multi-tasking real-time systems. The cache is spatially shared among the tasks by statically locking a portion of the cache per task. To overcome the issue of limited cache space per task, we keep a portion of the cache unlocked and let all the tasks use it through time-multiplexing. Compared to locking the en…
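The spatial-sharing idea in the abstract can be sketched as a simple sizing model: reserve a shared unlocked region that all tasks time-multiplex, and split the remaining lines into per-task locked partitions. This is only an illustrative sketch; the function name, the proportional allocation policy, and all sizes are assumptions, not the paper's actual algorithm.

```python
def partition_cache(total_lines, shared_fraction, task_demands):
    """Split a cache into a shared unlocked region plus per-task
    locked partitions, sized proportionally to each task's
    (hypothetical) demand for locked blocks.

    total_lines     -- total cache lines available
    shared_fraction -- fraction kept unlocked and time-multiplexed
    task_demands    -- {task_name: number of blocks it wants locked}
    """
    shared = int(total_lines * shared_fraction)      # unlocked, shared by all tasks
    lockable = total_lines - shared                  # statically locked, split per task
    total_demand = sum(task_demands.values())
    # Proportional split (floor); a real scheme would optimize WCET/schedulability.
    alloc = {t: lockable * d // total_demand for t, d in task_demands.items()}
    return shared, alloc
```

For example, with 64 lines, a 25% shared region, and two tasks demanding 10 and 30 blocks, the shared region gets 16 lines and the locked partitions get 12 and 36 lines respectively.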

Cited by 7 publications (3 citation statements) · References 27 publications
“…Thus, improving the behavior of the optimal single-content dynamic locking is not as straightforward as one could think. Indeed, studies claiming better results than single-content dynamic locking specifically state that they remove the line-buffer hardware component, required for locked caches to work properly [21].…”
Section: Placement of Loading and Locking Points
confidence: 99%
“…Finally, some studies assume partial set-level lockable caches. For each cache set, these caches are able to track a variable number of non-locked lines sorted in LRU order, and lock the remaining ones [21,22]. Control complexity and storage to support this fine-grained locking-replacement surpasses the abilities of conventional caches and, of course, that of fully-lockable caches.…”
Section: Introduction
confidence: 99%
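The partial set-level lockable cache described in the statement above — each set tracks a variable number of non-locked lines in LRU order while the rest stay locked — can be modeled in a few lines. This is a minimal behavioral sketch, not hardware: the class name and interface are invented for illustration, and it models a single cache set with tag-only lookups.

```python
from collections import OrderedDict

class PartialLockableSet:
    """One cache set: tags in `locked_tags` are pinned and never
    evicted; the remaining ways hold unlocked lines managed in
    LRU order (illustrative model only)."""

    def __init__(self, num_ways, locked_tags):
        assert len(locked_tags) <= num_ways
        self.locked = set(locked_tags)           # pinned lines
        self.free_ways = num_ways - len(locked_tags)
        self.lru = OrderedDict()                 # unlocked lines, LRU first

    def access(self, tag):
        """Return True on a hit, False on a miss (miss installs the
        line, evicting the least-recently-used unlocked line)."""
        if tag in self.locked:
            return True                          # locked lines always hit
        if tag in self.lru:
            self.lru.move_to_end(tag)            # refresh recency
            return True
        if len(self.lru) >= self.free_ways:
            self.lru.popitem(last=False)         # evict LRU unlocked line
        self.lru[tag] = None
        return False
```

The point the citing authors make is visible in the model: the replacement logic must maintain LRU order over a per-set, variable-sized subset of the ways, which is more bookkeeping than a conventional or fully-lockable cache performs.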
“…Our partial locking mechanism integrates cache locking with cache modeling, which allows us to estimate the WCET of predictable accesses through cache modeling and optimizes the WCET of unpredictable accesses through cache locking. Compared to full cache locking and static analysis, our partial locking technique achieves better results [7], [8].…”
Section: Introduction
confidence: 98%