2013 IEEE 24th International Conference on Application-Specific Systems, Architectures and Processors (ASAP)
DOI: 10.1109/asap.2013.6567593
Hybrid SPM-cache architectures to achieve high time predictability and performance

Cited by 13 publications (2 citation statements) · References 25 publications
“…Kang et al [23] introduced a synergetic memory allocation method to exploit SPM to reduce data cache pollution. Zhang et al [24] studied hybrid on-chip memory architecture that can leverage the SPM to achieve time predictability while exploiting the cache to improve the average-case performance. In the past, Panda et al [25] investigated partitioning scalar and array variables into SPM and data cache to minimize the execution time for embedded applications.…”
Section: Related Work
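The data-placement idea summarized in this statement can be illustrated with a small sketch. This is not the allocation algorithm of any of the cited works; it only shows the common mechanism of steering timing-critical data into the SPM while letting the rest go through the data cache, assuming a GCC-like toolchain and a hypothetical linker output section named ".spm_data" mapped to the SPM region.

```c
/* Minimal sketch of static data placement on a hybrid SPM-cache target.
 * Assumptions (not from the cited papers): a GCC-like toolchain, a
 * linker-script output section ".spm_data" mapped to the SPM, and that
 * data not placed there is accessed through the data cache. */
#include <stdint.h>

/* Timing-critical state: placed in the SPM so every access has a fixed,
 * analyzable latency (no cache hit/miss behavior to model for WCET). */
static volatile uint32_t control_state[64]
    __attribute__((section(".spm_data")));

/* Large, irregularly accessed buffer: left in ordinary RAM so the data
 * cache can exploit whatever locality exists at run time. */
static uint32_t sample_buffer[4096];

uint32_t critical_update(uint32_t idx, uint32_t value)
{
    /* SPM access: fixed latency by construction, easy to bound. */
    control_state[idx & 63u] = value;
    return control_state[idx & 63u];
}

uint32_t average_case_work(void)
{
    /* Cached accesses: fast on average, but harder to bound tightly. */
    uint32_t acc = 0;
    for (int i = 0; i < 4096; i++)
        acc += sample_buffer[i];
    return acc;
}
```

Which variables deserve the (small) SPM is exactly the allocation problem studied in [23]-[25]; the sketch only fixes a placement by hand.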
“…T-CREST [49], MERASA [54] and parMERASA [53] projects also have investigated time-predictability focused core architecture, cache, cache coherence protocol, system-bus, and DRAM controller designs [50,24,48,22,43,44,34,35]. There are also many other proposals, which focus on improving timing predictability of each individual shared hardware component-such as time predictable shared caches [61,62,33], hybrid SPM-cache architecture [65], and predictable DRAM controllers [60,18,30,15]. In most proposals, the basic approach has been to provide space and time partitioning of hardware resources to each critical real-time task or the cores that are designated to execute such tasks.…”
Section: Related Work
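The space and time partitioning mentioned at the end of this statement is often realized with a static TDMA schedule on the shared bus or DRAM controller. The sketch below is only an illustration of that principle, not the arbitration scheme of any specific cited design; the core count, slot length, and round-robin order are assumptions chosen for clarity.

```c
/* Minimal sketch of time partitioning via TDMA arbitration of a shared
 * resource (bus or memory controller). Parameters are illustrative. */
#include <stdint.h>

#define NUM_CORES   4u     /* assumed number of cores sharing the resource */
#define SLOT_CYCLES 100u   /* assumed fixed slot length in cycles */

/* Owner of the shared resource at a given cycle: a fixed round-robin
 * schedule, so each core's worst-case wait is bounded by
 * (NUM_CORES - 1) * SLOT_CYCLES, independent of the other cores' load. */
static inline uint32_t slot_owner(uint64_t cycle)
{
    return (uint32_t)((cycle / SLOT_CYCLES) % NUM_CORES);
}

/* Cycles until core 'id' next owns a slot, starting from 'cycle'.
 * Returns 0 if the core already owns the current slot. */
static inline uint64_t cycles_until_slot(uint32_t id, uint64_t cycle)
{
    uint32_t cur        = slot_owner(cycle);
    uint32_t slots_away = (id + NUM_CORES - cur) % NUM_CORES;
    uint64_t into_slot  = cycle % SLOT_CYCLES;

    return (slots_away == 0u)
               ? 0u
               : (uint64_t)slots_away * SLOT_CYCLES - into_slot;
}
```

Because the schedule is independent of run-time traffic, the bound returned by this kind of analysis holds for each critical task in isolation, which is what makes the partitioned designs surveyed above analyzable.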