Proceedings of the 31st Annual ACM Symposium on Applied Computing 2016
DOI: 10.1145/2851613.2851670

Balanced loop retiming to effectively architect STT-RAM-based hybrid cache for VLIW processors

Abstract: Loop retiming has been extensively studied to maximize the instruction-level parallelism (ILP) of multiple function units by rearranging the dependence delays in a uniform loop. More recently, a loop retiming technique has been proposed to mitigate the migration overhead of an STT-RAM-based hybrid cache by changing the interleaved read and write memory access pattern. However, the previous ILP-aware loop retiming is unaware of its impact on the hybrid cache's migration, while the migration-aware loop retiming has not fully co…
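
As a rough illustration of the retiming idea referenced in the abstract, the C sketch below delays a write by one loop iteration, so the tight read-then-write interleave on the same array element becomes a read of the current element followed by a delayed write of the previous one. This is a minimal, hypothetical example (array names, sizes, and the specific one-iteration delay are assumptions for illustration only), not the paper's balanced retiming algorithm or its migration cost model.

/* Hypothetical sketch of loop retiming: delaying a write node by one
 * iteration changes an interleaved read/write pattern on the same block
 * into a read of a[i] and a delayed write of a[i-1] per iteration.
 * Not the paper's method; an assumed minimal example. */
#include <stdio.h>

#define N 8

/* Original loop: each iteration reads a[i] and writes a[i] back-to-back. */
static void original(int a[N], int b[N]) {
    for (int i = 0; i < N; i++) {
        int r = a[i];        /* read a[i]               */
        a[i] = r + b[i];     /* write a[i] immediately  */
    }
}

/* Retimed loop: the write of iteration i is delayed by one iteration
 * (retiming value 1 on the write), with a prologue and epilogue for the
 * boundary iterations. Each loop body now reads a[i] and writes a[i-1]. */
static void retimed(int a[N], int b[N]) {
    int pending = a[0] + b[0];          /* prologue: first delayed value */
    for (int i = 1; i < N; i++) {
        int r = a[i];                   /* read for iteration i          */
        a[i - 1] = pending;             /* delayed write from i - 1      */
        pending = r + b[i];
    }
    a[N - 1] = pending;                 /* epilogue: final delayed write */
}

int main(void) {
    int a[N] = {1, 2, 3, 4, 5, 6, 7, 8}, b[N] = {1, 1, 1, 1, 1, 1, 1, 1};
    int c[N] = {1, 2, 3, 4, 5, 6, 7, 8};
    original(a, b);
    retimed(c, b);
    for (int i = 0; i < N; i++)
        printf("%d %d\n", a[i], c[i]);  /* both columns should match */
    return 0;
}

Both versions compute the same result; only the ordering of reads and writes within each iteration changes, which is the property a migration-aware retiming would exploit.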

Cited by 1 publication (2 citation statements)
References 8 publications
“…However, frequently migrating data in the hybrid cache incurs significant performance and energy overhead. Other researchers have used compilation techniques to optimize block allocation or migration overhead for hybrid caches with static hints [18]-[20].…”
Section: Related Work
confidence: 99%
“…Some approaches incur frequent block migrations, which cause migration overheads [12], [16], [17]. Some compilation techniques require the compiler to provide static hints [18]-[20], which are impractical in some cases. Recent work proposes a trace-based prediction hybrid cache that predicts write-burst blocks dynamically [21], but this design introduces significant overhead that cannot be ignored.…”
Section: Introduction
confidence: 99%