2019 IEEE International Parallel and Distributed Processing Symposium (IPDPS)
DOI: 10.1109/ipdps.2019.00037
Architecting Racetrack Memory Preshift through Pattern-Based Prediction Mechanisms

Cited by 5 publications (3 citation statements)
References 30 publications
“…RTM's leakage power and capacity advantages give it a competitive edge over existing memory technologies, but the expensive shift operations present a daunting challenge. In this context, various techniques for RTM shift cost reduction have been proposed, such as runtime data swapping [25], [28], [36], data compression [26], [37], preshifting [18], [38], access port management [24], [25], [28], intelligent instructions [39], and data placement [2], [3]. For data placement, Chen et al. [3] present a heuristic that sequentially appends data objects according to their adjacency information.…”
Section: Related Work
confidence: 99%
“…Other approaches focusing on L2 caches propose header management policies consisting of a hardware prefetcher that predicts the next shift operation [11], data block compression schemes [46], and dynamic adjustment of the number of active bits per racetrack according to application demands [28]. With the aim of reducing the overhead of shifts, some works propose data placement strategies based on integer linear programming formulations [8,15,16,31] and genetic algorithms [14].…”
Section: Related Work
confidence: 99%
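The placement strategies mentioned in the excerpt above all exploit the same property: on a racetrack with a fixed access port, the cost of an access is the shift distance from the previously aligned domain, so placing co-accessed objects in adjacent domains reduces total shifts. The following is a minimal illustrative sketch of that cost model, not code from any of the cited works; the trace, placements, and function name are invented for illustration.

```python
# Illustrative cost model (hypothetical, not from the cited papers):
# on a single-port racetrack, serving an access requires shifting the
# track until the requested domain aligns with the port, so the cost
# is the distance from the previously aligned domain.

def total_shifts(placement, access_sequence, start=0):
    """Count single-domain shifts for an access trace under a placement.

    placement: dict mapping data object -> domain index on the track
    access_sequence: list of data objects accessed in order
    start: domain initially aligned with the access port
    """
    pos = start
    shifts = 0
    for obj in access_sequence:
        target = placement[obj]
        shifts += abs(target - pos)  # shifts needed to align this domain
        pos = target
    return shifts

trace = ["a", "b", "a", "c", "b"]
spread = {"a": 0, "b": 32, "c": 63}    # objects scattered along the track
adjacent = {"a": 0, "b": 1, "c": 2}    # co-accessed objects placed side by side

print(total_shifts(spread, trace))     # 158 shifts
print(total_shifts(adjacent, trace))   # 5 shifts
```

The gap between the two placements is what the ILP and genetic-algorithm formulations cited above optimize over at larger scale.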
“…For instance, the shift latency of a 64-domain racetrack can be as long as 63 cycles in the worst case, assuming one cycle to shift a single domain [34], which exceeds the Last-Level Cache (LLC) latency of modern processors (e.g., around 40 cycles in Intel Skylake). Because of this, much research has focused on the LLC, concentrating on novel access header policies that minimize the impact of shift operations [7,11,15,19,28,34,46].…”
Section: Introduction
confidence: 99%
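The worst-case figure quoted above follows directly from the track geometry: with D domains and a single access port, aligning the farthest domain takes D-1 single-domain shifts. A minimal sketch of that arithmetic, using the source's assumption of one cycle per single-domain shift (the function name is invented for illustration):

```python
# Worst-case alignment cost for a racetrack with a single access port:
# the farthest domain is (domains - 1) positions away from the port,
# and each single-domain shift is assumed to take one cycle [34].

def worst_case_shift_cycles(domains, cycles_per_shift=1):
    return (domains - 1) * cycles_per_shift

print(worst_case_shift_cycles(64))  # 63 cycles for a 64-domain track
```

At 63 cycles this one operation already exceeds a typical LLC hit latency, which is why the cited works target shift minimization at that level of the hierarchy.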