2021
DOI: 10.1007/s41060-021-00268-y
C-MemMAP: clustering-driven compact, adaptable, and generalizable meta-LSTM models for memory access prediction

Abstract: With the rise of Big Data, there has been a significant effort in increasing compute power through GPUs, TPUs, and heterogeneous architectures. As a result, many applications are memory bound, i.e., they are bottlenecked by the movement of data from main memory to compute units. One way to address this issue is through data prefetching, which relies on accurate prediction of memory accesses. While recent deep learning models have performed well on sequence prediction problems, they are far too heavy in terms o…
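The abstract frames prefetching as a sequence-prediction problem over memory accesses. A common formulation in this line of work (sketched here as a minimal, hypothetical example, not the paper's actual pipeline) is to predict address *deltas* rather than raw 64-bit addresses, casting the task as classification over a small vocabulary of frequent deltas:

```python
from collections import Counter

def build_delta_vocab(trace, vocab_size):
    """Map the most frequent address deltas in a trace to class ids.

    Raw addresses form an enormous output space; consecutive-address
    deltas are far more repetitive, so sequence models typically
    classify over the top-K deltas instead.
    """
    deltas = [b - a for a, b in zip(trace, trace[1:])]
    common = Counter(deltas).most_common(vocab_size)
    vocab = {delta: i for i, (delta, _) in enumerate(common)}
    # Rare deltas fall into a single out-of-vocabulary bucket.
    oov = len(vocab)
    encoded = [vocab.get(d, oov) for d in deltas]
    return vocab, encoded

# Toy trace: a 64-byte strided pattern with one irregular jump.
trace = [0x1000, 0x1040, 0x1080, 0x10C0, 0x9000, 0x9040]
vocab, encoded = build_delta_vocab(trace, vocab_size=2)
# encoded -> [0, 0, 0, 1, 0]: the stride delta is class 0,
# the one-off jump is class 1.
```

The resulting class sequence is what an LSTM (or any sequence model) would be trained on; the vocabulary size is the knob that trades coverage against model output width.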

Cited by 11 publications (9 citation statements)
References 19 publications
“…RAOP [74] leverages LSTM-based models for virtual address predictions. C-MemMAP [75] combines clustering and meta-models to reduce the model size. Seq2seq modeling [40] based on LSTM encoder-decoder structure has been applied for memory sequence prediction.…”
Section: Data Prefetching
confidence: 99%
“…Shi et al [56] propose Voyager that predicts both page sequence and page offsets using two LSTM models along with a dot-product attention mechanism. Existing ML-based prefetchers use history memory access sequence to predict the next memory access address [3,16,46,56,61,62,[73][74][75], which leads to a prefetch degree as one. These models require recurrent greedy/beam search or accepting low-probability candidates to realize higher degree prefetching.…”
Section: Data Prefetching
confidence: 99%
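The statement above notes that next-address predictors have a prefetch degree of one, and that reaching a higher degree requires recurrent greedy or beam search. A minimal sketch of the greedy variant (with a hypothetical stand-in predictor, not any cited model) makes the mechanism concrete:

```python
def rollout_prefetch(predict_delta, addr, degree):
    """Generate `degree` prefetch candidates by feeding each prediction
    back into the model (greedy autoregressive rollout).

    A degree-one predictor emits a single next address; deeper
    prefetching chains predictions, so errors compound with depth.
    """
    prefetches = []
    cur = addr
    for _ in range(degree):
        cur = cur + predict_delta(cur)  # apply predicted delta to current address
        prefetches.append(cur)
    return prefetches

# Toy stand-in predictor: always predicts a 64-byte stride.
stride_model = lambda addr: 0x40
candidates = rollout_prefetch(stride_model, 0x1000, degree=3)
# candidates -> [0x1040, 0x1080, 0x10C0]
```

Beam search generalizes this by keeping several candidate chains per step instead of only the single greedy one, at a corresponding cost in lookups per prefetch.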