Proceedings of the 48th International Conference on Parallel Processing 2019
DOI: 10.1145/3337821.3337854
Machine Learning for Fine-Grained Hardware Prefetcher Control

Cited by 15 publications (12 citation statements). References 17 publications.
“…ML methods have been gaining traction in place of heuristic methods for achieving superior prefetcher management performance [5], [22], [24], [32], [34]. ML algorithms can extract the non-intuitive interactions between the different prefetchers.…”
Section: B. ML-based Prefetcher Managers (mentioning, confidence: 99%)
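As a hedged illustration of such a learned manager (not the cited paper's actual model), a small decision tree can map performance-counter readings to one of the 16 prefetcher on/off combinations. The feature names and training data below are hypothetical placeholders:

```python
# Minimal sketch of an ML-based prefetcher manager: a decision tree maps
# hardware performance-counter features to one of 16 prefetcher on/off
# configurations. Features and training data are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical per-phase features: LLC miss rate, DRAM bandwidth
# utilization, instructions per cycle.
X_train = rng.random((500, 3))
# Label: index 0..15 encoding which of the four prefetchers to enable.
y_train = rng.integers(0, 16, size=500)

model = DecisionTreeClassifier(max_depth=5)  # shallow tree: cheap to query online
model.fit(X_train, y_train)

# At run time, pick a prefetcher configuration for the observed counters.
counters = np.array([[0.42, 0.77, 1.3]])
config = int(model.predict(counters)[0])
print(f"selected prefetcher bitmask: {config:04b}")
```

A shallow tree is one plausible fit for the concern raised in the next excerpt: unlike a neural network, its size and inference cost stay small and largely independent of the training-set size.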
“…Other algorithms, such as neural networks, are too complex, and their size increases prohibitively with the size of the dataset [5]. Moreover, some prior works focus on hardware adaptation only from the perspective of accuracy, without worrying about the hardware implementation [22], [24], [32], [46].…”
Section: B. ML-based Prefetcher Managers (mentioning, confidence: 99%)
“…The hardware prefetcher configurations provided by Intel for post-Nehalem microarchitectures provide 4 bits (16 combinations) to control four prefetchers [8,18,26]:…”
Section: Prefetcher Configurations (mentioning, confidence: 99%)
“…We tried to only use subsets of many features collected within a single NUMA+Prefetcher configuration, but did not achieve good results. (The exploration considers 21 NUMA configurations instead of 18.) Each fold is trained over the remaining codes. If the training accuracy is consistently high across the different folds, then it indicates that the model is able to effectively generalize to unseen codes.…”
Section: Model Evaluation (mentioning, confidence: 99%)
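The evaluation described here resembles leave-one-group-out cross-validation, where each fold holds out one application ("code") and trains on the rest. A minimal sketch with hypothetical data:

```python
# Sketch of leave-one-code-out evaluation: train on all but one code,
# test on the held-out code. Data, features, and labels are placeholders.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.random((300, 3))               # performance-counter features
y = rng.integers(0, 16, size=300)      # best prefetcher configuration
codes = rng.integers(0, 10, size=300)  # which code each sample came from

scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=codes):
    model = DecisionTreeClassifier(max_depth=5).fit(X[train_idx], y[train_idx])
    scores.append(model.score(X[test_idx], y[test_idx]))

# Consistently high per-fold accuracy would suggest the model
# generalizes to codes it has never seen during training.
print([round(s, 2) for s in scores])
```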