2017 IEEE International Symposium on High Performance Computer Architecture (HPCA)
DOI: 10.1109/hpca.2017.65
SWAP: Effective Fine-Grain Management of Shared Last-Level Caches with Minimum Hardware Support

Cited by 45 publications (25 citation statements); references 29 publications.
“…However, these works will not be as effective as before on newer architectures (e.g., Haswell and Skylake), as the mapping between LLC slices and physical addresses changes at a finer granularity than 4 KB pages. Furthermore, a series of works has proposed [6,51,75,76] or exploited [17,63,64,81–83] hardware-based cache partitioning to make better use of the LLC and improve performance. To the best of our knowledge, none of these works considered LLC slice-aware memory management or slice-aware cache partitioning, and ours is the only work that takes advantage of knowledge of Intel's LLC Complex Addressing for memory management and allocation.…”
Section: Related Work
confidence: 99%
“…When hyper-threading is disabled, interference harms envy-freeness by 18% instead of 32%. Second, we could deploy new microarchitectures that guarantee isolation for the last-level cache and memory channel [26,45,57,65]. Finally, agents could continuously update their utility profiles and re-optimize their thresholds.…”
Section: Sensitivity to Interference
confidence: 99%
“…The behavior of the cache determines system performance because it bridges the speed gap between the processor and main memory. To tolerate memory access latency, there has been a plethora of proposals for data prefetching [2–10]. Data prefetching techniques improve performance by predicting future memory accesses and fetching the data into the cache before it is needed.…”
Section: Introduction
confidence: 99%
“…On the other hand, with the advent of chip multiprocessor (CMP) architectures, thread-based prefetching and speculative execution techniques have received much attention in the research community [2,6,9]. One novel method, called helper-threaded prefetching, uses a helper thread to boost the performance of the main thread by prefetching its data into the cache ahead of use.…”
Section: Introduction
confidence: 99%