2018 IEEE International Symposium on High Performance Computer Architecture (HPCA)
DOI: 10.1109/hpca.2018.00034
GDP: Using Dataflow Properties to Accurately Estimate Interference-Free Performance at Runtime

Abstract: Multi-core memory systems commonly share resources between processors. Resource sharing improves utilization at the cost of increased inter-application interference, which may lead to priority inversion, missed deadlines, and unpredictable interactive performance. A key component of effectively managing multi-core resources is performance accounting, which aims to accurately estimate interference-free application performance. Previously proposed accounting systems are either invasive or transparent. Invasi…

Cited by 16 publications (28 citation statements); references 63 publications.
“…To predict the LLC misses for all possible replication degrees we devise a light-weight mechanism we call the Replication Degree Directory (RDD). The RDD is inspired by the Auxiliary Tag Directory (ATD) [55], an independent tag directory commonly used to predict per-application cache misses as a function of allocated ways in shared LLCs (see e.g., [27], [76]). Unlike an ATD, the RDD is (i) located within an MCrouter rather than an LLC slice, and (ii) predicts misses across replication degrees rather than miss curves.…”
Section: B. Predicting LLC Misses
confidence: 99%
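The ATD idea referenced in the quotation above — predicting per-application cache misses as a function of allocated ways — can be sketched as a shadow tag store that records LRU stack distances. This is a minimal illustration of the generic technique, not the cited papers' hardware design; the class and method names are made up.

```python
class AuxiliaryTagDirectory:
    """Sketch of an ATD for one cache set: tracks LRU stack distances so
    misses can be predicted for any hypothetical number of allocated ways."""

    def __init__(self, num_ways):
        self.num_ways = num_ways
        self.lru_stack = []                  # most recently used tag first
        self.hit_counters = [0] * num_ways   # hits observed at each stack depth
        self.accesses = 0

    def access(self, tag):
        self.accesses += 1
        if tag in self.lru_stack:
            depth = self.lru_stack.index(tag)    # LRU stack distance
            self.hit_counters[depth] += 1
            self.lru_stack.remove(tag)
        elif len(self.lru_stack) == self.num_ways:
            self.lru_stack.pop()                 # evict the LRU tag
        self.lru_stack.insert(0, tag)

    def predicted_misses(self, ways):
        # With `ways` allocated ways under LRU, accesses that hit at a stack
        # depth < ways would still hit; all other accesses would miss.
        return self.accesses - sum(self.hit_counters[:ways])
```

Feeding a tag stream through `access` and then querying `predicted_misses` for each way count yields the miss curve the quotation refers to.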
“…Herrero et al [24] use distributed cache partitioning to optimize cache use, and MorphCache [60] dynamically alters the cache topology to enable sharing multiple cache slices between cores. GDP [27] allocates LLC capacity to processes based on slowdown predictions, while Rolan et al [57] propose adaptive set-granular cooperative caching. These works are not directly applicable to GPUs because they exploit the fact that different threads (processes) in multi-threaded (multiprogrammed) workloads have different memory requirements.…”
Section: Related Work
confidence: 99%
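The partitioning schemes surveyed above allocate LLC ways across competing applications. As a generic illustration of the idea — a utility-style greedy allocator over per-application miss curves, not GDP's slowdown-driven policy or any cited paper's exact algorithm — the core loop might look like:

```python
def partition_ways(miss_curves, total_ways):
    """Greedy way partitioning sketch.

    miss_curves[app][w] = predicted misses for `app` with w allocated ways,
    for w = 0..total_ways (e.g., produced by an ATD-style predictor).
    Returns a per-application way allocation summing to total_ways.
    """
    alloc = [0] * len(miss_curves)
    for _ in range(total_ways):
        # Hand the next way to the application with the largest marginal
        # reduction in predicted misses.
        best = max(
            range(len(miss_curves)),
            key=lambda a: miss_curves[a][alloc[a]] - miss_curves[a][alloc[a] + 1],
        )
        alloc[best] += 1
    return alloc
```

A cache-sensitive application (steep miss curve) naturally receives more ways than a streaming one whose curve is nearly flat, which is exactly the asymmetry the quoted passage says these schemes exploit.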
“…Abstract system-level simulators have long been used in the architecture and design automation communities for performance estimation and analysis [11,19,27,38,39,42,44,49,52]. In particular, [12] used system simulation to evaluate the interaction between the OS and a 10 Gbit/s Ethernet NIC.…”
Section: Related Work
confidence: 99%
“…Enforcing fairness/QoS requires understanding how interference affects the performance of co-running applications. More specifically, we need to predict the performance reduction (slowdown) during multitasking (shared mode) compared to an ideal configuration (private mode) where the application runs alone with exclusive access to all compute and memory system resources [10]. Using shared mode quantities (e.g., shared mode bandwidth utilization) as proxies for private mode quantities (e.g., private mode bandwidth utilization) is typically inaccurate since interference can change application resource consumption significantly.…”
Section: Introduction
confidence: 99%
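The slowdown definition in the quotation above reduces to a ratio of private-mode to shared-mode performance. A minimal sketch, with made-up IPC numbers purely for illustration:

```python
def slowdown(private_ipc, shared_ipc):
    """Slowdown as defined in the passage: interference-free (private-mode)
    performance divided by observed shared-mode performance."""
    return private_ipc / shared_ipc

# An application achieving 2.0 IPC running alone but only 1.25 IPC under
# interference is slowed down by a factor of 1.6. The difficulty the passage
# highlights is that private_ipc is unobservable at runtime and must be
# estimated, since shared-mode measurements are distorted by interference.
```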
“…Broadly speaking, slowdown prediction models can be classified as white-box [10,11,14] versus black-box [15,16]. White-box models are derived from fundamental architectural insights, which in theory enables them to precisely capture key performance-related behavior.…”
Section: Introduction
confidence: 99%