2009 18th International Conference on Parallel Architectures and Compilation Techniques
DOI: 10.1109/pact.2009.40
Cache Sharing Management for Performance Fairness in Chip Multiprocessors

Abstract: Resource sharing can cause unfair and unpredictable performance of concurrently executing applications in Chip Multiprocessors (CMPs)…

Cited by 25 publications (21 citation statements)
References 17 publications
“…Mutlu et al [6] propose a shared DRAM controller, with the aim of improving both programs' performance and their QoS. Zhou et al [7] introduce a model to analyze the performance impact of cache sharing, and propose a cache-sharing management mechanism to provide performance fairness for concurrently executing applications.…”
Section: Related Work
confidence: 99%
“…A limitation of this mechanism is that it needs to know, for every miss, whether it is an inter-thread or an intra-thread miss, which incurs significant hardware overhead. Zhou et al [2009] propose a complex set of counters to estimate the performance impact of inter-thread misses. Its accuracy quickly drops when sampling is employed, though.…”
Section: Quantifying Impact of Cache Sharing
confidence: 99%
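The classification this excerpt refers to, deciding for each shared-cache miss whether another thread induced it, can be illustrated with a toy model. One common approach (an assumption for illustration here, not a description of Zhou et al.'s actual counter design) keeps a per-thread shadow tag directory that simulates each thread running alone; a shared-cache miss that would have hit in the thread's own shadow tags is counted as an inter-thread miss:

```python
# Toy single-set model: classify shared-cache misses as intra-thread
# (would also miss if the thread ran alone) or inter-thread (would
# have hit in a private cache of the same associativity).
from collections import OrderedDict

class LRUTags:
    """LRU tag store for one cache set (tags only, no data)."""
    def __init__(self, ways):
        self.ways = ways
        self.tags = OrderedDict()  # tag -> None, ordered by recency

    def access(self, tag):
        """Return True on hit; update LRU state either way."""
        hit = tag in self.tags
        if hit:
            self.tags.move_to_end(tag)
        else:
            if len(self.tags) >= self.ways:
                self.tags.popitem(last=False)  # evict LRU tag
            self.tags[tag] = None
        return hit

def classify_misses(trace, ways):
    """trace: list of (thread_id, tag) accesses to one set.
    Returns per-thread [intra_misses, inter_misses] by comparing a
    shared set against private shadow sets of the same associativity."""
    shared = LRUTags(ways)
    shadow = {}  # thread_id -> private shadow LRUTags
    stats = {}   # thread_id -> [intra, inter]
    for tid, tag in trace:
        shadow.setdefault(tid, LRUTags(ways))
        stats.setdefault(tid, [0, 0])
        alone_hit = shadow[tid].access(tag)
        shared_hit = shared.access(tag)
        if not shared_hit:
            # Sharing caused this miss iff the thread alone would have hit.
            stats[tid][1 if alone_hit else 0] += 1
    return stats
```

In a 2-way set, if thread 1's intervening accesses evict thread 0's block, thread 0's next miss is charged as inter-thread, since its shadow tags still hold the block.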
“…A large body of recent work has focused on cache partitioning in multicore processors; see for example Iyer [2004], Iyer et al [2007], Jaleel et al [2008], Kim et al [2004], Nesbit et al [2007], Qureshi and Patt [2006], and Zhou et al [2009]. These proposals did not quantify per-thread progress, but aimed at improving multicore throughput while guaranteeing some level of fairness among co-executing jobs.…”
Section: Cache Partitioning
confidence: 99%
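Way-partitioning is the mechanism underlying several of the proposals cited above: each thread's occupancy in a set is capped at a quota of ways, so one thread's misses cannot push a co-runner below its allocation. The sketch below is illustrative only (the class name and eviction policy are assumptions, not any one paper's design; real schemes such as utility-based partitioning also pick the quotas dynamically):

```python
# Toy sketch of way-partitioning for one cache set. Each thread may
# hold at most quota[tid] ways; a miss by a thread at its quota
# evicts that thread's own LRU block, never another thread's.

class PartitionedSet:
    def __init__(self, quota):
        # quota: dict mapping thread_id -> max ways that thread may hold
        self.quota = quota
        # blocks in LRU order: index 0 is least recently used
        self.blocks = []  # list of (thread_id, tag)

    def access(self, tid, tag):
        """Return True on hit, False on miss; update LRU state."""
        for i, (t, g) in enumerate(self.blocks):
            if t == tid and g == tag:
                self.blocks.append(self.blocks.pop(i))  # move to MRU
                return True
        held = sum(1 for t, _ in self.blocks if t == tid)
        if held >= self.quota[tid]:
            # At quota: evict this thread's own LRU block.
            i = next(i for i, (t, _) in enumerate(self.blocks) if t == tid)
            self.blocks.pop(i)
        elif len(self.blocks) >= sum(self.quota.values()):
            self.blocks.pop(0)  # set completely full: evict global LRU
        self.blocks.append((tid, tag))
        return False
```

With quotas of one way each, a thread thrashing between two tags evicts only its own block, and its co-runner's block survives.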
“…Several groups have proposed schemes for per-thread cycle accounting; see for example [5,7,8,12,13,16]. All of the proposals focused on multi-program workloads of independent single-threaded applications, and none addressed multi-threaded applications, where the performance impact of positive interference and spinning/yielding matters.…”
Section: Related Work
confidence: 99%