2014
DOI: 10.1109/tc.2012.277

CLU: Co-Optimizing Locality and Utility in Thread-Aware Capacity Management for Shared Last Level Caches

Abstract: Most chip-multiprocessors nowadays adopt a large shared last-level cache (SLLC). This paper is motivated by our analysis and evaluation of state-of-the-art cache management proposals, which reveal a common weakness: the existing alternative replacement policies and cache partitioning schemes, targeted at optimizing either locality or utility of co-scheduled threads, cannot consistently deliver the best performance under a variety of workloads. Therefore, we propose a novel adaptive scheme, cal…

Cited by 18 publications (7 citation statements)
References 18 publications
“…In addition, a large body of work uses ATDs and miss minimization within their shared LLC partitioning schemes [8,23,31,34,42,43,44,45]. These are complementary to this work, because using MCP would enable them to select partitions based on system performance rather than on LLC misses.…”
Section: Related Work
confidence: 99%
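The miss-minimization partitioning idea this statement refers to can be sketched briefly: each thread's miss curve (misses as a function of allocated cache ways, as an auxiliary tag directory would sample) drives a greedy allocation that always gives the next way to the thread that saves the most misses. The function name and the miss curves below are illustrative, not taken from any of the cited schemes.

```python
# Minimal sketch of utility-based way partitioning from per-thread miss
# curves. miss_curves[t][w] = misses of thread t when given w ways
# (w = 0 .. total_ways); the numbers below are synthetic.

def partition_ways(miss_curves, total_ways):
    """Greedily assign ways to the thread with the largest miss reduction."""
    alloc = [0] * len(miss_curves)
    for _ in range(total_ways):
        # Utility of one extra way = misses saved by growing that allocation.
        best = max(range(len(miss_curves)),
                   key=lambda t: miss_curves[t][alloc[t]] - miss_curves[t][alloc[t] + 1])
        alloc[best] += 1
    return alloc

curves = [
    [100, 40, 20, 12, 10, 9, 9, 9, 9],       # cache-friendly thread
    [100, 98, 96, 95, 94, 93, 92, 91, 90],   # streaming thread: little reuse
]
print(partition_ways(curves, 8))
```

As the statement notes, schemes like MCP differ in the objective, not the mechanism: the same loop could rank threads by a system-performance estimate instead of raw miss savings.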
“…The second difference is that the metadata index tree we use is vastly different from traditional full-size index trees: we do not store the whole index tree, but only keep in memory the closest branches of frequently searched files. Recent studies [21,22] have shown that data which have been queried may be re-queried later, i.e., queries exhibit temporal and spatial locality. Thus we can spend a small amount of memory to store the index branches adjacent to these query hotspots and achieve high-performance index search.…”
Section: The Workflow
confidence: 99%
“…In this letter, based on a curve-fitted energy formulation, we propose a cache partitioning scheme which assigns cache ways to applications. Curve fitting is utilized in many works on caches; it is used for cache modeling [5] and management [6] and shows good feasibility and accuracy. Our work starts by analyzing the characteristics of the cache miss rate; two types of lines are then chosen to fit the cache miss-rate curve according to the cache size.…”
Section: Introduction
confidence: 99%
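The curve-fitting step mentioned in this statement can be illustrated in miniature: fit a simple analytic model to a few measured miss-rate points, then read off predictions for unmeasured cache sizes. The model form (miss_rate ≈ a + b/ways) and all numbers below are made up for illustration; the cited letter fits its own two line types.

```python
# Hedged sketch of miss-rate curve fitting: ordinary least squares on the
# model miss_rate = a + b / ways, using synthetic measurements.

def fit_inverse(ways, miss_rates):
    """Least-squares fit of miss_rate = a + b / ways; returns (a, b)."""
    xs = [1.0 / w for w in ways]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(miss_rates) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, miss_rates))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

ways = [1, 2, 4, 8]                  # measured allocation sizes
measured = [0.50, 0.30, 0.20, 0.15]  # synthetic miss rates at those sizes
a, b = fit_inverse(ways, measured)
predict = lambda w: a + b / w
print(round(predict(16), 3))         # extrapolated miss rate at 16 ways
```

The point of fitting, as in the letter, is that the fitted curve lets a partitioner evaluate allocations it never measured directly, instead of profiling every cache size.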