2006
DOI: 10.1109/micro.2006.7
Adaptive Caches: Effective Shaping of Cache Behavior to Workloads

Cited by 70 publications (38 citation statements)
References 21 publications
“…The latter employs additional components including a skewed bloom filter in conjunction with a pipelined priority heap to identify and retain the blocks that most frequently missed in the conventional part of the cache in the recent past. More adaptive cache management approaches can be found in [6], [7]. Although they successfully combine LRU and LFU replacement policies together via additional data structures, they significantly increase hardware overhead and complexity.…”
Section: Existing Solutions
confidence: 99%
“…Workloads with little sharing can benefit from dynamic migration [15], but aggressive sharing requires careful tradeoff between replication and migration. Furthermore, LRU-based cache replacement performs well for workloads with good temporal locality, while frequency-based policies (e.g., LFU) are more suitable for workloads with poor locality [142].…”
Section: Diverse Workload Characteristics
confidence: 99%
“…LRU is arguably the most widely used policy because its implementation is simpler than LFU and it can quickly adapt to working set changes. To provide good caching performance for a wide range of workloads, many software policies (e.g., [77,106]) and hardware designs (e.g., [42,142]) are proposed to combine the benefits of both LRU and LFU. CC achieves the same goal, but instead by using cache partitioning to isolate workloads with weak locality from those with good locality and by integrating LRU replacement with cache partitioning.…”
Section: Cache Replacement and Placement
confidence: 99%
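The excerpts above repeatedly mention designs that combine the benefits of LRU and LFU replacement. As a purely illustrative sketch (our own construction, not the mechanism of the cited paper or of any citing work), the hybrid idea can be modeled in software with two shadow directories — one managed purely by LRU, one purely by LFU — whose miss counts decide which policy picks the next victim:

```python
from collections import OrderedDict, Counter

class AdaptiveCache:
    """Toy LRU/LFU hybrid: shadow directories track how each pure policy
    would have fared; the policy with fewer shadow misses evicts."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}                   # key -> value (the real cache)
        self.recency = OrderedDict()     # keys in LRU order (oldest first)
        self.freq = Counter()            # per-key access counts
        self.lru_shadow = OrderedDict()  # shadow tags under pure LRU
        self.lfu_shadow = {}             # shadow tags under pure LFU
        self.lru_misses = 0
        self.lfu_misses = 0

    def _touch_shadows(self, key):
        # Update the pure-LRU shadow directory.
        if key in self.lru_shadow:
            self.lru_shadow.move_to_end(key)
        else:
            self.lru_misses += 1
            self.lru_shadow[key] = None
            if len(self.lru_shadow) > self.capacity:
                self.lru_shadow.popitem(last=False)
        # Update the pure-LFU shadow directory.
        if key in self.lfu_shadow:
            self.lfu_shadow[key] += 1
        else:
            self.lfu_misses += 1
            if len(self.lfu_shadow) >= self.capacity:
                victim = min(self.lfu_shadow, key=self.lfu_shadow.get)
                del self.lfu_shadow[victim]
            self.lfu_shadow[key] = 1

    def get(self, key):
        self._touch_shadows(key)
        if key not in self.data:
            return None
        self.recency.move_to_end(key)
        self.freq[key] += 1
        return self.data[key]

    def put(self, key, value):
        self._touch_shadows(key)
        if key not in self.data and len(self.data) >= self.capacity:
            if self.lru_misses <= self.lfu_misses:
                victim = next(iter(self.recency))              # LRU victim
            else:
                victim = min(self.recency, key=self.freq.get)  # LFU victim
            del self.data[victim], self.recency[victim], self.freq[victim]
        self.data[key] = value
        self.recency[key] = None
        self.recency.move_to_end(key)
        self.freq[key] += 1
```

A hardware realization would of course use partial tags and saturating counters rather than full dictionaries; the sketch only conveys the "run both policies in shadow, follow the winner" structure that the citing papers attribute to adaptive designs.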
“…The recently proposed adaptive replacement policy [12] provides support for multiple replacement policies and dynamically picks the best performing policy on a per set basis. They conclude that a combination of LRU and LFU performs fairly well for an L2 cache.…”
Section: Related Work
confidence: 99%
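The excerpt above notes that the adaptive replacement policy [12] picks the best-performing policy on a per-set basis. One common way to make such per-set decisions cheap in hardware is a small saturating counter per set; the following is a hypothetical sketch of that selection logic only (the class name, counter width, and update rule are our assumptions, not details from the cited paper):

```python
class PerSetPolicySelector:
    """Hypothetical per-set selector: each set keeps a small saturating
    counter; shadow misses under one policy bias the counter toward the
    other policy, and the counter's value picks LRU vs. LFU for that set."""

    def __init__(self, num_sets, bits=3):
        self.max = (1 << bits) - 1           # saturation ceiling
        self.counters = [self.max // 2] * num_sets  # start unbiased

    def report_shadow_miss(self, set_idx, policy):
        # A miss under pure LRU nudges the set toward LFU, and vice versa.
        c = self.counters[set_idx]
        if policy == 'lru':
            self.counters[set_idx] = min(self.max, c + 1)
        else:
            self.counters[set_idx] = max(0, c - 1)

    def policy_for(self, set_idx):
        return 'lfu' if self.counters[set_idx] > self.max // 2 else 'lru'
```

With 3-bit counters this costs only a few bits of state per set, which is consistent with the excerpts' point that replacement decisions, being off the critical path, can afford this kind of bookkeeping.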
“…Recent proposals like the V-way cache [10] and adaptive cache compression [1] simultaneously tune the placement mechanism and associativity by decoupling the tag portion of the cache from the data portion. On the other hand, proposals like the dynamic insertion policy [11] and adaptive replacement [12] are examples of schemes that modify the replacement mechanism. While placement and associativity are critical path functionalities and have a direct impact on access time, replacement decisions are made off the critical path.…”
Section: Introduction
confidence: 99%