Proceedings of the 37th Annual International Symposium on Computer Architecture 2010
DOI: 10.1145/1815961.1815971
High performance cache replacement using re-reference interval prediction (RRIP)

Cited by 479 publications (445 citation statements)
References 21 publications
“…Dynamic insertion and replacement policies have been proposed in which the insertion position of an incoming block changes at run time based on application characteristics such as working-set size, or in which the replacement victim is selected dynamically from the reuse pattern of blocks [3,4,5]. A partitioning mechanism allocates the LLC according to utilization efficiency by monitoring the hits of each core [2].…”

Section: Related Work (mentioning)
confidence: 99%
“…To overcome this shortcoming, various advanced cache management mechanisms have been investigated to improve cache performance [2,3,4,5,6].…”

Section: Introduction (mentioning)
confidence: 99%
“…Even though the LLC plays an important role in hiding long memory-access latency, many studies have shown that it becomes less efficient as its capacity increases [1,2]. This issue, observed in 2D off-chip memory systems, will persist even in 3D-IC-based memory systems.…”

Section: Introduction (mentioning)
confidence: 99%
“…Upon re-reference to the block, the LRU policy updates its prediction and again anticipates that the block has a near-immediate re-reference interval. In effect, the LRU replacement policy predicts that cache lines are re-referenced in reverse order of reference, i.e., the LRU policy predicts that a most recently used cache line will be re-referenced much sooner than less recently used cache lines [28].…”
Section: Introduction (mentioning)
confidence: 99%
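The excerpt above contrasts LRU's implicit near-immediate re-reference prediction with RRIP's explicit one. As a hedged illustration only (not code from the paper), a minimal Python sketch of a single cache set under 2-bit SRRIP, the static variant of RRIP, might look like the following; class and variable names are illustrative:

```python
# Sketch of one cache set under SRRIP with M-bit re-reference
# prediction values (RRPVs). Names (SRRIPSet, WAYS, M) are
# illustrative assumptions, not identifiers from the paper.

M = 2                      # RRPV width in bits
MAX_RRPV = (1 << M) - 1    # 3: predicted "distant" re-reference
WAYS = 4                   # associativity of the set

class SRRIPSet:
    def __init__(self):
        # Each way holds a (tag, rrpv) pair; None marks an invalid way.
        self.ways = [None] * WAYS

    def access(self, tag):
        """Return True on hit, False on miss (with fill/eviction)."""
        # Hit: predict a near-immediate re-reference (RRPV = 0).
        for i, w in enumerate(self.ways):
            if w is not None and w[0] == tag:
                self.ways[i] = (tag, 0)
                return True
        # Miss with an invalid way: fill it, predicting a "long"
        # re-reference interval (RRPV = 2^M - 2).
        for i, w in enumerate(self.ways):
            if w is None:
                self.ways[i] = (tag, MAX_RRPV - 1)
                return False
        # Otherwise evict the first way predicted "distant"
        # (RRPV == MAX_RRPV), aging all ways until one qualifies.
        while True:
            for i, (t, r) in enumerate(self.ways):
                if r == MAX_RRPV:
                    self.ways[i] = (tag, MAX_RRPV - 1)
                    return False
            self.ways = [(t, r + 1) for t, r in self.ways]
```

Note how the design choice differs from LRU: a newly inserted block is not predicted to be re-referenced immediately; only an actual hit promotes it to RRPV 0, which is what makes the policy resistant to scans of never-reused blocks.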
“…According to prior research, there are four common kinds [28] of cache access patterns in applications: recency-friendly access patterns (for any k) (a_1, a_2, . .…”

Section: Introduction (mentioning)
confidence: 99%