2010
DOI: 10.1587/elex.7.850

A cache replacement policy to reduce cache miss rate for multiprocessor architecture

Abstract: In this paper, a new cache replacement policy named Selection Alternative Replacement (SAR), which minimizes the shared cache miss rate in a chip multiprocessor architecture, is proposed. A variety of cache replacement policies have been used to minimize cache misses. However, replacing cache items that have high utilization leads to additional cache misses. The SAR policy stores the labels of discarded cache items and uses the stored information to prevent additional cache misses. The results of experimen…
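The abstract gives only the core idea (remember the labels of discarded cache items and use that history to avoid evicting lines that are still being reused), so the following Python sketch of a single cache set is a minimal illustration under assumptions: the history-table size, the per-line "reused" flag, and the LRU fallback are hypothetical choices, not the paper's actual SAR design.

from collections import deque

class SARLikeSet:
    """One cache set that remembers the labels (tags) of recently
    discarded lines. Illustrative sketch only; the table size, the
    per-line reused bit, and the victim rule are assumptions."""

    def __init__(self, num_ways=8, history_size=8):
        self.ways = [None] * num_ways                 # tag held by each way
        self.reused = [False] * num_ways              # line came back after an eviction
        self.lru = deque(range(num_ways))             # LRU order: leftmost = least recent
        self.discarded = deque(maxlen=history_size)   # labels of recently evicted tags

    def access(self, tag):
        """Return True on a hit, False on a miss (with fill)."""
        if tag in self.ways:                          # hit: refresh recency
            way = self.ways.index(tag)
            self.lru.remove(way)
            self.lru.append(way)
            return True

        # Miss: prefer evicting a line that never returned after an earlier
        # eviction (assumed low utilization); fall back to plain LRU.
        victim = next((w for w in self.lru if not self.reused[w]), self.lru[0])
        if self.ways[victim] is not None:
            self.discarded.append(self.ways[victim])  # remember the discarded label
        self.reused[victim] = tag in self.discarded   # incoming line was evicted before?
        self.ways[victim] = tag
        self.lru.remove(victim)
        self.lru.append(victim)
        return False

A plain LRU set would always evict self.lru[0]; the discarded-label check is what lets lines that keep coming back survive an extra round, which is the behavior the abstract credits with preventing the additional misses.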

Cited by 3 publications (4 citation statements)
References 4 publications
“…This analysis will provide clues into classifying SVM kernels and number of features across particular memory access behaviors in the future. Across positive results for 6 features, SVM-predicted bypass yields an average miss rate decrease of 6.72%, which compared to related work such as [9] which achieves a 6.01% average, shows that SVM-predicted bypass can provide cache utilization comparable to ad hoc replacement policy mechanisms.…”
Section: Results
confidence: 70%
“…In [1], Kharbutli et al. relaxed cache inclusion and bypassed data from the L2 (LLC) only, requiring less hardware overhead. Similarly focused on the LLC, [9] creates a new replacement policy for the last-level shared cache (L2) by adding two tables and a counter. Later, Xiang et al. [10] suggested that bypassing only never-reused lines is not enough; lines that are least reused should also be bypassed.…”
Section: Related Work
confidence: 99%
“…Some prioritize blocks based on their recency of access, such as LRU, SRRIP [14] and PDP [16]. Others use the frequency of accesses, like LFU, LFRU [17] and SAR [18]. The re-reference distance is the basis for Timekeeping [19], EHC [20] and Leeway [21].…”
Section: Replacement Policies For Shared Caches In Multicore Processors
confidence: 99%
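As a rough illustration of the distinction drawn in the excerpt above, the sketch below contrasts a recency-based victim choice (LRU-style) with a frequency-based one (LFU-style). The per-line fields (last_access, access_count) and the example values are hypothetical and do not correspond to SRRIP, PDP, LFRU, SAR, or any other cited policy.

def lru_victim(lines):
    """Recency-based: evict the line with the oldest last access."""
    return min(lines, key=lambda line: line["last_access"])

def lfu_victim(lines):
    """Frequency-based: evict the line with the fewest accesses."""
    return min(lines, key=lambda line: line["access_count"])

# The same set contents can yield different victims under each family.
lines = [
    {"tag": 0xA, "last_access": 100, "access_count": 9},
    {"tag": 0xB, "last_access": 250, "access_count": 1},
    {"tag": 0xC, "last_access": 180, "access_count": 4},
]
print(hex(lru_victim(lines)["tag"]))  # 0xa: oldest access, even though it is hot
print(hex(lfu_victim(lines)["tag"]))  # 0xb: least frequently used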
“…8 Mbytes) [7], utilizing last-level cache (LLC) is more flexible and promising. Prior LLC management schemes partition cache to isolate memory traffic with different characteristics and optimize resource allocation [8,9,10], or propose the optimal cache line replacement policy by accurately estimating reuse distance [11,12,13,14,15,16,17,18,19]. Some propose both partition and replacement scheme [20,21].…”
Section: Introduction
confidence: 99%