Proceedings of the 2004 Workshop on Memory System Performance - MSP '04 2004
DOI: 10.1145/1065895.1065906

Reuse-distance-based miss-rate prediction on a per instruction basis

Abstract: Feedback-directed optimization has become an increasingly important tool in designing and building optimizing compilers. Recently, reuse-distance analysis has shown much promise in predicting the memory behavior of programs over a wide range of data sizes. Reuse-distance analysis predicts program locality by experimentally determining locality properties as a function of the data size of a program, allowing accurate locality analysis when the program's data size changes. Prior work has established the effective…
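A reuse distance is the number of distinct data elements accessed between two consecutive accesses to the same element; the paper builds per-instruction histograms of these distances. The sketch below is a minimal illustration of that measurement, not the paper's instrumentation: the (pc, addr) trace format and the brute-force distinct-address count are assumptions chosen for clarity (production tools use tree-based algorithms to stay near O(N log N)).

```python
from collections import defaultdict

def per_instruction_reuse_histograms(trace):
    """trace: iterable of (pc, addr) pairs in program order.
    Returns {pc: {reuse_distance: count}}, where the reuse distance of an
    access is the number of distinct addresses touched since the previous
    access to the same address (inf for first-time, i.e. cold, accesses)."""
    last_access = {}                              # addr -> index of most recent access
    history = []                                  # addresses in access order
    hists = defaultdict(lambda: defaultdict(int))

    for i, (pc, addr) in enumerate(trace):
        if addr in last_access:
            # Distinct addresses seen strictly between the two accesses.
            distance = len(set(history[last_access[addr] + 1:i]))
        else:
            distance = float("inf")               # cold access
        hists[pc][distance] += 1
        last_access[addr] = i
        history.append(addr)
    return hists

# Tiny usage example with a hypothetical trace:
trace = [(0x400a, 1), (0x400b, 2), (0x400a, 1), (0x400b, 3), (0x400a, 1)]
for pc, hist in per_instruction_reuse_histograms(trace).items():
    print(hex(pc), dict(hist))
```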

Cited by 42 publications (45 citation statements) | References: 20 publications
“…Previous work on computing cache miss predictions from memory reuse distance information has explored approaches that associate reuse distance data with either individual references [9], [13], groups of related references from the same loop [14], or an entire application [23]. Associating reuse distance data with a section of code, be it a reference, a loop or an entire application, is sufficient for computing the number of cache misses incurred by that piece of code.…”
Section: Understanding Data Reuse Patterns
confidence: 99%
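As a rough illustration of how a reuse-distance histogram, whether attached to a reference, a loop, or a whole program, turns into a miss prediction: under an idealized fully associative LRU cache of capacity C blocks, an access misses exactly when its reuse distance is at least C. The helper below is a hedged sketch of that model; the histogram format follows the sketch after the abstract and is not taken from any of the cited papers.

```python
def predicted_misses(hist, capacity):
    """hist: {reuse_distance: count} for one instruction, loop, or program.
    capacity: cache size in blocks, fully associative, LRU replacement.
    An access misses iff its reuse distance >= capacity (cold accesses,
    recorded with distance inf, always miss)."""
    return sum(count for dist, count in hist.items() if dist >= capacity)

def predicted_miss_rate(hist, capacity):
    total = sum(hist.values())
    return predicted_misses(hist, capacity) / total if total else 0.0

# Example: the histogram {1: 2, inf: 1} produced by the earlier sketch.
print(predicted_miss_rate({1: 2, float("inf"): 1}, capacity=2))  # -> 0.333...
```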
“…These include investigating memory hierarchy management techniques [3], [16], characterizing data locality in program executions for individual program inputs [4], [7], and using memory reuse distance data from training runs to predict cache miss rate for other program inputs [9], [13], [23].…”
Section: Introduction
confidence: 99%
“…In the case of a miss event, a random place in the cache is picked, regardless of whether it originally contains valid data or not.[2] The original content at that place is evicted to allow storing the address A and its corresponding value, just retrieved through the backup mechanism. The two nearest references to the same cache line form a reuse window, with references to other cache lines in between.…”
Section: Introduction
confidence: 99%
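The replacement behavior described in that excerpt (on a miss, evict a uniformly random slot, whether or not it currently holds valid data) can be sketched as a tiny fully associative simulator. The trace format, block size, and seed handling below are illustrative assumptions, not the cited paper's implementation.

```python
import random

def simulate_random_replacement(addrs, num_slots, block_bytes=64, seed=0):
    """Fully associative cache with random replacement.
    addrs: iterable of byte addresses; returns (misses, accesses)."""
    rng = random.Random(seed)
    slots = [None] * num_slots          # each slot holds one cache-line tag
    misses = accesses = 0
    for addr in addrs:
        tag = addr // block_bytes
        accesses += 1
        if tag not in slots:
            misses += 1
            # Pick a random victim slot, valid or not, as in the excerpt.
            slots[rng.randrange(num_slots)] = tag
    return misses, accesses

# Example: repeatedly sweep 8 distinct lines through a 4-slot cache.
trace = [i * 64 for i in range(8)] * 100
print(simulate_random_replacement(trace, num_slots=4))
```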
“…Hence a set-associative cache of size S and associativity M is equivalent to S/M fully-associative caches operating in parallel, each of size M. [2] Even if the replacement algorithm takes care not to evict valid data when there are free slots, the cache soon fills up with valid data, so there will be no difference in practice. [3] Sometimes referred to as a pseudo-random replacement policy due to the difficulty, if not impossibility, of obtaining true randomness.…”
Section: Introduction
confidence: 99%
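The equivalence stated in that excerpt, that a set-associative cache of total size S and associativity M behaves like S/M independent fully associative caches of size M, can be illustrated by routing each cache line to its set and simulating each set on its own. The low-order-bit set index and the random-replacement policy inside each set are assumptions for this sketch, not details from the cited paper.

```python
import random

def _fully_assoc_misses(tags, num_slots, rng):
    """Misses for one set: fully associative, random replacement."""
    slots = [None] * num_slots
    misses = 0
    for tag in tags:
        if tag not in slots:
            misses += 1
            slots[rng.randrange(num_slots)] = tag  # random victim
    return misses

def simulate_set_associative(addrs, total_slots, assoc, block_bytes=64, seed=0):
    """Set-associative cache of total_slots lines and associativity assoc,
    modeled as total_slots // assoc independent fully associative caches,
    one per set (the equivalence stated in the excerpt)."""
    rng = random.Random(seed)
    num_sets = total_slots // assoc
    per_set = [[] for _ in range(num_sets)]
    for addr in addrs:
        line = addr // block_bytes
        per_set[line % num_sets].append(line)      # route each line to its set
    misses = sum(_fully_assoc_misses(s, assoc, rng) for s in per_set)
    return misses, len(addrs)

# Example: 8 distinct lines swept 100 times through a 32-line, 4-way cache.
trace = [i * 64 for i in range(8)] * 100
print(simulate_set_associative(trace, total_slots=32, assoc=4))  # -> (8, 800)
```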
“…A trace may not represent the program behavior on other inputs, and a trace may be too large to be analyzed. For many programs, earlier work has shown that the temporal locality follows a predictable pattern and the (cache miss) behavior of all program inputs can be predicted by examining medium-size training runs [14,15,26,34,44]. In this paper, we use a medium-size input for each program.…”
Section: Introduction
confidence: 99%