10th International Symposium on High Performance Computer Architecture (HPCA'04)
DOI: 10.1109/hpca.2004.10029
Processor Aware Anticipatory Prefetching in Loops

Cited by 5 publications (8 citation statements) | References 21 publications
“…Figure 2 shows the overall algorithm. In Figure 2(a), a loop hierarchy tree is first built for the whole program, followed by reuse analysis and prefetch candidate identification [7] to introduce only the necessary prefetches and avoid issuing redundant ones. The function prefetching using helper thread driver is then called recursively to identify candidates and generate code for helper threading.…”
Section: Overview
confidence: 99%
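The structure described in that statement (a loop hierarchy tree, reuse-based candidate selection, and a recursive driver that emits helper-thread code per loop) can be illustrated with a minimal C++ sketch. The types LoopNode and MemRef, the stubbed reuse analysis, and the function names are illustrative assumptions, not the cited paper's implementation.

// Hedged sketch of the recursive helper-thread prefetching driver described
// above; all names and data structures are illustrative assumptions.
#include <string>
#include <vector>

struct MemRef {                         // one load/store in a loop body
    std::string name;
    bool isPrefetchCandidate = false;
};

struct LoopNode {                       // one node of the loop hierarchy tree
    std::vector<MemRef>   refs;         // loads/stores in this loop body
    std::vector<LoopNode> children;     // nested loops
};

// Placeholder for reuse analysis [7]: keep a reference only if it is expected
// to miss, so that redundant prefetches are not introduced.
void identifyPrefetchCandidates(LoopNode &loop) {
    for (MemRef &r : loop.refs)
        r.isPrefetchCandidate = true;   // stand-in for the real reuse test
}

// Emit helper-thread prefetch code for the candidates of one loop.
void emitHelperThreadCode(const LoopNode &loop) {
    for (const MemRef &r : loop.refs)
        if (r.isPrefetchCandidate)
            ;                           // generate the prefetch slice for r here
}

// Recursive driver: visit every loop in the hierarchy tree, pick candidates,
// and generate helper-threading code for them.
void prefetchUsingHelperThreadDriver(LoopNode &loop) {
    identifyPrefetchCandidates(loop);
    emitHelperThreadCode(loop);
    for (LoopNode &child : loop.children)
        prefetchUsingHelperThreadDriver(child);
}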
“…These prefetches can bring data to the shared L2 cache, representing a potentially significant execution-time saving for the main thread. The above load or store is called an effective prefetch candidate if its address computation depends on at least one other load in the same loop body, or if the load/store is identified as a prefetch candidate by reuse analysis [7].…”
Section: Selecting Candidate Loops
confidence: 99%
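The candidacy test quoted above amounts to checking whether a reference's address is fed by another load in the same loop body (an indirect access such as a[b[i]]) or whether reuse analysis already flagged it. A minimal sketch follows, assuming a hypothetical MemRef record rather than the paper's actual intermediate representation.

// Hedged sketch of the "effective prefetch candidate" test described above;
// the MemRef bookkeeping is an illustrative assumption.
#include <vector>

struct MemRef {
    bool isLoad;
    std::vector<int> addressDependsOn;  // indices of loop-body refs feeding the address
    bool reuseAnalysisCandidate;        // flagged as a candidate by reuse analysis [7]
};

// A load/store is an effective prefetch candidate if its address computation
// depends on at least one other load in the same loop body, or if reuse
// analysis already marked it as a prefetch candidate.
bool isEffectivePrefetchCandidate(const MemRef &ref,
                                  const std::vector<MemRef> &loopBody) {
    if (ref.reuseAnalysisCandidate)
        return true;
    for (int idx : ref.addressDependsOn)
        if (loopBody[idx].isLoad)       // address fed by another load: indirect access
            return true;
    return false;
}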