2004
DOI: 10.1145/1037187.1024416
Compiler orchestrated prefetching via speculation and predication

Abstract: This paper introduces a compiler-orchestrated prefetching system as a unified framework geared toward ameliorating the gap between processing speeds and memory access latencies. We focus the scope of the optimization on specific subsets of the program dependence graph that succinctly characterize the memory access pattern of both regular array-based applications and irregular pointer-intensive programs. We illustrate how program-embedded precomputation via speculative execution can accurately predict and effec…
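The abstract describes embedding a speculative precomputation slice in the program so that prefetches for irregular, pointer-intensive code are issued ahead of the demand accesses. As a rough illustration only (not the paper's actual transformation), the sketch below hand-writes such a slice for a linked-list traversal; the node layout, PREFETCH_DISTANCE, and the use of GCC/Clang's __builtin_prefetch intrinsic are all assumptions made for this example.

```c
/* Minimal sketch (not the paper's implementation): software prefetching for an
 * irregular, pointer-chasing traversal. A precomputation slice runs a few nodes
 * ahead of the main loop and issues non-binding prefetches. */
#include <stddef.h>

struct node {
    struct node *next;
    int payload;
};

#define PREFETCH_DISTANCE 4  /* how many nodes ahead the speculative slice runs */

long sum_list(struct node *head)
{
    long sum = 0;
    struct node *ahead = head;

    /* Prime the "ahead" pointer so prefetches lead the main traversal. */
    for (int i = 0; i < PREFETCH_DISTANCE && ahead != NULL; i++)
        ahead = ahead->next;

    for (struct node *p = head; p != NULL; p = p->next) {
        if (ahead != NULL) {
            /* Speculative, non-binding prefetch of a future node; a wrong
             * address only wastes bandwidth, it cannot fault the program. */
            __builtin_prefetch(ahead, 0 /* read */, 3 /* high locality */);
            ahead = ahead->next;
        }
        sum += p->payload;
    }
    return sum;
}
```

Because the prefetch is non-binding, a mispredicted address in the speculative slice wastes at most some bandwidth and cannot change program semantics, which is what makes this kind of speculation safe for a compiler to insert.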

Cited by 3 publications (2 citation statements)
References 48 publications
“…A promising approach to boosting I/O performance is to increase I/O bandwidth by prefetching data sets at various places, including storage servers, clients, and proxies. Prefetching techniques found in the literature take either software-based or hardware-based approaches [21]. Software-based prefetching schemes depend on software to detect regular data access patterns, whereas hardware-based approaches rely on hardware to reduce the data access penalty [22], [23].…”
Section: Related Work
Mentioning confidence: 99%
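The citing work above contrasts software-based prefetching, which detects regular access patterns, with hardware-based schemes. A minimal sketch of the software-based case for a statically known, unit-stride array loop follows; the AHEAD distance and the __builtin_prefetch intrinsic are illustrative assumptions, not taken from the cited papers.

```c
/* Minimal sketch (assumed example): software-based prefetching for a regular,
 * strided access pattern. Because the stride is known statically, prefetches
 * can be issued a fixed distance ahead of the current iteration. */
#include <stddef.h>

#define AHEAD 16  /* elements to prefetch ahead of the current index */

double dot(const double *a, const double *b, size_t n)
{
    double acc = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (i + AHEAD < n) {
            __builtin_prefetch(&a[i + AHEAD], 0, 3);
            __builtin_prefetch(&b[i + AHEAD], 0, 3);
        }
        acc += a[i] * b[i];
    }
    return acc;
}
```

The prefetch distance is normally chosen so that the memory latency is covered by the work of AHEAD iterations; too small a value hides little latency, while too large a value can evict lines before they are used.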
“…This decreases the observed latency, increases memory level parallelism, and allows cache-hit dominated performance even when the working set is larger than the cache. Software-based prefetching [4,18,15,31,14,6,11,20] has been shown to be a promising technique to address this issue, and all modern high-performance instruction set architectures provide support for software prefetching.…”
Section: Introduction
Mentioning confidence: 99%
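This statement notes that modern instruction set architectures expose software prefetching directly. As one assumed example, on x86 the _mm_prefetch intrinsic from <xmmintrin.h> lowers to the PREFETCHT0/T1/T2/NTA instructions and carries a temporal-locality hint; the row-scaling loop around it is purely illustrative.

```c
/* Minimal sketch of ISA-level prefetch support (x86 shown as an assumed
 * example): hint the next row into the cache while the current row is
 * being processed; a useless hint is harmless. */
#include <xmmintrin.h>
#include <stddef.h>

void scale_rows(float *m, size_t rows, size_t cols, float s)
{
    for (size_t r = 0; r < rows; r++) {
        if (r + 1 < rows)
            _mm_prefetch((const char *)&m[(r + 1) * cols], _MM_HINT_T0);

        for (size_t c = 0; c < cols; c++)
            m[r * cols + c] *= s;
    }
}
```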