Proceedings. 2005 IEEE International Conference on Field-Programmable Technology, 2005.
DOI: 10.1109/fpt.2005.1568552
Compiler-directed design space exploration for caching and prefetching data in high-level synthesis

Cited by 4 publications (4 citation statements) · References 21 publications
“…Prior work using explicitly populated memory buffers in this way can be found in [3,14]. Such an approach allows the separation of the execution and communication processes, which enables memory optimizations not generally possible under a loop transformation approach.…”
Section: Memory Prefetching Using Scratchpad Memories Within Nested Loops (mentioning)
confidence: 97%
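As a rough illustration of the idea in this excerpt, the sketch below uses an explicitly populated on-chip buffer: a communication step copies the next tile of an off-chip array into a scratchpad before a separate computation step consumes it. The names and sizes (TILE, scratchpad, compute_tile) are illustrative assumptions, not the interfaces of the cited papers.

#include <stddef.h>

#define N    1024
#define TILE 64                       /* hypothetical tile/buffer size */

/* Communication step: explicitly populate the scratchpad with one tile. */
static void prefetch_tile(const int *src, int *scratchpad, size_t base)
{
    for (size_t k = 0; k < TILE; ++k)
        scratchpad[k] = src[base + k];
}

/* Computation step: operate only on the on-chip copy. */
static long compute_tile(const int *scratchpad)
{
    long acc = 0;
    for (size_t k = 0; k < TILE; ++k)
        acc += scratchpad[k];
    return acc;
}

long process(const int a[N])
{
    int scratchpad[TILE];             /* explicitly managed on-chip buffer */
    long total = 0;
    for (size_t base = 0; base < N; base += TILE) {
        prefetch_tile(a, scratchpad, base);   /* communication */
        total += compute_tile(scratchpad);    /* execution     */
    }
    return total;
}

Because loading the scratchpad and computing on it are separate processes, a double-buffered variant could overlap the prefetch of tile k+1 with the computation on tile k, which is the kind of optimization the excerpt says a pure loop-transformation approach does not readily expose.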
“…3 and 4 (the data reuse options are numbered). c) Identification of the most promising reuse options. This step involves identifying the most promising data reuse options for each array. Only data reuse options whose reuse-level arrays can be stored in different levels of the targeted physical memory hierarchy should be considered.…”
Section: ≤ |RO_a| ≤ K (mentioning)
confidence: 99%
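A minimal sketch of how such a filtering step might look, assuming each candidate reuse option is summarized only by the size of the reuse buffer it introduces and each memory level only by its capacity; the structures, names, and numbers below are illustrative, not the cited paper's actual selection algorithm.

#include <stdio.h>
#include <stddef.h>

/* Hypothetical descriptors: a reuse option is characterized here only by
 * the size (in words) of the buffer it would introduce, a memory level
 * only by its capacity.  A real cost model would carry more detail.     */
typedef struct { const char *name; size_t buffer_words;   } reuse_option_t;
typedef struct { const char *name; size_t capacity_words; } mem_level_t;

/* Keep only options whose reuse buffer fits in some level of the targeted
 * memory hierarchy, reporting the smallest level that can hold it.       */
static void filter_options(const reuse_option_t *opts, size_t n_opts,
                           const mem_level_t *levels, size_t n_levels)
{
    for (size_t o = 0; o < n_opts; ++o) {
        for (size_t l = 0; l < n_levels; ++l) {
            if (opts[o].buffer_words <= levels[l].capacity_words) {
                printf("%s fits in %s\n", opts[o].name, levels[l].name);
                break;                /* smallest level found; option kept */
            }
        }
    }
}

int main(void)
{
    const reuse_option_t opts[] = {
        { "option-1 (row buffer)",   256       },
        { "option-2 (frame buffer)", 1u << 20  },
    };
    const mem_level_t levels[] = {    /* ordered smallest to largest */
        { "registers",   64        },
        { "on-chip RAM", 16u << 10 },
    };
    filter_options(opts, 2, levels, 2);
    return 0;
}

Option 2 fits no level and is silently discarded, matching the excerpt's rule that only options that can be mapped onto the physical hierarchy remain under consideration.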
“…Data reuse is not exploited in most existing ASIC/FPGA hardware compilation environments. Data reuse exploitation targeting FPGAs, limited to one reuse level stored in registers, is described in [8], and an approach that prefetches and stores reused data in registers is presented in [9]. The approach described in [10] infers FPGA on-chip RAMs and shift registers to exploit data reuse.…”
Section: Introduction (mentioning)
confidence: 99%
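For concreteness, a small sketch of the register-based reuse this excerpt refers to: a 3-tap moving average keeps its current window in a shift register, so each input element is read from the array once even though it contributes to three outputs. The filter and window size are assumptions chosen only to illustrate the technique.

#define N 1024

/* 3-tap moving average over a[]: the window lives in a small shift
 * register (w0..w2), so each a[i] is read exactly once although it
 * participates in three output values -- register-level data reuse. */
void smooth(const int a[N], int out[N - 2])
{
    int w0 = a[0], w1 = a[1], w2;     /* shift register holding the window */
    for (int i = 2; i < N; ++i) {
        w2 = a[i];                    /* single new read per iteration */
        out[i - 2] = (w0 + w1 + w2) / 3;
        w0 = w1;                      /* shift the window forward */
        w1 = w2;
    }
}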
“…If this code is evaluated with the initial indices (i = 0, j = 0), an address (R = 2) and a further two indices (i = 1, j = 0) are generated. When these are iteratively evaluated using the same function, the sequence of addresses (2, 4, 6, 8) is generated. These are exactly the memory addresses touched by the original input code, presented in strictly increasing order and with all repetition removed.…”
Section: Output (mentioning)
confidence: 99%
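The following sketch reproduces the iteration described in this excerpt under one stated assumption: since the citing paper's actual index function is not given here, the affine form 2*i + 2 is used purely because it yields the quoted sequence (2, 4, 6, 8) from the initial indices (i = 0, j = 0).

#include <stdio.h>
#include <stdbool.h>

/* Hypothetical address generator: from the current loop indices it returns
 * one memory address and the indices to evaluate next, exactly the feedback
 * loop described in the quoted passage.                                     */
typedef struct { int i, j; } indices_t;

static bool next_address(indices_t *idx, int *addr)
{
    if (idx->i >= 4)                  /* four evaluations exhaust the sequence */
        return false;
    *addr = 2 * idx->i + 2;           /* address for the current indices (assumed form) */
    idx->i += 1;                      /* indices fed back into the same function */
    return true;
}

int main(void)
{
    indices_t idx = {0, 0};           /* initial indices (i = 0, j = 0) */
    int addr;
    while (next_address(&idx, &addr))
        printf("%d ", addr);          /* prints: 2 4 6 8 */
    printf("\n");
    return 0;
}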