Asia and South Pacific Conference on Design Automation, 2006.
DOI: 10.1109/aspdac.2006.1594733
PARLGRAN: Parallelism granularity selection for scheduling task chains on dynamically reconfigurable architectures

Cited by 10 publications (21 citation statements)
References 11 publications
“…Good speedups over the DSP processor are shown (between 3 and 20 times). Molen [59], ROCCC [54], DRESC [55], PARLGRAN [56] and the approaches presented in [57,58] are representative examples that exploit the parallelism of loops by using techniques such as loop unrolling and software pipelining to boost the performance of applications.…”
Section: Traditional Reconfigurable Systems and HLS Synthesis (mentioning)
confidence: 99%
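To illustrate the kind of source-level transformation these tools rely on, the sketch below unrolls a simple loop by hand so that independent operations are exposed for parallel hardware. It is a generic C example written for this report, not code taken from Molen, ROCCC, DRESC, or PARLGRAN; the function names and the unroll factor of 4 are illustrative assumptions.

```c
#include <stddef.h>
#include <stdio.h>

/* Baseline loop: one multiply-accumulate per iteration. */
void scale_add(const int *a, const int *b, int *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = a[i] * 2 + b[i];
}

/* Manually unrolled by a factor of 4: the four statements in the body are
 * independent, so an HLS flow can map them to parallel datapath units on
 * the reconfigurable fabric. A scalar epilogue handles leftover iterations
 * when n is not a multiple of 4. */
void scale_add_unrolled(const int *a, const int *b, int *out, size_t n)
{
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        out[i]     = a[i]     * 2 + b[i];
        out[i + 1] = a[i + 1] * 2 + b[i + 1];
        out[i + 2] = a[i + 2] * 2 + b[i + 2];
        out[i + 3] = a[i + 3] * 2 + b[i + 3];
    }
    for (; i < n; i++)          /* epilogue for the remaining iterations */
        out[i] = a[i] * 2 + b[i];
}

int main(void)
{
    int a[6] = {1, 2, 3, 4, 5, 6}, b[6] = {6, 5, 4, 3, 2, 1}, out[6];
    scale_add_unrolled(a, b, out, 6);
    for (size_t i = 0; i < 6; i++)
        printf("%d ", out[i]);
    printf("\n");
    return 0;
}
```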
“…Previous work [2] has also attempted to exploit data-parallelism with partial RTR with suitable task workload selection -however, their principles are not applicable when bandwidth is limited.…”
Section: Introduction (mentioning)
confidence: 98%
“…The goal in [13] is to define a specific methodology for scheduling the tasks of these applications in order to reduce the overall completion time. The same authors present in [7] an enhanced solution to the same problem: PARLGRAN tries to reduce the total execution time using two different techniques. The first, called simple fragmentation reduction, places the new task in the first available area on the side of the FPGA opposite the location of the previous task.…”
Section: Related Work (mentioning)
confidence: 99%
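The placement idea described in the excerpt above can be sketched as follows. This is a hypothetical 1-D column model written purely for illustration, not PARLGRAN's actual data structures or placer; NUM_COLS, busy[], and place_opposite() are invented names, and the heuristic simply scans for the first free gap starting from the edge opposite the previous task.

```c
#include <stdbool.h>
#include <stdio.h>

#define NUM_COLS 32            /* hypothetical 1-D column model of the FPGA */

static bool busy[NUM_COLS];    /* true = column currently occupied */

/* Sketch of a "place on the opposite side" heuristic: if the previous task
 * sat in the left half of the device, scan for free columns starting from
 * the right edge (and vice versa), and take the first fit found.
 * Returns the starting column, or -1 if no contiguous gap is free. */
int place_opposite(int prev_start, int width)
{
    bool prev_on_left = prev_start < NUM_COLS / 2;
    int step  = prev_on_left ? -1 : 1;                /* scan direction  */
    int start = prev_on_left ? NUM_COLS - width : 0;  /* far edge first  */

    for (int s = start; s >= 0 && s + width <= NUM_COLS; s += step) {
        bool free_gap = true;
        for (int c = s; c < s + width; c++)
            if (busy[c]) { free_gap = false; break; }
        if (free_gap) {
            for (int c = s; c < s + width; c++)
                busy[c] = true;                       /* commit placement */
            return s;
        }
    }
    return -1;                                        /* no room */
}

int main(void)
{
    int t1 = place_opposite(NUM_COLS, 8);  /* no previous task: sentinel treated as "right" */
    int t2 = place_opposite(t1, 8);        /* lands on the side opposite t1 */
    printf("task1 at column %d, task2 at column %d\n", t1, t2);
    return 0;
}
```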
“…Module reuse means that two tasks of the same type can be executed on exactly the same module on board, completely hiding the reconfiguration time. Configuration prefetching and module reuse are combined with time-partitioning techniques to optimize the latency of the application [7] [8] [9]. Anti-fragmentation techniques avoid fragmenting the available space on board by trying to maximize the size of free adjacent areas.…”
Section: Introduction (mentioning)
confidence: 99%
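A minimal sketch of how configuration prefetching and module reuse shorten the completion time of a task chain is given below. The struct task fields, chain_finish_time(), and the single-reconfiguration-port model are illustrative assumptions, not the scheduler of [7], [8], or [9]; the sketch also assumes a free region is always available for a prefetched load, a simplification that real partial-RTR schedulers must verify.

```c
#include <stdio.h>

/* Hypothetical task-chain model: each task has a type, an execution time,
 * and a reconfiguration time for loading its module onto the fabric. */
struct task { int type; int exec_t; int reconf_t; };

static int max_i(int a, int b) { return a > b ? a : b; }

/* Sketch of configuration prefetching with module reuse on a chain:
 * - reuse: if the most recently loaded module has the same type, skip the
 *   reconfiguration entirely, so its latency is hidden completely;
 * - prefetch: otherwise start reconfiguring as soon as the single
 *   reconfiguration port is free, overlapping with the predecessor's
 *   execution (assumed to run in a different region of the fabric), so
 *   the task only waits for whichever of the two finishes later. */
int chain_finish_time(const struct task *chain, int n)
{
    int exec_done   = 0;  /* when the previous task finishes executing  */
    int port_free   = 0;  /* when the reconfiguration port becomes free */
    int loaded_type = -1; /* type of the most recently configured module */

    for (int i = 0; i < n; i++) {
        int ready = exec_done;                  /* data dependency in the chain */
        if (chain[i].type != loaded_type) {
            int reconf_done = port_free + chain[i].reconf_t; /* prefetched load */
            port_free   = reconf_done;
            ready       = max_i(ready, reconf_done);
            loaded_type = chain[i].type;
        }
        exec_done = ready + chain[i].exec_t;
    }
    return exec_done;
}

int main(void)
{
    /* Second task reuses the first module; the third is prefetched while
     * the first two execute, so only part of its load latency is exposed. */
    struct task chain[] = { {0, 10, 5}, {0, 10, 5}, {1, 8, 5} };
    printf("chain finishes at t=%d\n", chain_finish_time(chain, 3));
    return 0;
}
```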