Proceedings of the Conference on Functional Programming Languages and Computer Architecture 1993
DOI: 10.1145/165180.165201

Generation and quantitative evaluation of dataflow clusters

Abstract: Multithreaded or hybrid von Neumann/dataflow execution models have an advantage over the fine-grain dataflow model in that they significantly reduce the run-time overhead incurred by matching. In this paper, we look at two issues related to the evaluation of a coarse-grain dataflow model of execution. The first issue concerns the compilation of coarse-grain code from fine-grain code. In this study, the concept of coarse-grain code is captured by clusters, which can be thought of as mini-dataflow graphs which ex…
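The abstract's picture of a cluster, a small dataflow graph dispatched as a unit once all of its external inputs have arrived so that token matching is paid once per cluster rather than once per fine-grain instruction, can be made concrete with a short sketch. The code below is only an illustration under assumed, simplified semantics; the names (Cluster, receive, fire) and the representation are hypothetical and not taken from the paper.

```python
# Illustrative sketch only: a cluster as a mini-dataflow graph that fires
# once all of its external inputs are present (assumed semantics, not the
# paper's actual representation).
import operator

class Cluster:
    def __init__(self, name, external_inputs, body):
        self.name = name
        self.external_inputs = set(external_inputs)  # tokens matched once, at cluster level
        self.body = body                             # ordered fine-grain ops run without matching
        self.arrived = {}

    def receive(self, port, value):
        """Record one external token; fire when the last one arrives."""
        self.arrived[port] = value
        if set(self.arrived) == self.external_inputs:
            return self.fire()
        return None

    def fire(self):
        """Run the internal mini-dataflow graph sequentially, with no further matching."""
        env = dict(self.arrived)
        for dst, op, srcs in self.body:              # e.g. ("t1", operator.add, ("a", "b"))
            env[dst] = op(*(env[s] for s in srcs))
        return env


# Example: a cluster computing (a + b) * c with a single match point.
c = Cluster("example",
            external_inputs=["a", "b", "c"],
            body=[("t1", operator.add, ("a", "b")),
                  ("out", operator.mul, ("t1", "c"))])
c.receive("a", 2)
c.receive("b", 3)
print(c.receive("c", 4)["out"])   # 20
```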

Cited by 9 publications (6 citation statements)
References 14 publications

“…It has also been considered by Roh et al [32], where they have performed simulations on parallel scheduling decisions for instruction sets of a functional language. Simple workloads are mapped to various simulated architectures, using a "mergeup" algorithm, which is equivalent to our LLS, and "mergedown" algorithm, which is equivalent to our HLS.…”
Section: Architecture (mentioning)
confidence: 99%
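The "mergeup"/"mergedown" comparison in the statement above concerns the direction in which fine-grain nodes are grouped into larger units. As a rough, hedged illustration only, the sketch below performs a bottom-up grouping in that spirit: a producer is merged into its consumer's cluster when all of its consumers already resolve to a single cluster. This is an assumed simplification, not the algorithm simulated by Roh et al.; the graph, node names, and helper functions are hypothetical.

```python
# Rough illustration of a bottom-up ("merge-up"-style) grouping pass over an
# acyclic dataflow graph. Assumed simplification, not the cited algorithm.
from collections import defaultdict

def merge_up(edges, nodes):
    """edges: (producer, consumer) pairs; nodes: list in topological order."""
    consumers = defaultdict(set)
    for p, c in edges:
        consumers[p].add(c)

    cluster_of = {n: n for n in nodes}          # start with singleton clusters

    def find(n):                                # resolve a node to its current cluster id
        while cluster_of[n] != n:
            n = cluster_of[n]
        return n

    # Visit producers after their consumers (reverse topological order).
    for n in reversed(nodes):
        cons = {find(c) for c in consumers[n]}
        if len(cons) == 1:                      # exactly one consuming cluster
            cluster_of[find(n)] = cons.pop()    # merge the producer into it

    groups = defaultdict(list)
    for n in nodes:
        groups[find(n)].append(n)
    return list(groups.values())


# Example: "a" feeds two separate consumer chains, so it stays on its own;
# "c" merges with "e" and "d" merges with "f".
nodes = ["a", "c", "d", "e", "f"]
edges = [("a", "c"), ("a", "d"), ("c", "e"), ("d", "f")]
print(merge_up(edges, nodes))   # [['a'], ['c', 'e'], ['d', 'f']]
```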
“…The thread size, however, is limited by the last two objectives. In fact, it was reported in [15] that blind efforts to increase the thread size, even when they satisfy the nonblocking and parallelism objectives, can result in a decrease in overall performance. Larger threads tend to have larger numbers of inputs and can result in a larger input latency, defined as the time delay between the arrival of the first token to a thread instance and that of the last token, at which time the thread can start executing [16].…”
Section: Code Generation (mentioning)
confidence: 99%
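The input-latency definition quoted above can be made concrete with a small calculation: for one thread instance it is the gap between the arrival of the first and the last input token, and a thread with more inputs gives that gap more room to grow. The helper below is a hedged illustration of the definition only; the function name and data layout are assumptions, not from the cited work.

```python
# Hedged illustration of the "input latency" definition quoted above:
# the delay between the first and last token arriving at one thread instance.

def input_latency(arrival_times):
    """arrival_times: arrival time of each input token to one thread instance."""
    return max(arrival_times) - min(arrival_times)

# A 2-input thread vs. a merged 4-input thread over the same token stream:
print(input_latency([3, 5]))          # 2  -> the small thread can start at t=5
print(input_latency([3, 5, 9, 12]))   # 9  -> the larger thread waits until t=12
```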
“…Several projects use bottom-up, multithreaded code generation strategies based on dataflow graphs [16,32,13,37,27]. Most of these schemes generate sequential threads for programs written in Id [23], a nonstrict language.…”
Section: Related Work (mentioning)
confidence: 99%
“…Most of these schemes generate sequential threads for programs written in Id [23], a nonstrict language. The nonstrict semantics of Id requires a more careful partitioning strategy than what is required under the strict semantics of Sisal [18] to avoid deadlock [21,27].…”
Section: Related Work (mentioning)
confidence: 99%