Stream-Dataflow Acceleration
2017 · DOI: 10.1145/3140659.3080255

Abstract: Demand for low-power data processing hardware continues to rise inexorably. Existing programmable and "general purpose" solutions (e.g., SIMD, GPGPUs) are insufficient, as evidenced by the order-of-magnitude improvements and industry adoption of application- and domain-specific accelerators in important areas like machine learning, computer vision, and big data. The stark tradeoffs between efficiency and generality at these two extremes pose a difficult question: how could domain-specific hardware efficiency be a…

Cited by 46 publications (44 citation statements) · References 34 publications
“…As a result, a considerable number of CGRAs adopt other execution models that employ dynamic scheduling or dataflow mechanisms to exploit dynamic parallelism. These models enable high performance for more application types and alleviate the burden on compilers [19,20,83,84]. The execution models of CGRAs are evolving from static scheduling to dynamic scheduling and from sequential execution to dataflow execution, as indicated in Table 2.…”
Section: Evolution of CGRAs
confidence: 99%
“…Polymorphic Pipeline Array supported fine-grained parallelism via software pipelining and coarse-grained pipeline parallelism, which derive from ILP and TLP [54]. MLP and DLP support has become a focus of recent CGRAs for data-intensive domains [19,20]. However, few CGRAs support speculative parallelism well, even though it is an important source of parallelism to exploit.…”
Section: Problem Formulation
confidence: 99%