2020
DOI: 10.1007/978-3-030-44534-8_21

A CGRA Definition Framework for Dataflow Applications

Cited by 2 publications (2 citation statements)
References 22 publications
“…Coarse-Grained Reconfigurable Architectures (CGRAs) are architectures that can selectively use or disuse various components, subsets of the architecture to gain much higher performance per Watt over a more diverse set of applications than conventional processors such as multi-core CPUs and GPUs (Choi and Kee, 2015). In particular, owing to the reconfigurability of the device topology, CGRAs are well-suited for dataflow programs, i.e., specifications for the connectivity of functional units that explicitly represent and correspond to the flow of data in a program (Charitopoulos and Pnevmatikatos, 2020). Archetypical in this class of programs are multi-layered Deep Neural Networks (DNNs), wherein individual layers potentially map to discrete subsets of the CGRA and with data flowing between them in the form of activations (Choi and Kee, 2015).…”
Section: An E2E Programming Model for AI Engine Architectures
confidence: 99%
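
As a rough illustration of the dataflow view described in the excerpt above, the sketch below models DNN layers as functional nodes assigned to disjoint regions of a CGRA, with edges representing activations flowing between them. The class names, the tile-region encoding, and the example layers are purely illustrative assumptions and are not taken from the cited works.

```python
# Minimal illustrative sketch (assumptions, not from the cited papers):
# a dataflow graph in which each DNN layer is a functional node mapped to a
# hypothetical block of CGRA tiles, with edges carrying activations between them.
from dataclasses import dataclass, field

@dataclass
class LayerNode:
    name: str            # e.g. "conv1", "fc1"
    op: str              # type of functional unit the node requires
    cgra_region: tuple   # assumed (row, col, rows, cols) tile block

@dataclass
class DataflowGraph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)   # (src, dst) activation flows

    def add_node(self, node: LayerNode) -> None:
        self.nodes[node.name] = node

    def connect(self, src: str, dst: str) -> None:
        # An edge states that activations produced by `src` feed `dst`.
        self.edges.append((src, dst))

# A three-layer network mapped onto three disjoint tile blocks.
g = DataflowGraph()
g.add_node(LayerNode("conv1", op="mac", cgra_region=(0, 0, 4, 4)))
g.add_node(LayerNode("relu1", op="alu", cgra_region=(0, 4, 4, 2)))
g.add_node(LayerNode("fc1",   op="mac", cgra_region=(4, 0, 2, 6)))
g.connect("conv1", "relu1")
g.connect("relu1", "fc1")

for src, dst in g.edges:
    print(f"activations: {src} -> {dst}")
```
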
“…The naive approach considers these nodes as separate, leading to CGRA designs that are harder to route due to cell heterogeneity. To address this, we include Node Merging in the CSFD phase: an algorithm, described in detail in [27], that determines whether two nodes with the same functionality should be merged under the same bit-width and what the optimal bit-width for the current application is. We use two metrics for Node Merging: the bit-width difference between the two nodes and the Percentage Gain of merging these two nodes.…”
Section: Cell Structure and Functionality Definition (CSFD) Phase
confidence: 99%
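
The excerpt above only summarizes the Node Merging step; the actual metrics are defined in [27] and are not reproduced here. The following sketch is therefore just a plausible shape for such a decision: the area-based Percentage Gain formula and the thresholds (`max_bit_diff`, `min_gain`) are assumptions made for illustration, not the paper's definitions.

```python
# Hedged sketch of a Node Merging-style decision. The two criteria named in the
# citation statement (bit-width difference, Percentage Gain) are used, but the
# gain formula and thresholds below are assumptions, not those of [27].

def should_merge(bits_a: int, bits_b: int,
                 max_bit_diff: int = 8,
                 min_gain: float = 0.10) -> tuple:
    """Decide whether two same-functionality nodes should share one cell.

    bits_a, bits_b : operand bit-widths of the two candidate nodes
    max_bit_diff   : assumed limit on the acceptable bit-width difference
    min_gain       : assumed minimum Percentage Gain required to merge
    Returns (merge?, bit-width of the merged cell).
    """
    bit_diff = abs(bits_a - bits_b)
    merged_bits = max(bits_a, bits_b)   # the merged cell must cover both widths

    # Assumed Percentage Gain: fraction of cell area saved by instantiating one
    # wider cell instead of two separate ones (area approximated by bit-width).
    gain = 1.0 - merged_bits / (bits_a + bits_b)

    return (bit_diff <= max_bit_diff and gain >= min_gain), merged_bits

# Example: an 8-bit and a 16-bit node with the same functionality.
merge, width = should_merge(8, 16)
print(f"merge={merge}, merged bit-width={width}")
```
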