2019 IEEE 13th International Conference on ASIC (ASICON) 2019
DOI: 10.1109/asicon47005.2019.8983647
An FPGA Implementation of GCN with Sparse Adjacency Matrix

Cited by 10 publications (3 citation statements); References 3 publications.
“…AWB-GCN [11] implements a large number of processing elements (PEs) as multiply-accumulate cells (MACs) and balances the workload across the PEs to accelerate sparse matrix multiplication. Other works [10,13,14] accelerate GCNs by designing efficient pipeline architectures, optimizing memory access patterns, and increasing parallelism.…”
Section: B. Deep Learning Inference Accelerators (mentioning)
Confidence: 99%
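The sparse-matrix multiplication that AWB-GCN-style designs distribute across PEs can be sketched in software. The following is an illustrative pure-Python sketch (function names and data layout are assumptions, not taken from any cited work) of an SpMM over a CSR-format sparse matrix, where every nonzero contributes one multiply-accumulate per output column:

```python
# Illustrative sketch (not the AWB-GCN implementation): sparse-by-dense
# matrix multiply with the sparse operand in CSR form.
def csr_spmm(indptr, indices, data, dense, n_cols):
    """Multiply a CSR sparse matrix by a dense matrix (list of rows)."""
    n_rows = len(indptr) - 1
    out = [[0.0] * n_cols for _ in range(n_rows)]
    for row in range(n_rows):
        # Each stored nonzero triggers one MAC per output column;
        # balancing these MACs across PEs is the workload-balancing problem.
        for k in range(indptr[row], indptr[row + 1]):
            col, val = indices[k], data[k]
            for j in range(n_cols):
                out[row][j] += val * dense[col][j]
    return out
```

Because graph adjacency matrices are highly sparse and irregular, rows carry very different nonzero counts, which is why static PE assignment leads to the load imbalance these accelerators address.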
“…In contrast, GATs require two memory synchronizations between the three steps of each layer because of the masked self-attention mechanism [1]. The basic computation in existing works on GCNs and CNNs [10][11][12][13][14][15][16][17][18][19] is unchanged: they still use multiplication with heavy dependence on DSPs. Moreover, the loss of accuracy has not been studied in detail.…”
Section: B. Deep Learning Inference Accelerators (mentioning)
Confidence: 99%
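For context, the per-layer computation these GCN accelerators implement is the propagation rule H' = σ(Â H W), with Â the normalized adjacency matrix. A minimal dense sketch (illustrative only; the function name, ReLU activation, and pre-normalized Â are assumptions):

```python
# Illustrative sketch of one GCN layer: H' = ReLU(A_hat @ H @ W).
# a_hat: normalized adjacency; h: node features; w: layer weights
# (all given as lists of rows).
def gcn_layer(a_hat, h, w):
    def matmul(x, y):
        return [[sum(x[i][k] * y[k][j] for k in range(len(y)))
                 for j in range(len(y[0]))] for i in range(len(x))]
    # Aggregation (A_hat @ H) followed by transformation (@ W) --
    # the two matrix products that dominate DSP usage on FPGAs.
    z = matmul(matmul(a_hat, h), w)
    return [[max(0.0, v) for v in row] for row in z]
```

The two chained products are exactly where a sparse Â pays off: replacing the dense aggregation with an SpMM avoids multiplying by the zeros that dominate real adjacency matrices.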