2022
DOI: 10.48550/arxiv.2204.13103
Preprint

FlowGNN: A Dataflow Architecture for Universal Graph Neural Network Inference via Multi-Queue Streaming

Abstract: Graph neural networks (GNNs) have recently exploded in popularity thanks to their broad applicability to graph-related problems such as quantum chemistry, drug discovery, and high energy physics. However, meeting demand for novel GNN models and fast inference simultaneously is challenging because of the gap between developing efficient accelerators and the rapid creation of new GNN models. Prior art focuses on the acceleration of specific classes of GNNs, such as Graph Convolutional Network (GCN), but lacks th…

Cited by 3 publications (3 citation statements)
References 39 publications

“…Results show their designs achieve millisecond level latency. They also propose FlowGNN [43] which can flexibly support the majority of message-passing GNNs. [11] proposes a GCN accelerator named GROW with Gustavson's algorithm to architect a sparse-dense GEMM accelerator with row-wise product.…”
Section: Related Work
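
The row-wise formulation behind GROW (Gustavson's algorithm) builds each output row of a sparse-dense product by scaling and accumulating rows of the dense operand, instead of computing inner products over mismatched sparsity patterns. A minimal NumPy sketch of that dataflow over a CSR matrix, purely illustrative and not a rendering of GROW's hardware design:

```python
import numpy as np

def gustavson_spmm(indptr, indices, data, B):
    """Row-wise (Gustavson) sparse-dense product C = A @ B, with A in CSR.

    Each nonzero A[i, k] scales row k of the dense matrix B and
    accumulates it into output row i -- the "row-wise product" order.
    """
    n_rows = len(indptr) - 1
    C = np.zeros((n_rows, B.shape[1]), dtype=B.dtype)
    for i in range(n_rows):
        for p in range(indptr[i], indptr[i + 1]):
            C[i] += data[p] * B[indices[p]]
    return C

# CSR encoding of A = [[0, 2, 0], [1, 0, 3]].
indptr, indices, data = [0, 1, 3], [1, 0, 2], [2.0, 1.0, 3.0]
B = np.arange(12.0).reshape(3, 4)
assert np.allclose(gustavson_spmm(indptr, indices, data, B),
                   np.array([[0.0, 2.0, 0.0], [1.0, 0.0, 3.0]]) @ B)
```
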
“…FlowGNN (Sarkar et al., 2022) proposes a general-purpose GNN acceleration framework using high-level synthesis to deal with the imbalanced development between new GNN algorithms and new accelerators. Unlike previous class-specific GNN model accelerators, FlowGNN supports edge embeddings for widely popular GNN models and can be extended to new models.…”
Section: FPGA-Based Hardware Accelerators
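
The message-passing pattern FlowGNN targets reduces to per-edge message computation followed by per-node aggregation; supporting edge embeddings means each edge's own feature vector enters its message. A rough NumPy sketch of one such layer (the function and weight names and the sum aggregation are our assumptions for illustration, not FlowGNN's interface):

```python
import numpy as np

def message_passing_layer(x, edge_index, edge_attr, W_node, W_edge):
    """One generic message-passing step with edge embeddings.

    x          : (N, F_in)  node features
    edge_index : (2, E)     source/target node ids per edge
    edge_attr  : (E, F_e)   edge embeddings, mixed into each message
    """
    src, dst = edge_index
    # Per-edge messages combine the source node's features with the
    # edge's own embedding.
    messages = x[src] @ W_node.T + edge_attr @ W_edge.T   # (E, F_out)
    # Sum-aggregate messages at each target node (a scatter-add).
    out = np.zeros((x.shape[0], W_node.shape[0]))
    np.add.at(out, dst, messages)
    return np.maximum(out, 0.0)                           # ReLU update

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))                 # 3 nodes, 4 features each
edge_index = np.array([[0, 1], [1, 2]])     # edges 0->1 and 1->2
edge_attr = rng.normal(size=(2, 4))
h = message_passing_layer(x, edge_index, edge_attr,
                          rng.normal(size=(8, 4)), rng.normal(size=(8, 4)))
```
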
“…Low-latency GNN inference is needed in many real-world applications, such as traffic prediction [1], scientific simulation [2], etc. While many techniques [3], [4], [5], [6], [7], [8], [9] have been proposed to accelerate GNN inference, no work has systematically studied the data sparsity in GNNs to reduce the inference latency. GNNs ( [10], [11]) involve various computation kernels, where there are three types of data sparsity:…”
Section: Introduction
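
Where that sparsity sits is easiest to see in a GCN-style layer H = Â X W: the feature transform X W is a dense GEMM, while the aggregation multiplies by an adjacency matrix Â that is overwhelmingly zero for real graphs. A small SciPy illustration (the graph size and degree are arbitrary assumptions, not figures from the cited work):

```python
import numpy as np
from scipy.sparse import csr_matrix

rng = np.random.default_rng(0)
N, F_in, F_out = 1000, 64, 64

# A random graph with ~5 edges per node: roughly 99.5% of the N x N
# adjacency is zero, so aggregation is a sparse (SpMM) kernel.
rows = rng.integers(0, N, size=5 * N)
cols = rng.integers(0, N, size=5 * N)
A_hat = csr_matrix((np.ones(5 * N), (rows, cols)), shape=(N, N))

X = rng.normal(size=(N, F_in))
W = rng.normal(size=(F_in, F_out))

Z = X @ W        # dense GEMM kernel: feature transformation
H = A_hat @ Z    # sparse kernel: neighbor aggregation over the adjacency
print(f"adjacency density: {A_hat.nnz / (N * N):.4%}")
```
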