2023 IEEE International Symposium on High-Performance Computer Architecture (HPCA)
DOI: 10.1109/hpca56546.2023.10071015

FlowGNN: A Dataflow Architecture for Real-Time Workload-Agnostic Graph Neural Network Inference

Cited by 36 publications (8 citation statements)
References 38 publications
“…First, we manually translate the sPHENIX model into synthesizable C code and feed it into the HLS tool, Vitis HLS [6]. Then, we perform hardware optimizations of the model in HLS following the FlowGNN architecture [7], which is the state-of-the-art GNN architecture on FPGA.…”
Section: Generation of the GNN IP Core (mentioning; confidence: 99%)
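To make the flow the statement above describes concrete, here is a minimal sketch of what one hand-translated, synthesizable MLP layer might look like when fed to Vitis HLS. The dimensions, the fixed-point width, and the function name mlp_layer are illustrative assumptions, not the authors' actual sPHENIX code.

#include <ap_fixed.h>

typedef ap_fixed<16, 6> fix_t;   // assumed 16-bit fixed-point type

#define DIM_IN  8
#define DIM_OUT 8

// One MLP layer: out = ReLU(W * in + b).
void mlp_layer(const fix_t in[DIM_IN],
               const fix_t w[DIM_OUT][DIM_IN],
               const fix_t b[DIM_OUT],
               fix_t out[DIM_OUT]) {
#pragma HLS ARRAY_PARTITION variable=w complete dim=2
#pragma HLS ARRAY_PARTITION variable=in complete
    for (int o = 0; o < DIM_OUT; o++) {
#pragma HLS PIPELINE II=1
        fix_t acc = b[o];
        for (int i = 0; i < DIM_IN; i++) {
#pragma HLS UNROLL
            acc += w[o][i] * in[i];
        }
        out[o] = (acc > fix_t(0)) ? acc : fix_t(0);  // ReLU activation
    }
}

With the inner loop fully unrolled and the outer loop pipelined at II=1, the synthesized unit can produce one output element per cycle; this is the kind of hardware optimization the HLS step performs on the translated model.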
“…The current TrackGNN model we are using in sPHENIX has one GNN layer, which includes 4 multi-layer perceptron (MLP) layers for both node and edge embedding, with a dimension of 8. The proposed architecture follows the message-passing framework in FlowGNN [7]: the node embeddings are processed first, followed by an adapter that orchestrates the node information to the correct edge processing units for edge embedding computation and message aggregation. We also use quantization to lower the data precision, reducing the memory and computation requirements.…”
Section: Generation of the GNN IP Core (mentioning; confidence: 99%)
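A minimal sketch of the node-first, edge-second ordering the statement above describes, assuming the 8-dimensional embeddings and quantized fixed-point arithmetic it mentions. The stand-in bodies for the 4-layer MLPs, the sum aggregation, and all names (trackgnn_layer, Edge, etc.) are hypothetical, not the actual TrackGNN code.

#include <ap_fixed.h>

typedef ap_fixed<16, 6> fix_t;     // quantized precision (width is an assumption)

#define EMB_DIM   8
#define MAX_NODES 1024
#define MAX_EDGES 4096

struct Edge { int src; int dst; };

// Identity stand-in for the 4-layer node-embedding MLP.
static void node_mlp(const fix_t in[EMB_DIM], fix_t out[EMB_DIM]) {
    for (int d = 0; d < EMB_DIM; d++) out[d] = in[d];
}

// Stand-in for the edge-embedding MLP: combines the two endpoint embeddings.
static void edge_mlp(const fix_t a[EMB_DIM], const fix_t c[EMB_DIM],
                     fix_t out[EMB_DIM]) {
    for (int d = 0; d < EMB_DIM; d++) out[d] = a[d] + c[d];
}

void trackgnn_layer(fix_t node_emb[MAX_NODES][EMB_DIM],
                    const Edge edges[MAX_EDGES],
                    int n_nodes, int n_edges,
                    fix_t agg[MAX_NODES][EMB_DIM]) {
    // Stage 1: node embeddings are processed first.
    for (int v = 0; v < n_nodes; v++) {
        fix_t tmp[EMB_DIM];
        node_mlp(node_emb[v], tmp);
        for (int d = 0; d < EMB_DIM; d++) node_emb[v][d] = tmp[d];
    }
    // Stage 2: the adapter routes each edge's endpoint embeddings to an
    // edge processing unit, which computes the edge embedding and
    // accumulates the message onto the destination node.
    for (int e = 0; e < n_edges; e++) {
        fix_t msg[EMB_DIM];
        edge_mlp(node_emb[edges[e].src], node_emb[edges[e].dst], msg);
        for (int d = 0; d < EMB_DIM; d++)
            agg[edges[e].dst][d] += msg[d];    // sum aggregation (assumed)
    }
}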
“…FlowGNN [120] is proposed to support generic GNN models for real-time inference applications. By introducing explicit message passing and multi-level parallelism, the authors provide a comprehensive solution for GNN acceleration without sacrificing adaptability.…”
Section: Framework for FPGA-based Accelerators (mentioning; confidence: 99%)
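One way to read the "multi-level parallelism" mentioned above in hardware terms is unit replication on top of per-unit pipelining. The sketch below replicates a node-processing unit four times over disjoint node subsets; the replication factor, the placeholder compute, and all identifiers are assumptions for illustration, not FlowGNN's actual configuration.

#include <ap_fixed.h>

typedef ap_fixed<16, 6> fix_t;

#define EMB_DIM   8
#define MAX_NODES 1024
#define NUM_PU    4          // number of replicated processing units (assumed)

// Placeholder for one unit's per-node work (ReLU stands in for the real update).
static void process_node(fix_t emb[EMB_DIM]) {
    for (int d = 0; d < EMB_DIM; d++)
        emb[d] = (emb[d] > fix_t(0)) ? emb[d] : fix_t(0);
}

void node_stage(fix_t emb[MAX_NODES][EMB_DIM], int n_nodes) {
#pragma HLS ARRAY_PARTITION variable=emb cyclic factor=4 dim=1
    for (int pu = 0; pu < NUM_PU; pu++) {
#pragma HLS UNROLL          // coarse level: NUM_PU units instantiated in parallel
        for (int v = pu; v < n_nodes; v += NUM_PU) {
#pragma HLS PIPELINE II=1   // fine level: each unit is internally pipelined
            process_node(emb[v]);
        }
    }
}

Cyclically partitioning the embedding array lets the four replicated units read disjoint banks in the same cycle, which is what makes the coarse-level unroll effective.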
“…Traditional Central Processing Units (CPUs), initially designed for sequential tasks, started incorporating SIMD-based graph extensions to enhance parallel processing capabilities [215]. Graphics Processing Units (GPUs), with their inherent parallelism, were enhanced with kernel support tailored specifically for graph algorithms [148,174]. Beyond these general-purpose processors, the industry also witnessed the advent of domain-specific accelerators [86,115,153,202], specifically crafted to speed up graph computations, addressing the unique challenges and demands that graph algorithms present.…”
Section: Background and Motivation (mentioning; confidence: 99%)
“…FlowGNN [153]: FlowGNN introduces a dataflow architecture tailored for the acceleration of GNNs that utilize message-passing mechanisms. The FlowGNN architecture is scalable and supports a broad spectrum of GNN models, featuring a configurable dataflow that simultaneously computes node and edge embeddings as well as facilitates message passing, making it universally applicable across different models.…”
Section: AWB-GCN [65] (mentioning; confidence: 99%)
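The claim above that the dataflow "simultaneously computes node and edge embeddings" maps naturally onto HLS task-level dataflow: a node stage and an edge stage connected by a stream, so the edge stage starts consuming results while the node stage is still producing them. Everything below (the packet layout, names, and stand-in computations) is an illustrative assumption, not FlowGNN's actual implementation.

#include <ap_fixed.h>
#include <hls_stream.h>

typedef ap_fixed<16, 6> fix_t;

#define EMB_DIM 8
#define N_NODES 256

struct NodePkt { int id; fix_t emb[EMB_DIM]; };

// Producer: computes node embeddings and streams them out.
static void node_stage(const fix_t in[N_NODES][EMB_DIM],
                       hls::stream<NodePkt>& out) {
    for (int v = 0; v < N_NODES; v++) {
#pragma HLS PIPELINE II=1
        NodePkt p;
        p.id = v;
        for (int d = 0; d < EMB_DIM; d++)
            p.emb[d] = in[v][d];              // stand-in for the node MLP
        out.write(p);
    }
}

// Consumer: begins edge/message work as soon as node packets arrive.
static void edge_stage(hls::stream<NodePkt>& in,
                       fix_t agg[N_NODES][EMB_DIM]) {
    for (int v = 0; v < N_NODES; v++) {
#pragma HLS PIPELINE II=1
        NodePkt p = in.read();
        for (int d = 0; d < EMB_DIM; d++)
            agg[p.id][d] = p.emb[d];          // stand-in for edge embedding + aggregation
    }
}

void gnn_top(const fix_t in[N_NODES][EMB_DIM], fix_t agg[N_NODES][EMB_DIM]) {
#pragma HLS DATAFLOW   // both stages execute concurrently, linked by the stream
    hls::stream<NodePkt> link;
    node_stage(in, link);
    edge_stage(link, agg);
}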