2023
DOI: 10.1109/tc.2022.3197083

GRIP: A Graph Neural Network Accelerator Architecture

Cited by 32 publications (16 citation statements)
References 17 publications

“…Figure 2 presents a baseline GCN accelerator comprising aggregation engines and combination engines, similar to previous work [8, 38, 50, 75], where the green component is newly added and will be discussed in Section 4.3. (Normalized Laplacian refers to a form of adjacency matrix defined as Ã = I − D^{-1/2} A D^{-1/2}, where D is a degree matrix.) The core of the combination engine is a systolic array that supports efficient matrix multiplication.…”
Section: Baseline GCN Accelerator (mentioning)
confidence: 99%
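
To make the two-stage structure described in this excerpt concrete, here is a minimal NumPy sketch of a single GCN layer split into an aggregation stage and a combination stage, using the normalized-Laplacian form quoted above. It is an illustrative software analogy, not the accelerator's implementation, and the function and variable names are my own.

```python
import numpy as np

def normalized_laplacian(A):
    """A_tilde = I - D^{-1/2} A D^{-1/2}, with D the degree matrix of A."""
    deg = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    D_inv_sqrt = np.diag(d_inv_sqrt)
    return np.eye(A.shape[0]) - D_inv_sqrt @ A @ D_inv_sqrt

def gcn_layer(A_tilde, X, W):
    # Aggregation stage: neighbor reduction over the (sparse, irregular)
    # graph structure -- the work handled by the aggregation engines.
    H = A_tilde @ X
    # Combination stage: dense H @ W -- the regular matrix multiplication
    # that maps naturally onto a systolic array.
    return np.maximum(H @ W, 0.0)  # ReLU

# Tiny usage example on a 4-vertex ring graph.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
X = np.random.rand(4, 8)   # input node features
W = np.random.rand(8, 4)   # layer weights
print(gcn_layer(normalized_laplacian(A), X, W).shape)  # (4, 4)
```

The split mirrors the hardware division the excerpt describes: an irregular, memory-bound aggregation step and a regular, compute-bound matrix product.
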
“…GNN systems adopt the idea of partitioning because the input graphs are unlikely to fit in a single machine's memory. Some systems use traditional edge-cut or vertex-cut methods [130,213] whereas others combine those with features like a cost model [87], feasibility score [111] or dataflow partitioning [95]. Table 3 summarizes the different partitioning methods.…”
Section: Partitioning (mentioning)
confidence: 99%
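
As a toy illustration of the traditional edge-cut approach mentioned in this excerpt (a sketch of my own, not the method of any of the cited systems), vertices are assigned to partitions by a simple hash and any edge whose endpoints land in different partitions is counted as cut:

```python
# Toy edge-cut partitioning sketch; a real system would balance load and
# minimize cut edges rather than hash vertices round-robin.
def edge_cut_partition(edges, num_vertices, num_parts):
    part_of = [v % num_parts for v in range(num_vertices)]  # toy "hash"
    local, cut = [], []
    for u, v in edges:
        (local if part_of[u] == part_of[v] else cut).append((u, v))
    return part_of, local, cut

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
parts, local_edges, cut_edges = edge_cut_partition(edges, num_vertices=4, num_parts=2)
print(parts, len(local_edges), len(cut_edges))  # [0, 1, 0, 1] 1 4
```
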
“…This partitioning strategy benefits edge-wise processing, because only the source and destination vertex data need to be loaded. Unlike NeuGraph, GReTA [95] does not partition the graph itself but the dataflow into blocks. The dataflow, also called a nodeflow, is a graph structure representing the propagation of feature vectors throughout the forward pass of the GNN model.…”
Section: Partitioning (mentioning)
confidence: 99%
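
The sketch below is one way to picture such a dataflow: starting from a set of target vertices, it records, layer by layer, which (source, destination) edges feed the forward pass and groups them into per-layer blocks. It is an illustration under my own assumptions, not GReTA's actual nodeflow or block format.

```python
def build_nodeflow(in_neighbors, targets, num_layers):
    """in_neighbors: dict vertex -> list of in-neighbors.
    Returns one block of (src, dst) edges per GNN layer, input layer first."""
    blocks, frontier = [], set(targets)
    for _ in range(num_layers):
        block = [(src, dst) for dst in frontier for src in in_neighbors.get(dst, [])]
        blocks.append(block)                   # edges needed at this layer
        frontier = {src for src, _ in block}   # sources feed the next hop back
    return list(reversed(blocks))

in_neighbors = {0: [1, 2], 1: [2], 2: [0, 3], 3: [1]}
for layer, block in enumerate(build_nodeflow(in_neighbors, targets=[0], num_layers=2)):
    print(f"layer {layer} block: {block}")
```
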
“…GRIP. GRIP [185] points out that there are two modes of computation involved in GCN inference, leading to inefficiency and high latency on existing accelerators. To solve this problem, GRIP first decomposes GCN inference into two parts, edge-centric and vertex-centric execution, and then designs specialized units to accelerate each part.…”
Section: Graph Learning Accelerators (mentioning)
confidence: 99%
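
As a rough software analogy of the two computation modes mentioned in this excerpt (not GRIP's hardware design; all names below are my own), the edge-centric part gathers and accumulates features along edges, while the vertex-centric part applies a dense per-vertex transform:

```python
import numpy as np

def edge_centric_aggregate(edges, X):
    """Irregular, memory-bound phase: accumulate neighbor features along edges."""
    acc = np.zeros_like(X)
    for src, dst in edges:      # one irregular memory access per edge
        acc[dst] += X[src]
    return acc

def vertex_centric_transform(H, W, b):
    """Regular, compute-bound phase: dense per-vertex transform plus ReLU."""
    return np.maximum(H @ W + b, 0.0)

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
X = np.random.rand(4, 8)                    # vertex features
W, b = np.random.rand(8, 4), np.zeros(4)    # layer parameters
out = vertex_centric_transform(edge_centric_aggregate(edges, X), W, b)
print(out.shape)  # (4, 4)
```

Specialized units for each phase, as the excerpt describes, target exactly this contrast between irregular and regular work.
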