2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00937
Binary Graph Neural Networks

Cited by 32 publications (19 citation statements) · References 25 publications
“…• Model quantization: as a particular quantization technique, binarization compresses model parameters and graph features in GNNs to yield significant acceleration at inference time. Binarized DGCNN [Bahri et al., 2021] and Bi-GCN [Wang et al., 2021a] both introduce binarization strategies into GNNs to speed up model execution and reduce memory consumption. Degree-Quant [Tailor et al., 2021] proposes a quantization-aware training method for GNNs that enables model inference with low-precision integer (INT8) arithmetic, achieving up to a 4.7× speedup on CPU platforms.…”
Section: GNN Compression
confidence: 99%
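The binarization the statement above describes can be illustrated with a minimal XNOR-Net-style sketch: weights are replaced by their sign in {−1, +1} plus a per-row scaling factor, so the dense product can in principle be executed with XNOR/popcount operations. This is an illustrative sketch of the general technique, not the actual implementation from Bahri et al. or Wang et al.; the function names are invented here.

```python
import numpy as np

def binarize_weights(W):
    """Return sign(W) in {-1, +1} and a per-output-row scale
    alpha = mean(|W|), the standard XNOR-Net-style approximation."""
    alpha = np.abs(W).mean(axis=1, keepdims=True)  # shape (out, 1)
    Wb = np.where(W >= 0, 1.0, -1.0)               # shape (out, in)
    return Wb, alpha

def binary_linear(x, Wb, alpha):
    """Approximate x @ W.T using only the binarized weights."""
    return (x @ Wb.T) * alpha.T

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))
x = rng.normal(size=(2, 8))

Wb, alpha = binarize_weights(W)
full = x @ W.T                        # full-precision product
approx = binary_linear(x, Wb, alpha)  # binary approximation
# The binary product tracks the full-precision one up to quantization error.
print(np.abs(full - approx).max())
```

Because `Wb` takes only two values, each output row needs just one bit per weight plus one float scale, which is where the memory and speed gains reported above come from.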
“…A dynamic graph, however, is more flexible in its topology and feature spaces than a static one, making it hard to apply these methods to dynamic graphs directly. Compression methods such as KD-GCN and Binarized DGCNN [Bahri et al., 2021] use a specially designed module to extend their use to dynamic graphs, providing an exemplar of dynamic-graph acceleration.…”
Section: Summary and Future Prospects
confidence: 99%
“…proposed a GNN-tailored quantization algorithm, and used an automatic bit-selection approach to pinpoint the most appropriate quantization bit-widths. and Bahri et al. (2020) further proposed binarized GNNs. Tailor et al. (2021) proposed Degree-Quant, an architecture-agnostic method for quantization-aware training on graphs.…”
Section: Quantization
confidence: 99%
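For the quantization-aware methods discussed above, the core arithmetic is symmetric low-bit quantization: floats are mapped to int8 with a shared scale, the matmul is accumulated in int32, and the result is rescaled. The sketch below shows that mechanism in isolation; the helper names and per-tensor scaling scheme are illustrative assumptions, not Degree-Quant's actual procedure (which additionally protects high-degree nodes during training).

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor INT8 quantization: scale chosen so the
    largest magnitude maps to 127."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def int8_matmul(xq, sx, wq, sw):
    """Integer matmul with int32 accumulation, then dequantize by the
    product of the two scales."""
    acc = xq.astype(np.int32) @ wq.astype(np.int32).T
    return acc.astype(np.float32) * (sx * sw)

rng = np.random.default_rng(1)
x = rng.normal(size=(2, 8)).astype(np.float32)
W = rng.normal(size=(4, 8)).astype(np.float32)

xq, sx = quantize_int8(x)
wq, sw = quantize_int8(W)
approx = int8_matmul(xq, sx, wq, sw)  # INT8 path
exact = x @ W.T                       # FP32 reference
print(np.abs(exact - approx).max())   # small quantization error
```

The int32 accumulator is the key detail: int8 × int8 products are accumulated without overflow, which is what lets CPUs execute this path several times faster than FP32, consistent with the 4.7× speedup quoted earlier.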
“…[49] applied bucket-based quantization of matrix–matrix products to accelerate the GCN [20] operator. [4] proposed a general framework for binarizing graph neural networks and specifically introduced efficient binarized versions of the dynamic EdgeConv operator [50], with real-world speed-ups on a low-power device.…”
Section: Relational Inference and GNNs
confidence: 99%
“…We benchmark our method against state-of-the-art approaches on the RoadTracer [5] dataset and find that it is orders of magnitude faster than recent competing methods while retaining competitive accuracy. With the increasing interest in deep learning inference on embedded hardware such as satellites or drones [3], our method, combined with recent advances in CNN and GNN quantization [4,8], opens the possibility of on-board, in-flight road extraction, which could reduce ground computation and bandwidth needs by transmitting graphs instead of images.…”
Section: Introduction
confidence: 99%