Proceedings of the 2023 ACM/SIGDA International Symposium on Field Programmable Gate Arrays
DOI: 10.1145/3543622.3573152
Graph-OPU: An FPGA-Based Overlay Processor for Graph Neural Networks

Cited by 5 publications (3 citation statements) · References 0 publications
“…Hence, many efforts have been made to develop fair GNNs. Based on the stage at which debiasing occurs, existing methods can be grouped into pre-processing, in-processing, and post-processing methods [3]. Pre-processing methods remove bias before GNN training by modifying the input graph structure, the input features, or both.…”
Section: Fairness of Graph (mentioning)
confidence: 99%
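As a concrete illustration of the pre-processing category in the excerpt above, the sketch below drops a fraction of same-group edges before training so that message passing mixes sensitive groups more evenly. This is a minimal sketch of one possible heuristic, not a method from the cited work; the function name debias_edges and the edge-dropping rule are assumptions.

```python
import numpy as np

def debias_edges(edge_index, sensitive, drop_prob=0.5, seed=0):
    # Hypothetical pre-processing step: randomly drop a fraction of
    # same-group (intra-group) edges before GNN training so that the
    # remaining structure mixes sensitive groups more evenly.
    #   edge_index: (2, E) int array of (source, target) node pairs
    #   sensitive:  (N,)   int array mapping each node to its group
    rng = np.random.default_rng(seed)
    src, dst = edge_index
    intra = sensitive[src] == sensitive[dst]           # same-group edges
    drop = intra & (rng.random(src.shape[0]) < drop_prob)
    return edge_index[:, ~drop]                        # keep the rest

# Usage: a 4-node graph with two sensitive groups; drop_prob=1.0 removes
# every intra-group edge, leaving only cross-group connections.
edges = np.array([[0, 1, 2, 3], [1, 0, 3, 2]])
groups = np.array([0, 0, 1, 1])
print(debias_edges(edges, groups, drop_prob=1.0))
```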
“…Debiasing for a specific task in the pre-training phase is inflexible, and maintaining a separate PGM for each task is inefficient. Moreover, most existing fairness methods lack theoretical analysis and guarantees [3,20]: they do not provide a practical certificate, i.e., provable lower bounds on the fairness of model predictions. Such guarantees are significant when deciding whether to deploy models in practical scenarios [5,19,34,35].…”
Section: Introduction (mentioning)
confidence: 99%
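The excerpt above distinguishes empirical debiasing from methods with provable guarantees. For concreteness, one standard fairness quantity that such certificates typically target is the demographic parity gap; this is a textbook definition used here for illustration, not the cited papers' specific bound:

\[ \Delta_{\mathrm{DP}} = \bigl|\Pr(\hat{y}=1 \mid s=0) - \Pr(\hat{y}=1 \mid s=1)\bigr|, \]

where \(\hat{y}\) is the model prediction and \(s\) is the sensitive attribute. A certified model would come with a provable upper bound on \(\Delta_{\mathrm{DP}}\), i.e., a lower bound on the fairness of its predictions.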
“…Quantization is an effective approach to overcoming the challenges posed by graph data, not only on embedded devices but also in GPU-based applications. Tango [76] aims to accelerate GNN training on GPU systems through kernels such as GEMM, SpMM, and SDDMM. The authors propose a set of rules and stochastic rounding for GPUs to speed up training without compromising accuracy.…”
Section: Quantization Approaches for GNNs (mentioning)
confidence: 99%
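Stochastic rounding, mentioned in the excerpt above, rounds each value up or down with probability proportional to its fractional distance to the quantization grid, so the rounding error is zero-mean in expectation. The sketch below is a generic NumPy illustration under an assumed symmetric int8 scheme; it is not Tango's GPU implementation, and the helper name stochastic_round_quantize is an assumption.

```python
import numpy as np

def stochastic_round_quantize(x, num_bits=8, rng=None):
    # Quantize a float array to signed integers with stochastic rounding:
    # each value is rounded up with probability equal to its fractional
    # part, so the rounding error is zero-mean (unbiased) in expectation.
    rng = rng or np.random.default_rng()
    qmax = 2 ** (num_bits - 1) - 1                     # 127 for int8
    scale = max(float(np.abs(x).max()), 1e-12) / qmax  # symmetric scale
    scaled = x / scale
    lower = np.floor(scaled)
    frac = scaled - lower                              # in [0, 1)
    q = lower + (rng.random(x.shape) < frac)           # round up w.p. frac
    return np.clip(q, -qmax - 1, qmax).astype(np.int8), scale

# Averaged over many draws, dequantized values converge to the inputs,
# which is why stochastic rounding preserves accuracy during training.
x = np.array([0.1234, -0.5678, 0.9], dtype=np.float32)
q, s = stochastic_round_quantize(x)
print(q, q * s)  # int8 codes and their approximate reconstruction
```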