2020 · Preprint
DOI: 10.48550/arxiv.2006.06608

GNNAdvisor: An Adaptive and Efficient Runtime System for GNN Acceleration on GPUs

Abstract: As an emerging trend in graph-based deep learning, Graph Neural Networks (GNNs) have recently attracted significant research attention from various domains. However, existing GNN implementations fail to keep up with evolving GNN architectures, ever-increasing graph sizes, and growing node-embedding dimensionality, and thus suffer from unsatisfactory performance. To break this hurdle, we propose GNNAdvisor, an efficient runtime system to systematically accelerate GNN applications on GPUs. First, GNNA…

Cited by 7 publications (32 citation statements) · References 37 publications

Citation statements (ordered by relevance):
“…In response to the challenges of GNN computing, several works have surfaced that attempt to improve the performance and efficiency of GNNs either from a software perspective, i.e. adapting the operations to better match the capabilities of CPUs or GPUs [37]–[39]; or from a hardware perspective, i.e. designing custom processors tailored to the demands of GNNs [40]–[43].…”
Section: Study [Reference] (Year) Contributions
confidence: 99%
“…The acceleration of GNN workloads is an active area of research that distinguishes between software and hardware acceleration [2]. On the one hand, software acceleration for GNNs aims at exploiting the knowledge of the graph properties to better adapt the workload to the underlying hardware [4], [10], [14], [16], [20], [28], [34]–[37], [45]. This includes techniques such as intelligent partitioning [34], sparsity-aware workload management [37], vertex reordering [4], or the caching of partial aggregations to avoid redundant sums [16].…”
Section: Related Work
confidence: 99%
“…On the one hand, software acceleration for GNNs aims at exploiting the knowledge of the graph properties to better adapt the workload to the underlying hardware [4], [10], [14], [16], [20], [28], [34]–[37], [45]. This includes techniques such as intelligent partitioning [34], sparsity-aware workload management [37], vertex reordering [4], or the caching of partial aggregations to avoid redundant sums [16]. These techniques are either specific for GPUs, such as the dataflow constructs in Neugraph [28], or orthogonal to the dataflow approach.…”
Section: Related Work
confidence: 99%
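
The caching of partial aggregations mentioned in the statements above is easy to illustrate: when two vertices share a subset of neighbors, the shared partial sum can be computed once and reused. Below is a minimal NumPy sketch of that idea; the toy graph, the variable names, and the hand-picked shared neighbor set are illustrative assumptions, not the actual algorithm of [16].

    import numpy as np

    # Toy graph as adjacency lists; vertices 0 and 1 share neighbors {2, 3}.
    adj = {0: [2, 3, 4], 1: [2, 3, 5], 2: [0, 1], 3: [0, 1], 4: [0], 5: [1]}
    feat = np.random.rand(6, 8)  # 6 vertices, 8-dimensional embeddings

    # Naive sum aggregation: vertices 0 and 1 each recompute feat[2] + feat[3].
    naive = {v: feat[nbrs].sum(axis=0) for v, nbrs in adj.items()}

    # Cached partial aggregation: compute the shared partial sum once, reuse it.
    shared = feat[[2, 3]].sum(axis=0)  # partial result shared by vertices 0 and 1
    cached = {0: shared + feat[4], 1: shared + feat[5]}

    assert all(np.allclose(naive[v], cached[v]) for v in (0, 1))

In this toy case the shared sum saves one vector addition; on real graphs with large overlapping neighborhoods the same reuse eliminates many redundant sums.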
“…There exist several recent frameworks for computations on GNNs [35,42,44,45,51,56,80,85,89,92,98]. While we use PyTorch Geometric, motif prediction can be integrated into any of these frameworks to enhance their processing capabilities.…”
Section: Related Work
confidence: 99%
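
For context, PyTorch Geometric, the framework named in the statement above, exposes GNN layers such as GCNConv that operate directly on a COO edge list. The following is a minimal sketch of a single graph-convolution forward pass; the toy graph and feature dimensions are illustrative assumptions, unrelated to the cited motif-prediction work.

    import torch
    from torch_geometric.nn import GCNConv

    # Toy 3-node graph with bidirectional edges 0-1 and 1-2 (COO format).
    edge_index = torch.tensor([[0, 1, 1, 2],
                               [1, 0, 2, 1]], dtype=torch.long)
    x = torch.randn(3, 16)  # 16-dimensional input features per node

    conv = GCNConv(16, 8)   # one graph-convolution layer mapping 16 -> 8 dims
    out = conv(x, edge_index)
    print(out.shape)        # torch.Size([3, 8])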