Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining 2021
DOI: 10.1145/3447548.3467256

Scaling Up Graph Neural Networks Via Graph Coarsening

Abstract: Scalability of graph neural networks remains one of the major challenges in graph machine learning. Since the representation of a node is computed by recursively aggregating and transforming representation vectors of its neighboring nodes from previous layers, the receptive fields grow exponentially, which makes standard stochastic optimization techniques ineffective. Various approaches have been proposed to alleviate this issue, e.g., sampling-based methods and techniques based on pre-computation of graph fil…
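The recursive neighbor aggregation the abstract describes can be sketched with a minimal GCN-style propagation layer. This is a generic illustration of receptive-field growth, not the paper's coarsening method; the path graph, identity features, and identity weights are made up purely to trace influence:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN-style propagation step: symmetrically normalize the
    adjacency, aggregate neighbor features, apply a linear transform
    (nonlinearity omitted for clarity)."""
    deg = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    A_hat = D_inv_sqrt @ A @ D_inv_sqrt
    return A_hat @ H @ W

# A 4-node path graph 0-1-2-3 with self-loops. After L layers, a node's
# representation depends on its L-hop neighborhood -- the exponentially
# growing "receptive field" the abstract refers to.
A = np.eye(4)
for i in range(3):
    A[i, i + 1] = A[i + 1, i] = 1.0
H = np.eye(4)   # one-hot features so influence can be traced
W = np.eye(4)
H1 = gcn_layer(A, H, W)
H2 = gcn_layer(A, H1, W)
# After 2 layers, node 0 is influenced exactly by nodes within 2 hops.
print(np.nonzero(H2[0])[0])   # -> [0 1 2]
```

With identity features and weights, `H2` is simply the normalized adjacency squared, so its sparsity pattern reads off the 2-hop neighborhoods directly.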


Cited by 49 publications (17 citation statements)
References 16 publications
“…Furthermore, another possible solution to mitigate the cost of the eigendecomposition of S is to use graph coarsening to reduce the size of the graph, or graph partitioning to split a large graph into several small graphs, without losing much information. Several works have been proposed on this topic [3,12]. It is worth noting that graph partitioning is also used in the well-known work Cluster-GCN [5], which focuses on scaling up GCNs to large graphs with millions of nodes.…”
Section: More Discussion About Limitations
confidence: 99%
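The coarsening idea in the statement above reduces to collapsing clusters of nodes into super-nodes. A common formulation builds an n x m cluster-membership matrix P and takes the coarse adjacency as A_c = PᵀAP. The `coarsen` helper and the toy cycle graph below are illustrative assumptions, not code from the cited works:

```python
import numpy as np

def coarsen(A, clusters):
    """Collapse each cluster of nodes into one super-node.

    A: (n, n) adjacency; clusters: length-n array of cluster ids in [0, m).
    Returns the (m, m) coarse adjacency A_c = P^T A P, where P is the
    n x m cluster-membership matrix.
    """
    n = A.shape[0]
    m = clusters.max() + 1
    P = np.zeros((n, m))
    P[np.arange(n), clusters] = 1.0
    return P.T @ A @ P

# 4-node cycle 0-1-2-3-0, merged into two super-nodes: {0,1} and {2,3}.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
Ac = coarsen(A, np.array([0, 0, 1, 1]))
print(Ac)   # -> [[2. 2.] [2. 2.]]
```

Off-diagonal entries count inter-cluster edges; diagonal entries count intra-cluster edges (each counted twice, since A is symmetric) and act as self-loop weights on the coarse graph.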
“…The time complexity of a plain full eigendecomposition of S is O(n^3). In cases where the complexity of this preprocessing is of importance, we can consider using a truncated eigendecomposition or graph coarsening [12] to mitigate the cost. Regarding this, we provide more discussion in Appendix D.…”
Section: Comparison With Other GNNs With Infinite Depth
confidence: 99%
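The truncated eigendecomposition mentioned above keeps only the k leading eigenpairs of a symmetric matrix instead of all n. A minimal sketch follows; for simplicity it slices numpy's full `eigh` output, whereas in practice an iterative solver such as `scipy.sparse.linalg.eigsh` would compute only the k pairs and avoid the O(n^3) cost. The low-rank test matrix is an assumption for demonstration:

```python
import numpy as np

def truncated_eig(S, k):
    """Return the k eigenpairs of symmetric S with largest |eigenvalue|."""
    vals, vecs = np.linalg.eigh(S)            # full decomposition, ascending
    idx = np.argsort(np.abs(vals))[::-1][:k]  # keep k largest in magnitude
    return vals[idx], vecs[:, idx]

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
S = X @ X.T                     # rank-5 PSD matrix with n = 50
vals, vecs = truncated_eig(S, 5)
S_approx = (vecs * vals) @ vecs.T
print(np.allclose(S, S_approx)) # -> True: rank-5 S is recovered exactly
```

When S is (numerically) low-rank, the top-k reconstruction is exact; more generally it is the best rank-k approximation of S in Frobenius norm.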
“…The dataset sources are also supplemented for reproducibility. The benchmark datasets used for evaluation in this work are Disease [4], Twitch_en [22], CoauthorCS [23], and two benchmark citation networks, Citeseer [37] and Cora [9]. (The default values of α and β are 1.) In Disease, the label of a node indicates whether the node is infected by a disease, and the node features indicate susceptibility to the disease.…”
Section: Experiments 3.1 Experimental Setup
confidence: 99%
“…GNN-based methods formulate link prediction as a binary classification problem in which explicit node features can be incorporated [9,24,26]. These techniques are typically evaluated on a limited number of standard benchmark datasets such as collaboration networks [19] or citation networks [8]. However, many real-world scenarios, such as protein interaction networks [20], knowledge graphs [16], disease spreading networks [4], transport networks [1], and social networks [3], are usually sparse or hierarchical in nature, which calls for special attention [3,15,22].…”
Section: Base Station
confidence: 99%