Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining 2020
DOI: 10.1145/3394486.3403088
Policy-GNN: Aggregation Optimization for Graph Neural Networks

Abstract: Graph data are pervasive in many real-world applications. Recently, increasing attention has been paid to graph neural networks (GNNs), which aim to model local graph structures and capture hierarchical patterns by aggregating information from neighbors with stackable network modules. Motivated by the observation that different nodes often require different numbers of aggregation iterations to fully capture the structural information, in this paper we propose to explicitly sample diverse iterations of agg…
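As a toy illustration of the idea in the abstract that different nodes may benefit from different aggregation depths, the sketch below applies a per-node number of mean-aggregation hops. The graph, the features, and the degree-based policy are invented for illustration and are not the paper's actual method (Policy-GNN learns the policy with reinforcement learning):

```python
# Hypothetical toy graph (adjacency list) and scalar node features.
graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
features = {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0}

def aggregate_k_hops(node, k):
    """Mean-aggregate neighbor features for k iterations (one GNN layer per hop)."""
    h = dict(features)
    for _ in range(k):
        h = {v: sum(h[u] for u in graph[v]) / len(graph[v]) for v in graph}
    return h[node]

def policy(node):
    # Stand-in for a learned policy: aggregate deeper for low-degree nodes.
    return 1 if len(graph[node]) >= 2 else 3

# Each node gets an embedding computed with its own aggregation depth.
embeddings = {v: aggregate_k_hops(v, policy(v)) for v in graph}
```

The point of the sketch is only that the number of stacked aggregation steps is a per-node decision rather than a global hyperparameter.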

Cited by 67 publications (35 citation statements)
References 25 publications
“…RS involves generating random submodels from the search space. RS has been tested by several researchers [24,46,66] for Graph-NAS. Although this method is efficient, it is not widely adopted because its results are unreliable and often less optimal than those produced by other methods.…”
Section: Random Search
confidence: 99%
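The random-search procedure this statement describes can be sketched roughly as follows. The search-space dimensions and the stand-in `evaluate` function are illustrative assumptions, not taken from any cited paper (a real Graph-NAS run would train each sampled submodel and measure validation accuracy):

```python
import random

random.seed(0)

# Hypothetical Graph-NAS search space: each submodel is one choice per dimension.
search_space = {
    "aggregator": ["mean", "max", "sum"],
    "hidden_dim": [16, 32, 64],
    "num_layers": [1, 2, 3],
}

def sample_submodel():
    """Random search: draw an architecture uniformly from the search space."""
    return {k: random.choice(v) for k, v in search_space.items()}

def evaluate(model):
    # Stand-in for training + validation accuracy of the sampled submodel.
    return random.random()

best, best_score = None, -1.0
for _ in range(20):
    model = sample_submodel()
    score = evaluate(model)
    if score > best_score:
        best, best_score = model, score
```

The efficiency the statement mentions comes from the absence of any controller to train; the unreliability comes from the fact that nothing guides sampling toward promising regions of the space.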
“…However, this method entails using hundreds of GNN performance distributions and graph data characteristics to build a neural predictor. One can also use the buffer mechanism for graph representation learning [66].…”
Section: Performance Evaluation
confidence: 99%
“…Overview. To address these problems, we propose RSRL, a novel Recursive and Scalable Reinforcement Learning framework built on traditional reinforcement-learning-based approaches [21,51,63], which not only updates strategies through the learning environment but also uses its recursive structure to quickly and accurately meet the accuracy requirements of different relations. Figure 4 depicts the forest-based learning architecture.…”
Section: Similarity-aware Adaptive Neighbor Selector
confidence: 99%
“…This model uses a recurrent network to generate variable-length strings that describe the architectures of graph neural networks, and then trains the recurrent network with reinforcement learning to maximize the expected accuracy of the generated architectures. • Policy-GNN [51]: a meta-policy framework that adaptively learns an aggregation policy to sample diverse iterations of aggregations for different nodes. To accelerate learning, it also uses a buffer mechanism to enable batch training and a parameter-sharing mechanism to decrease the training cost.…”
Section: Multi-layer RL Module, Action Space Recursion, Inter-agg
confidence: 99%
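A rough, hypothetical sketch of the buffer-plus-policy idea described above: transitions are stored in a replay buffer and trained in batches, and the policy picks a number of aggregation layers per state. Tabular values stand in for Policy-GNN's actual deep Q-network, and the states, rewards, and hyperparameters are invented for illustration:

```python
import random
from collections import deque

random.seed(1)

buffer = deque(maxlen=1000)   # buffer mechanism: replay experience in batches
q_table = {}                  # (state, action) -> value; stand-in for a DQN
actions = [1, 2, 3]           # candidate numbers of aggregation layers

def select_action(state, eps=0.2):
    """Epsilon-greedy choice of aggregation depth for the given state."""
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: q_table.get((state, a), 0.0))

def train_batch(batch, lr=0.5):
    """Move each sampled Q-value toward its observed reward."""
    for state, action, reward in batch:
        q = q_table.get((state, action), 0.0)
        q_table[(state, action)] = q + lr * (reward - q)

# Simulated interaction: reward is 1 when the chosen depth matches a node's
# (hypothetical) ideal depth, which here is encoded as the state itself.
for step in range(500):
    state = random.choice(actions)
    action = select_action(state)
    reward = 1.0 if action == state else 0.0
    buffer.append((state, action, reward))
    if len(buffer) >= 32:
        train_batch(random.sample(list(buffer), 32))
```

Batch replay from the buffer is what amortizes the cost of policy training across many node-level decisions, which is the acceleration the statement refers to.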