2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00659

Rethinking Graph Neural Architecture Search from Message-passing

Abstract: Graph neural networks (GNNs) emerged recently as a standard toolkit for learning from data on graphs. Current GNN design efforts depend on immense human expertise to explore different message-passing mechanisms, and require manual enumeration to determine the proper message-passing depth. Inspired by the strong searching capability of neural architecture search (NAS) in CNNs, this paper proposes Graph Neural Architecture Search (GNAS) with a novel search space. The GNAS can automatically learn better ar…
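The two design axes named in the abstract, the message-passing mechanism and the message-passing depth, can be made concrete with a short sketch. The snippet below is a minimal plain-PyTorch illustration, not the paper's implementation; the class and parameter names are assumptions. It stacks a simple mean-aggregation layer depth times, and that depth is exactly the kind of choice GNAS-style methods search over instead of enumerating by hand.

# Minimal message-passing sketch (plain PyTorch, illustrative only).
# Each layer aggregates neighbor features and updates node states; the
# number of stacked layers is the message-passing depth.
import torch
import torch.nn as nn

class MeanMessagePassing(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, x, adj):
        # x: (N, dim) node features; adj: (N, N) dense adjacency matrix.
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1)
        msg = adj @ x / deg                      # mean over neighbors
        return torch.relu(self.update(torch.cat([x, msg], dim=-1)))

class StackedGNN(nn.Module):
    def __init__(self, dim, depth):
        super().__init__()
        self.layers = nn.ModuleList(MeanMessagePassing(dim) for _ in range(depth))

    def forward(self, x, adj):
        for layer in self.layers:
            x = layer(x, adj)
        return x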

Cited by 43 publications (19 citation statements)
References 29 publications
“…The majority of these methods focus on designing the aggregation layers in GNNs with different search algorithms. For example, GraphNAS [12], Auto-GNN [59], AutoGM [50], DSS [26] and [32] learn to design aggregation layers along diverse dimensions, such as the attention function, attention head number, embedding size, etc.; SANE [58], SNAG [57] and AutoGraph [24] additionally learn skip connections; GNAS [3] and Policy-GNN [22] learn to select the best message-passing layers. Apart from designing aggregation layers, RE-MPNN [19] additionally learns adaptive global pooling functions.…”
Section: Graph Neural Architecture Search
confidence: 99%
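The design dimensions listed in the statement above (aggregation attributes such as attention function, head number and embedding size, plus skip connections and message-passing depth) are usually expressed as a discrete search space. The sketch below is a hedged illustration of such a space; the option names and helpers are assumptions and are not taken from GraphNAS, SANE, or GNAS.

# Illustrative discrete GNN-NAS search space (names are assumed, not from any cited method).
import math
import random

SEARCH_SPACE = {
    "aggregation":     ["sum", "mean", "max", "attention"],
    "attention_heads": [1, 2, 4, 8],
    "embedding_size":  [32, 64, 128, 256],
    "skip_connection": ["none", "residual", "dense"],
    "mp_depth":        [2, 3, 4, 5, 6],
}

def sample_architecture(rng=random):
    # Draw one candidate, e.g. for a random-search or RL-controller baseline.
    return {name: rng.choice(options) for name, options in SEARCH_SPACE.items()}

def space_size():
    # Total number of discrete architectures in the space.
    return math.prod(len(options) for options in SEARCH_SPACE.values())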
“…Representative methods DARTS [27] and SNAS [45] use the Softmax and the Gumbel-Softmax functions as the relaxation function, respectively. Differentiable search algorithms are used in SANE [58], DSS [26] and GNAS [3] to relax the aggregation dimensions. However, it is difficult to relax the pooling operations because different candidate pooling operations generate different coarse graphs consisting of diverse nodes and edges.…”
Section: Graph Neural Architecture Search
confidence: 99%
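The relaxation described above can be sketched as a softmax-weighted mixture over candidate aggregation operations, with Gumbel-Softmax standing in for SNAS-style sampling. The code below is a minimal assumed implementation in plain PyTorch, not the code of any cited method.

# DARTS-style continuous relaxation over candidate aggregators (illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

CANDIDATES = [
    lambda x, adj: adj @ x,                                            # sum aggregation
    lambda x, adj: adj @ x / adj.sum(-1, keepdim=True).clamp(min=1),   # mean aggregation
    lambda x, adj: x,                                                  # identity / skip
]

class MixedAggregation(nn.Module):
    # A discrete choice is replaced by a mixture weighted by learnable logits alpha.
    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.zeros(len(CANDIDATES)))  # architecture parameters

    def forward(self, x, adj, gumbel=False, tau=1.0):
        if gumbel:
            w = F.gumbel_softmax(self.alpha, tau=tau)   # SNAS-style stochastic relaxation
        else:
            w = F.softmax(self.alpha, dim=0)            # DARTS-style deterministic relaxation
        return sum(wi * op(x, adj) for wi, op in zip(w, CANDIDATES))

    def discretize(self):
        # After search, keep only the strongest candidate operation.
        return CANDIDATES[int(self.alpha.argmax())]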
“…Considering search efficiency, differentiable algorithms are proposed to search architectures with gradient descent. They relax the discrete search space into a continuous one and then treat the architecture search problem as a bi-level optimization problem [3,28,29,59].…”
Section: Graph Neural Architecture Search
confidence: 99%
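A rough sketch of the alternating, first-order form of this bi-level optimization is given below: architecture parameters are updated on the validation split, network weights on the training split. The model, the alpha naming convention, the optimizers and the data loaders are placeholders assumed for the example.

# First-order bi-level search loop (illustrative; all objects are placeholders).
import torch

def search(model, train_loader, val_loader, loss_fn, epochs=50):
    arch_params   = [p for n, p in model.named_parameters() if "alpha" in n]
    weight_params = [p for n, p in model.named_parameters() if "alpha" not in n]
    w_opt = torch.optim.SGD(weight_params, lr=0.025, momentum=0.9)
    a_opt = torch.optim.Adam(arch_params, lr=3e-4, weight_decay=1e-3)

    for _ in range(epochs):
        for (x_tr, y_tr), (x_val, y_val) in zip(train_loader, val_loader):
            # Outer step: architecture parameters on the validation loss.
            a_opt.zero_grad()
            loss_fn(model(x_val), y_val).backward()
            a_opt.step()
            # Inner step: network weights on the training loss.
            w_opt.zero_grad()
            loss_fn(model(x_tr), y_tr).backward()
            w_opt.step()
    return model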
“…We provide performance comparisons of the GCNII, PNA and MixHop baselines used in our experiments and in PyG 3. As shown in Table 7, our baselines achieve considerable performance on top of the unified framework with the same evaluation stage, which is introduced in the following.…”
Section: B Details of Experiments, B.1 Baselines
confidence: 99%
“…Gabriele et al [2] propose Principal Neighbourhood Aggregation (PNA) by integrating multiple aggregators (e.g., mean, max, min) together via degree-scalers. Cai et al [1] propose Graph Neural Architecture Search (GNAS) to learn the optimal depth of message passing with max and sum neighbor aggregations.…”
Section: Introduction
confidence: 99%
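For context on the two designs quoted above, the following is a loose dense-adjacency sketch of the PNA idea: several neighbor aggregators (mean, max, min) are combined and modulated by degree scalers before a linear update. All names, the delta constant and the dense formulation are assumptions made for brevity; this is not the authors' implementation.

# PNA-like layer: 3 aggregators x 3 degree scalers, concatenated per node (illustrative).
import torch
import torch.nn as nn

class PNALikeLayer(nn.Module):
    def __init__(self, dim, delta=1.0):
        super().__init__()
        self.delta = delta                      # average log-degree normalizer (a constant in PNA)
        self.update = nn.Linear(9 * dim, dim)   # 3 aggregators x 3 scalers

    def forward(self, x, adj):
        n, d = x.shape
        deg = adj.sum(-1, keepdim=True).clamp(min=1)              # (N, 1)
        neigh = x.unsqueeze(0).expand(n, n, d)                    # (N, N, D) neighbor view
        edge = (adj > 0).unsqueeze(-1)                            # (N, N, 1) edge mask

        mean_agg = (neigh * edge).sum(1) / deg
        max_agg = torch.nan_to_num(neigh.masked_fill(~edge, float("-inf")).amax(1), neginf=0.0)
        min_agg = torch.nan_to_num(neigh.masked_fill(~edge, float("inf")).amin(1), posinf=0.0)
        aggs = torch.cat([mean_agg, max_agg, min_agg], dim=-1)    # (N, 3*D)

        log_deg = torch.log(deg + 1.0) / self.delta               # amplification scaler
        scalers = torch.cat([torch.ones_like(log_deg), log_deg, 1.0 / log_deg], dim=-1)  # (N, 3)
        mixed = (scalers.unsqueeze(-1) * aggs.unsqueeze(1)).flatten(1)  # (N, 9*D)
        return torch.relu(self.update(mixed))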