2022
DOI: 10.3389/fdata.2022.1029307

Auto-GNN: Neural architecture search of graph neural networks

Abstract: Graph neural networks (GNNs) have been widely used in various graph analysis tasks. As graph characteristics vary significantly across real-world systems, the architecture parameters must be tuned carefully to identify a suitable GNN for a given scenario. Neural architecture search (NAS) has shown its potential for discovering effective architectures for learning tasks in image and language modeling. However, existing NAS algorithms cannot be applied efficiently to the GNN search problem because…
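The search problem the abstract describes can be illustrated with a minimal random-search sketch over a GNN layer's design choices. The option lists and the scoring function below are hypothetical placeholders, not the paper's actual search space or objective; a real NAS loop would train each candidate GNN and score it by validation accuracy:

```python
import random

# Hypothetical, simplified GNN search space: each layer picks
# an aggregator, an activation, and a hidden dimension.
SEARCH_SPACE = {
    "aggregator": ["mean", "max", "sum"],
    "activation": ["relu", "tanh", "elu"],
    "hidden_dim": [16, 32, 64, 128],
}

def sample_architecture(num_layers=2):
    """Sample one candidate architecture: a list of per-layer choices."""
    return [
        {key: random.choice(options) for key, options in SEARCH_SPACE.items()}
        for _ in range(num_layers)
    ]

def evaluate(arch):
    """Placeholder score; a real NAS loop would train the GNN here
    and return its validation accuracy instead."""
    return sum(layer["hidden_dim"] for layer in arch)

def random_search(num_trials=20, seed=0):
    """Keep the best-scoring architecture seen over num_trials samples."""
    random.seed(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(num_trials):
        arch = sample_architecture()
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

best_arch, best_score = random_search()
```

Reinforcement-learning-based NAS, as used by Auto-GNN, replaces the uniform sampler above with a learned controller that is rewarded by the candidate's validation performance.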

Cited by 37 publications (8 citation statements)
References 36 publications
“…In this part, our model employs AutoGNN with Explicit Link Information [46] algorithm to construct edge feature engineering of the multi-source heterogeneous network. The AutoGNN model can automate the appropriate GNN architecture design for the given data [47] and introduce edge embedding in an explicit way. The edge feature engineering consists of the message passing phase and readout phase.…”
Section: Edge Feature Representation
confidence: 99%
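The two phases named in the quoted passage can be sketched in a few lines. This is a generic message-passing/readout skeleton in plain NumPy; the mean aggregation, tanh update, and sum readout are illustrative choices, not the cited model's actual design:

```python
import numpy as np

def message_passing(node_feats, edges, edge_feats, W):
    """One message-passing step: each node aggregates messages built from
    its neighbors' features and the connecting edge's features (explicit
    edge information), then updates via a shared weight matrix W."""
    num_nodes, _ = node_feats.shape
    agg = np.zeros_like(node_feats)
    count = np.zeros((num_nodes, 1))
    for (src, dst), e in zip(edges, edge_feats):
        agg[dst] += node_feats[src] + e   # message = neighbor + edge feature
        count[dst] += 1
    agg = agg / np.maximum(count, 1)      # mean aggregation over neighbors
    return np.tanh((node_feats + agg) @ W)

def readout(node_feats):
    """Readout phase: pool node embeddings into one graph-level embedding."""
    return node_feats.sum(axis=0)

# Tiny example: 3 nodes, 2 directed edges, 4-dim node and edge features.
rng = np.random.default_rng(0)
h = rng.normal(size=(3, 4))
edges = [(0, 1), (2, 1)]
e_feats = rng.normal(size=(2, 4))
W = rng.normal(size=(4, 4))
g = readout(message_passing(h, edges, e_feats, W))
print(g.shape)  # (4,)
```

Stacking several `message_passing` steps before the single `readout` gives the usual multi-layer GNN pipeline.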
“…Graph neural architecture search (GraphNAS) intends to automatically search for the most effective model architecture without human intervention (Qin et al. 2022b; Zhang et al. 2022b, 2023). Existing GraphNAS methods can be categorized into reinforcement learning-based methods (Gao et al. 2021b; Zhou et al. 2022), evolutionary algorithms (Nunes and Pappa 2020; Shi et al. 2022), and differentiable methods (Zhao et al. 2020a,b). In recent years, scholars have also explored how NAS performs under distribution shifts (Bai et al. 2021; Qin et al. 2022a).…”
Section: Related Work
confidence: 99%
“…It continuously generates descriptions of GNN architectures to find the optimal network architecture, using RL to maximize the expected accuracy. Similar to the architecture search in GraphNAS, Auto-GNN [63] additionally proposes a parameter-sharing mechanism that shares parameters across homogeneous architectures to reduce computation cost. However, these methods are not combined with the subgraph method, and their optimization is tightly coupled to specific datasets.…”
Section: Subgraph and RL-Based Approaches
confidence: 99%
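The parameter-sharing idea attributed to Auto-GNN in the quote above can be sketched as a weight cache keyed by layer configuration, so that candidate architectures with matching ("homogeneous") layers reuse parameters instead of re-initializing and retraining them. The keying scheme here is a deliberate simplification of the paper's mechanism:

```python
import numpy as np

class SharedWeightBank:
    """Cache of layer weights keyed by (aggregator, in_dim, out_dim).
    Candidate architectures whose layers match on this key share the same
    parameter tensor, avoiding training every candidate from scratch."""

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.bank = {}

    def get(self, aggregator, in_dim, out_dim):
        key = (aggregator, in_dim, out_dim)
        if key not in self.bank:
            # First candidate with this layer shape initializes the weights.
            self.bank[key] = self.rng.normal(size=(in_dim, out_dim))
        return self.bank[key]

bank = SharedWeightBank()
# Two candidate architectures that both contain a mean-aggregation
# 16 -> 32 layer retrieve the very same parameter tensor:
w1 = bank.get("mean", 16, 32)
w2 = bank.get("mean", 16, 32)
print(w1 is w2)  # True: parameters are shared across candidates
```

Gradient updates applied through one candidate then warm-start every later candidate that reuses the same key, which is the source of the computation savings the quote describes.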
“…To address these issues, state-of-the-art works [20,27,52,62,63] adopt reinforcement learning (RL) to search for the GNN architecture. However, such approaches sometimes lack generalizability: the effectiveness of the determined optimal GNN architecture is tightly bound to specific datasets, and the search space is usually huge, hence the low efficiency.…”
Section: Introduction
confidence: 99%