Graph Neural Networks: Foundations, Frontiers, and Applications 2022
DOI: 10.1007/978-981-16-6054-2_27

Graph Neural Networks in Urban Intelligence

Cited by 8 publications (5 citation statements) · References 0 publications
“…Neural network explainers are a crucial category of research tools designed to elucidate the predictions generated by neural network models. The goal of these explainers is to produce explanations that account for a model’s predictions [56, 57]. For example, GNNExplainer [45] identifies the most influential nodes and edges by extracting a local subgraph around the target node and training an additional explanatory module.…”
Section: Related Work
confidence: 99%
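The edge-masking idea attributed to GNNExplainer in the statement above can be sketched compactly. Below is a minimal, illustrative sketch of the core mechanism only, not the paper's exact objective: it assumes a trained PyTorch `model` that accepts an optional per-edge `edge_weight`, and it learns a soft edge mask that preserves the model's prediction for one node while staying small and near-binary. All hyperparameters are placeholders.

```python
import torch
import torch.nn.functional as F

def explain_node(model, x, edge_index, node_idx, epochs=200, lr=0.01,
                 mask_reg=0.005, entropy_reg=1.0):
    """Learn a soft edge mask that preserves the model's prediction for
    `node_idx` (GNNExplainer-style sketch; hyperparameters are illustrative)."""
    model.eval()
    # Target: the model's own prediction on the full, unmasked graph.
    with torch.no_grad():
        target = model(x, edge_index).argmax(dim=-1)[node_idx]

    # One learnable logit per edge; sigmoid gives a soft mask in (0, 1).
    edge_logits = torch.randn(edge_index.size(1), requires_grad=True)
    optimizer = torch.optim.Adam([edge_logits], lr=lr)

    for _ in range(epochs):
        optimizer.zero_grad()
        mask = edge_logits.sigmoid()
        # Assumption: the model accepts per-edge weights via `edge_weight`.
        out = model(x, edge_index, edge_weight=mask)
        # Keep the original prediction while making the mask small and crisp.
        pred_loss = F.cross_entropy(out[node_idx].unsqueeze(0),
                                    target.unsqueeze(0))
        size_loss = mask_reg * mask.sum()
        ent = -(mask * (mask + 1e-9).log()
                + (1 - mask) * (1 - mask + 1e-9).log())
        (pred_loss + size_loss + entropy_reg * ent.mean()).backward()
        optimizer.step()

    return edge_logits.sigmoid().detach()
```

Edges with the highest mask weights after training form the explanatory subgraph around `node_idx`.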
“…APPNP [31], JK [32], Geom-GCN [16], SimP-GCN [45], and CPGNN [18] aim to improve the feature-propagation scheme within the layers of the model. More recently, researchers have proposed making GNN models deeper [27, 29, 30]. However, deeper models suffer from over-smoothing: after many GNN layers are stacked, node features become indistinguishable from one another and the model’s performance drops.…”
Section: Related Work
confidence: 99%
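The over-smoothing described above is easy to reproduce numerically. The NumPy sketch below (random graph and distance metric chosen purely for illustration) applies the symmetric normalized adjacency repeatedly, with no weights or nonlinearities, and shows the mean pairwise distance between node representations shrinking as features collapse toward the dominant eigenvector:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.maximum(A, A.T)                      # undirected random graph
A_hat = A + np.eye(n)                       # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
P = D_inv_sqrt @ A_hat @ D_inv_sqrt         # symmetric normalized adjacency

H = rng.standard_normal((n, 16))            # random initial node features
for k in [1, 2, 4, 8, 16, 32]:
    Hk = np.linalg.matrix_power(P, k) @ H   # k propagation steps, no weights
    # Mean pairwise distance between node representations:
    d = np.linalg.norm(Hk[:, None, :] - Hk[None, :, :], axis=-1).mean()
    print(f"{k:2d} propagation steps: mean pairwise distance = {d:.4f}")
```

The distances decay with depth, which is the over-smoothing effect the statement describes, isolated from any learned transformation.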
“…GCNII [27] uses residual connections and identity mapping in its GNN layers to enable deeper networks. RevGNN [29] uses deep reversible architectures, and [30] uses noise regularisation to train deep GNN models. Recently, researchers have proposed new datasets that do not follow the homophily assumption of traditional GNN models.…”
Section: Related Work
confidence: 99%
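The residual connection and identity mapping credited to GCNII above follow the per-layer update H^{l+1} = σ(((1−α) Â H^l + α H^0)((1−β_l) I + β_l W^l)). The following is a minimal sketch of that rule, assuming a dense normalized adjacency for brevity; the α and β values are illustrative (in the paper, β_l decays with depth as λ/l):

```python
import torch
import torch.nn as nn

class GCNIILayer(nn.Module):
    """One GCNII-style layer: initial residual (alpha) plus identity
    mapping (beta). A sketch of the published update rule."""
    def __init__(self, dim, alpha=0.1, beta=0.5):
        super().__init__()
        self.weight = nn.Linear(dim, dim, bias=False)
        self.alpha, self.beta = alpha, beta

    def forward(self, P, h, h0):
        # P: normalized adjacency (dense here for brevity);
        # h0: the layer-0 features, reused as an initial residual.
        support = (1 - self.alpha) * (P @ h) + self.alpha * h0
        # Identity mapping: interpolate between support and its transform,
        # i.e. support @ ((1 - beta) * I + beta * W).
        return torch.relu((1 - self.beta) * support
                          + self.beta * self.weight(support))
```

Because each layer mixes in the untransformed input h0 and keeps the weight matrix close to the identity, stacking many such layers does not wash out node identities the way plain GCN layers do.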
“…While oversmoothing deteriorates the performance of GNNs, efforts have been made to preserve the identity of individual messages by modifying the message-passing scheme, such as introducing jump connections (Xu et al, 2018; Chen et al, 2020b), sampling neighboring nodes and edges (Rong et al, 2019; Feng et al, 2020), adding regularizations (Chen et al, 2020a; Zhou et al, 2020; Yang et al, 2021), and increasing the complexity of convolutional layers (Balcilar et al, 2021; Geerts et al, 2021; Bodnar et al, 2021). Other methods trade off graph smoothness against the fitness of the encoded features (Zhu et al, 2021) or postpone the onset of oversmoothing through mechanisms such as residual networks (Li et al, 2021a) and the diffusion scheme (Chamberlain et al, 2021; Zhao et al, 2021).…”
Section: Oversmoothness in Graph Representation
confidence: 99%
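Of the mitigations listed above, edge sampling is the simplest to illustrate. Below is a minimal DropEdge-style sketch (the function name and drop rate are illustrative): each training step propagates over a randomly thinned copy of the graph, which slows the feature mixing that drives oversmoothing:

```python
import torch

def drop_edge(edge_index, drop_rate=0.2, training=True):
    """Randomly drop a fraction of edges (DropEdge-style edge sampling).
    edge_index: [2, num_edges] tensor; returns a sampled edge_index."""
    if not training or drop_rate == 0.0:
        return edge_index
    num_edges = edge_index.size(1)
    keep = torch.rand(num_edges) >= drop_rate   # Bernoulli keep-mask per edge
    return edge_index[:, keep]

# During each training step, propagate over the sampled graph, e.g.:
# out = model(x, drop_edge(edge_index, 0.2, model.training))
```

Like dropout on features, the mask is resampled every step and disabled at evaluation time, so the full graph is used for inference.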