2022
DOI: 10.1609/aaai.v36i8.20864

Graph Pointer Neural Networks

Abstract: Graph Neural Networks (GNNs) have shown advantages in various graph-based applications. Most existing GNNs assume strong homophily of graph structure and apply permutation-invariant local aggregation of neighbors to learn a representation for each node. However, they fail to generalize to heterophilic graphs, where most neighboring nodes have different labels or features, and the relevant nodes are distant. Few recent studies attempt to address this problem by combining multiple hops of hidden representations …

Cited by 24 publications (15 citation statements) · References 26 publications
“…Graphs in real-world applications often exhibit heterophily, where nodes tend to connect to nodes of different classes or features, reflecting diverse relationships or interactions. Heterophily has received considerable attention in recent years [59,68], and existing heterophilic GNNs are generally designed from two perspectives: (1) adding new nodes to the neighborhood to augment the propagated message; (2) flexible aggregation of neighborhood messages. Multi-hop neighbors are explored in [1,22,51,72] to augment the propagated messages, and they tend to be more robust than one-hop neighbors alone.…”
Section: Heterophily in GNNs
confidence: 99%
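The first perspective described in this statement, augmenting the neighborhood with multi-hop neighbors, can be sketched as follows. This is an illustrative NumPy sketch, not code from any of the cited papers; the function name `multi_hop_aggregate` and the per-hop mean aggregation with feature concatenation are assumptions chosen for clarity:

```python
import numpy as np

def multi_hop_aggregate(A, X, num_hops=2):
    """Aggregate node features separately per hop, a common heterophily
    remedy: hop-k neighbors are nodes reachable in exactly k steps.
    Returns the concatenation [X, mean_hop1(X), ..., mean_hopK(X)]."""
    n = A.shape[0]
    reached = np.eye(n, dtype=bool)    # nodes reached in fewer than k hops
    frontier = np.eye(n, dtype=bool)   # nodes at exactly the previous hop
    outputs = [X]
    for _ in range(num_hops):
        nxt = (frontier.astype(float) @ A) > 0  # one more propagation step
        hop = nxt & ~reached                    # exactly-k-hop neighbors
        deg = hop.sum(1, keepdims=True).clip(min=1)
        outputs.append((hop.astype(float) @ X) / deg)  # mean over hop-k nbrs
        reached |= nxt
        frontier = hop
    return np.concatenate(outputs, axis=1)
```

Keeping each hop's aggregate separate (rather than mixing all hops) preserves the distinction between near and distant neighbors, which matters when immediate neighbors carry dissimilar labels.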
“…Graph neural networks (GNNs) have recently become an important tool for graph representation learning. GNNs [11,28,29,30,31] mostly follow a recursive neighborhood aggregation scheme, using graph structural information to constrain each node to aggregate the attribute vectors of its neighborhood and thus update its feature vector. After k aggregation iterations, a node representation is obtained that fuses the structural and node attribute information within the node's k-hop neighborhood.…”
Section: Preliminary and Related Work
confidence: 99%
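The recursive k-iteration aggregation scheme described above can be sketched minimally. The function name `aggregate_k_hops` and the row-normalized mean aggregator with self-loops are illustrative choices (roughly GCN-style), not the exact update rule of any particular cited model:

```python
import numpy as np

def aggregate_k_hops(A, X, k=2):
    """k rounds of mean-neighborhood aggregation with self-loops.
    After k iterations, each node's representation fuses information
    from its k-hop neighborhood, as described in the text."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    D_inv = 1.0 / A_hat.sum(axis=1, keepdims=True) # inverse degrees
    H = X
    for _ in range(k):
        H = D_inv * (A_hat @ H)                    # row-normalized propagation
    return H
```

Because each step is a convex combination over the closed neighborhood, repeated application smooths features toward neighborhood averages, which is precisely the homophily assumption the surrounding discussion questions.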
“…Besides, it also boosts learning to consider multi-hop neighbors (Zhu et al., 2020; Teru et al., 2020). On the other hand, dynamic high-order neighbor selection (Yang et al., 2021) and dynamic pointer links (Velickovic et al., 2020) illustrate the power of data-driven manipulations. Moreover, differentiable pooling yields consistent and significant performance improvements for end-to-end hierarchical graph representation learning (Ying et al., 2018; Zhang et al., 2019).…”
Section: Related Work
confidence: 99%
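The data-driven neighbor selection mentioned in this statement can be illustrated with a simple similarity-based top-k selection. The dot-product scoring and the `topk_neighbor_select` helper below are hypothetical stand-ins for a learned pointer mechanism, included only to make the idea concrete:

```python
import numpy as np

def topk_neighbor_select(X, center, candidates, k=2):
    """Score each candidate node by dot-product similarity to the center
    node, keep the top-k, and mean-aggregate their features.
    (Illustrative stand-in for learned, pointer-style selection.)"""
    scores = X[candidates] @ X[center]       # similarity to the center node
    order = np.argsort(-scores)[:k]          # indices of the k best scores
    chosen = [candidates[i] for i in order]
    return chosen, X[chosen].mean(axis=0)
```

In a trained model the scores would come from learned parameters rather than raw feature similarity, letting the network pick relevant distant nodes instead of relying on the fixed graph neighborhood.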