2021
DOI: 10.48550/arxiv.2110.07875
Preprint

Graph Neural Networks with Learnable Structural and Positional Representations

Abstract: Graph neural networks (GNNs) have become the standard learning architectures for graphs. GNNs have been applied to numerous domains ranging from quantum chemistry and recommender systems to knowledge graphs and natural language processing. A major issue with arbitrary graphs is the absence of canonical positional information of nodes, which decreases the representation power of GNNs to distinguish e.g. isomorphic nodes and other graph symmetries. An approach to tackle this issue is to introduce Positional Encoding…

Cited by 14 publications (21 citation statements).
References 29 publications (76 reference statements).
“…In Transformers [27], positional encodings are concatenated with the word embeddings as the input to the learning process without being involved in the layer-wise update, referred to as non-learnable positional encodings (NLPE). Similar to [30], in our work, we iteratively update positional encodings using a dedicated GNN such that positional encodings can be adjusted to the graph structure at hand. The result using learnable position encodings is denoted as PE.…”
Section: B. Quantitative Results (mentioning)
confidence: 99%
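
The statement above draws the distinction this paper builds on: positional encodings injected once at the input and never touched again (NLPE), versus positional encodings updated layer by layer alongside node features by a dedicated transform. Below is a minimal, hypothetical sketch of the latter idea, not the authors' implementation; the class name, the mean-aggregation rule, and the activations are illustrative assumptions.

```python
# Hypothetical sketch of layer-wise learnable positional encodings (PE):
# node features h and positional encodings p are both propagated over the
# graph, and p gets its own ("dedicated") learnable update per layer.
# A: dense (n x n) adjacency matrix; h: (n, h_dim); p: (n, p_dim).
import torch
import torch.nn as nn

class LSPELayer(nn.Module):  # illustrative name, not taken from the paper's code
    def __init__(self, h_dim, p_dim):
        super().__init__()
        self.W_h = nn.Linear(h_dim + p_dim, h_dim)  # feature update sees [h || p]
        self.W_p = nn.Linear(p_dim, p_dim)          # dedicated PE update

    def forward(self, A, h, p):
        deg = A.sum(dim=1, keepdim=True).clamp(min=1)   # degrees for mean aggregation
        agg_h = (A @ torch.cat([h, p], dim=1)) / deg    # neighbor mean of [h || p]
        agg_p = (A @ p) / deg                           # neighbor mean of p
        h_new = torch.relu(self.W_h(agg_h))
        p_new = torch.tanh(self.W_p(agg_p))             # PE adjusted to the graph at hand
        return h_new, p_new
```

A non-learnable baseline (NLPE), as described in the quote, would instead concatenate a fixed p to h once at the input and leave it out of every layer-wise update.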
“…Assuming the CNN predictions are mostly accurate (results in Table II), the correctly predicted branches can be used to provide canonical positional encodings because their locations are consistently defined (up, down, left, and right) according to the airway tree anatomy. Many existing positional encoding methods, e.g., Laplacian eigenvectors [29], random-walk encodings [30], and encodings using random anchors [16], provide non-canonical positional encodings because these methods operate on arbitrary graphs. By adding leaf branches, the anchors are distributed evenly in terms of the depth of the tree, preventing anchors from being selected only from the upper side of the tree.…”
Section: A. Airway Labeling Framework (mentioning)
confidence: 99%
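
The random-walk encodings cited above ([30], the paper indexed here) can be sketched as per-node return probabilities of k-step random walks. The snippet below is a hypothetical illustration only; the function name and the default number of steps K are assumptions.

```python
# Hypothetical sketch of random-walk positional encodings: for each node i,
# collect the probability of returning to i after k steps of a random walk,
# i.e. diag((D^-1 A)^k) for k = 1..K.
import numpy as np

def random_walk_pe(A, K=8):
    deg = A.sum(axis=1).astype(float)
    deg[deg == 0] = 1.0                      # guard isolated nodes
    RW = A / deg[:, None]                    # row-normalized random-walk matrix D^-1 A
    pe, M = [], np.eye(A.shape[0])
    for _ in range(K):
        M = M @ RW                           # k-step transition probabilities
        pe.append(np.diag(M).copy())         # return probability of each node
    return np.stack(pe, axis=1)              # shape: (num_nodes, K)
```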
“…GNNs typically employ a neighborhood aggregation strategy [25], [26], where the representations of nodes in the graph are iteratively updated by aggregating representations of their neighbors. Ultimately, a node's high-level representation captures structural attributes within its L-hop network neighborhood.…”
Section: B. Graph Neural Network (mentioning)
confidence: 99%
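
As a concrete illustration (assumed, not taken from the cited works) of the aggregation scheme just described, the sketch below runs L rounds of combining each node's vector with the mean of its neighbors', so the final row for a node reflects its L-hop network neighborhood.

```python
# Hypothetical sketch of L rounds of neighborhood aggregation.
# A: (n, n) adjacency matrix; X: (n, d) input node features;
# Ws: list of L weight matrices, one per layer/hop.
import numpy as np

def aggregate(A, X, Ws):
    deg = A.sum(axis=1, keepdims=True).astype(float)
    deg[deg == 0] = 1.0                                # avoid division by zero
    H = X
    for W in Ws:                                       # one aggregation round per hop
        H = np.maximum((H + (A @ H) / deg) @ W, 0.0)   # self + neighbor mean, then ReLU
    return H                                           # row i: L-hop representation of node i
```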
“…It plays a crucial role in improving model effectiveness. Studies [25], [47] have shown that PE representing the structural attributes of graphs is also essential for prediction tasks on graphs. However, finding such positional encodings for nodes on graphs is challenging due to the invariance of graphs to node permutation.…”
Section: A. Graph Positional Encoding (mentioning)
confidence: 99%
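
One standard (though, per the statements above, non-canonical) way to obtain such positional encodings is via Laplacian eigenvectors. The sketch below is an assumed illustration; the sign and basis ambiguity noted in the comment is one face of the difficulty raised in the quote.

```python
# Hypothetical sketch of Laplacian-eigenvector positional encodings:
# use the first k non-trivial eigenvectors of the normalized Laplacian as
# per-node coordinates. Note: eigenvectors are only defined up to sign
# (and up to a basis for repeated eigenvalues), so these PE are not canonical.
import numpy as np

def laplacian_pe(A, k=4):
    deg = A.sum(axis=1).astype(float)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    L = np.eye(A.shape[0]) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L)           # eigenvalues in ascending order
    return vecs[:, 1:k + 1]                  # drop the eigenvector for eigenvalue 0
```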