2022
DOI: 10.48550/arxiv.2203.00199
Preprint

Equivariant and Stable Positional Encoding for More Powerful Graph Neural Networks

Abstract: Graph neural networks (GNNs) have shown great advantages in many graph-based learning tasks but often fail to predict accurately for tasks defined on sets of nodes, such as link and motif prediction. Many recent works propose to address this problem by using random node features or node distance features. However, these approaches suffer from slow convergence, inaccurate prediction, or high complexity. In this work, we revisit GNNs that allow using positional features of nodes given by positional encoding…
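For intuition, here is a minimal sketch of the generic idea the abstract refers to: computing positional features for nodes (Laplacian eigenvectors in this sketch) and appending them to ordinary node features before running a GNN. The function name, toy graph, and dimensions are illustrative assumptions, not the paper's specific equivariant and stable construction.

import numpy as np

def laplacian_positional_encoding(adj: np.ndarray, k: int) -> np.ndarray:
    """Return the k smallest non-trivial eigenvectors of the normalized
    Laplacian as a (num_nodes, k) positional-feature matrix."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    lap = np.eye(adj.shape[0]) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    _, eigvecs = np.linalg.eigh(lap)      # eigenvalues in ascending order
    return eigvecs[:, 1:k + 1]            # drop the trivial eigenvector

# Toy usage: append positional features to plain node features.
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = np.ones((4, 3))                       # dummy node features
x_with_pe = np.concatenate([x, laplacian_positional_encoding(adj, k=2)], axis=1)
print(x_with_pe.shape)                    # (4, 5)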

Cited by 5 publications (5 citation statements)
References 17 publications
“…Equipped with the positional encoding, the graph transformers can go beyond 1-WL [95,96] and are further proven to be universal [96]. However, such positional encodings are not permutation-equivariant; thus, recent works attempt to design equivariant Laplacian positional encoding [144,145].…”
Section: Bevilacqua et al [100]
confidence: 99%
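The sketch below (my own toy example in NumPy, not code from [144,145]) illustrates one concrete reason raw Laplacian eigenvector encodings are problematic and why equivariant, stable designs are sought: an eigenvector and its negation are equally valid, so the same graph can produce two different encodings.

import numpy as np

adj = np.array([[0, 1, 0, 0],            # path graph on 4 nodes
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
lap = np.diag(adj.sum(axis=1)) - adj     # unnormalized Laplacian
eigvals, eigvecs = np.linalg.eigh(lap)

v = eigvecs[:, 1]                        # first non-trivial eigenvector
for candidate in (v, -v):                # both satisfy L v = lambda v
    ok = np.allclose(lap @ candidate, eigvals[1] * candidate)
    print(candidate.round(3), ok)        # two different encodings, both valid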

Universal Local Attractors on Graphs

Krasanakis, Papadopoulos, Kompatsiaris
2024
Preprint
“…To this end, the generic message-passing scheme has been shown to be universal over Turing computable functions [1], although this does not cover the case of non-computable functions or density over certain sets of non-computable functions. More recent breakthroughs have shown that the expressive power is not necessarily a property of architectures alone, because enriching the node feature space with positional encodings [21,22], like random node representations, can make MPNNs more expressive [23]. A result that we use in later experiments is that MPNNs enriched with non-trainable node representations can express any non-attributed graph functions while retaining equivariance in probability [24].…”
Section: Universal Approximation of AGFs
confidence: 99%
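A minimal sketch of the mechanism the quoted passage describes: concatenating non-trainable random node representations to the input features before a plain message-passing step. Helper names, the toy graph, and dimensions are illustrative assumptions rather than the exact setup of [23,24].

import numpy as np

def add_random_node_features(x: np.ndarray, dim: int, rng: np.random.Generator) -> np.ndarray:
    """Concatenate freshly drawn per-node random features to the feature matrix."""
    r = rng.standard_normal((x.shape[0], dim))
    return np.concatenate([x, r], axis=1)

def message_passing_layer(adj: np.ndarray, x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """One plain sum-aggregation message-passing step: relu((A + I) X W)."""
    return np.maximum((adj + np.eye(adj.shape[0])) @ x @ w, 0.0)

rng = np.random.default_rng(0)
adj = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)   # toy triangle graph
x = np.ones((3, 2))                      # identical features: plain message passing cannot distinguish the nodes
x_aug = add_random_node_features(x, dim=4, rng=rng)              # random features break the symmetry
w = rng.standard_normal((x_aug.shape[1], 8))
h = message_passing_layer(adj, x_aug, w)
print(h.shape)                           # (3, 8)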