2020
DOI: 10.48550/arxiv.2010.01777
Preprint

A Unified View on Graph Neural Networks as Graph Signal Denoising

Abstract: Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph structured data. A single GNN layer typically consists of a feature transformation and a feature aggregation operation. The former normally uses feed-forward networks to transform features, while the latter aggregates the transformed features over the graph. Numerous recent works have proposed GNN models with different designs in the aggregation operation. In this work, we establish mathematically that the aggregation p…
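To make the denoising view concrete, the formulation commonly associated with this line of work (a sketch in my own notation, not quoted from the truncated abstract) casts the aggregation step as approximately solving

\min_{F}\; \|F - X\|_F^2 + c\,\operatorname{tr}\!\left(F^{\top} L F\right),

where X holds the (transformed) node features, L is the normalized graph Laplacian, and c > 0 trades off fidelity to X against smoothness over the graph. A single gradient descent step from the initialization F = X with step size 1/(2c) gives

F \;\leftarrow\; X - \tfrac{1}{2c}\left(2c\,L X\right) \;=\; (I - L)\,X \;=\; \tilde{A}\,X,

i.e., one multiplication by the normalized adjacency \tilde{A} = I - L, which matches the familiar GCN-style aggregation.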

Cited by 12 publications (17 citation statements) | References 18 publications

Citation statements (ordered by relevance):
“…The aggregation process can usually be understood as feature smoothing [20,21,16,42]. Hence, several recent works claim [41,40,4], assume [12,35,38], or remark upon [1,22,14] GNN models' reliance on homophily or their unsuitability for capturing heterophily.…”
Section: Related Work (mentioning)
confidence: 99%
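As a minimal illustration of the "aggregation as feature smoothing" reading in the statement above (a generic sketch, not code from any of the cited works), one smoothing step replaces every node's features with a degree-normalized average over its self-looped neighborhood:

```python
import numpy as np

def smooth_features(adj: np.ndarray, feats: np.ndarray) -> np.ndarray:
    """One smoothing step: feats <- D^{-1/2} (A + I) D^{-1/2} feats,
    i.e. each node's features become a degree-normalized average over
    its neighborhood (including itself)."""
    a_hat = adj + np.eye(adj.shape[0])             # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))  # D^{-1/2} diagonal
    norm_adj = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return norm_adj @ feats
```

Repeating this step pulls the features of connected nodes toward each other, which is why smoothing-style aggregation is often argued to suit homophilous graphs better than heterophilous ones.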
“…Recently there has been a surge of interest in GNN architectures with layers defined with respect to the minimization of a principled class of graph-regularized energy functions (Klicpera et al., 2018; Ma et al., 2020; Pan et al., 2021; Yang et al., 2021; Zhang et al., 2020; Zhu et al., 2021). In this context, the basic idea is to associate each descent step along an optimization trajectory (e.g., a gradient descent step, power iteration, or related) with a GNN layer, such that, in aggregate, the forward GNN pass can be viewed as minimization of the original energy.…”
Section: Graph-aware Propagation Layers Inspired by Gradient Descent (mentioning)
confidence: 99%
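The correspondence described above can be sketched in a few lines (a hypothetical illustration under my own choice of energy and step size, not the exact construction of any cited paper): unrolling gradient descent on a graph-regularized energy yields a stack of propagation layers.

```python
import numpy as np

def propagation_layers(norm_lap: np.ndarray, x: np.ndarray,
                       num_layers: int = 4, lam: float = 1.0,
                       step: float = 0.25) -> np.ndarray:
    """Unroll gradient descent on E(F) = ||F - X||_F^2 + lam * tr(F^T L F).
    Each descent step plays the role of one graph-aware propagation layer,
    so the forward pass as a whole descends the original energy."""
    f = x.copy()
    for _ in range(num_layers):
        grad = 2.0 * (f - x) + 2.0 * lam * (norm_lap @ f)  # dE/dF for symmetric L
        f = f - step * grad
    return f
```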
“…Additionally, f^{(k)}(X; θ)_i is the i-th row of f^{(k)}(X; θ), m < n is the number of labeled nodes, and D is some discriminator function, e.g., cross-entropy for classification or squared error for regression. We may then optimize this loss with respect to θ. In prior work (Klicpera et al., 2018; Ma et al., 2020; Pan et al., 2021; Yang et al., 2021; Zhang et al., 2020; Zhu et al., 2021), this type of bilevel optimization framework has been adopted to either unify and explain existing GNN models, or motivate alternatives by varying the structure of P^{(k)}. However, in all cases to date that we are aware of, it has been assumed that f(X; θ) is differentiable, typically either a linear function or an MLP.…”
Section: From Graph-aware Propagation to Bilevel Optimization (mentioning)
confidence: 99%
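Written out, the bilevel structure this excerpt describes looks roughly as follows (a sketch; the node labels y_i and the exact lower-level energy are my assumptions, not taken from the cited work):

\min_{\theta}\; \frac{1}{m}\sum_{i=1}^{m} D\!\left(f^{(k)}(X;\theta)_i,\, y_i\right)
\quad\text{with}\quad
f^{(k)}(X;\theta) = P^{(k)}\!\left(f(X;\theta)\right),

i.e., the upper level fits the m labeled nodes through the discriminator D, while the lower-level operator P^{(k)} applies k graph-aware descent steps (as in the energy-minimization view above) to the base features f(X; θ).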
“…A comprehensive overview of GNNs can be found in recent surveys [50,59] and books [1]. In addition, some works have emerged that further explore the rationale behind GNNs; for example, Ma et al. [31] propose that most existing GNNs can be unified as graph signal denoising. Beyond the aforementioned work, which mainly focuses on the graph convolution operation, numerous works also target the graph pooling operation, which summarizes a graph-level representation from node representations and plays an essential role in graph representation learning.…”
Section: Graph Neural Network (mentioning)
confidence: 99%
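As a minimal illustration of the pooling operation mentioned above (a generic sketch, not tied to any specific cited method), mean pooling summarizes node representations into one graph-level vector per graph in a batch:

```python
import numpy as np

def mean_pool(node_feats: np.ndarray, graph_ids: np.ndarray) -> np.ndarray:
    """Mean-pool node representations into one vector per graph.
    node_feats: [num_nodes, dim] floats; graph_ids: [num_nodes] ints
    assigning each node to a graph."""
    num_graphs = int(graph_ids.max()) + 1
    sums = np.zeros((num_graphs, node_feats.shape[1]))
    counts = np.zeros(num_graphs)
    np.add.at(sums, graph_ids, node_feats)   # scatter-add features per graph
    np.add.at(counts, graph_ids, 1.0)        # node counts per graph
    return sums / counts[:, None]
```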