2019
DOI: 10.48550/arxiv.1905.00067
Preprint

MixHop: Higher-Order Graph Convolutional Architectures via Sparsified Neighborhood Mixing

Abstract: Existing popular methods for semi-supervised learning with Graph Neural Networks (such as the Graph Convolutional Network) provably cannot learn a general class of neighborhood mixing relationships. To address this weakness, we propose a new model, MixHop, that can learn these relationships, including difference operators, by repeatedly mixing feature representations of neighbors at various distances. MixHop requires no additional memory or computational complexity, and outperforms on challenging baselines. In…
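The mixing operation the abstract describes amounts to combining powers of the normalized adjacency matrix applied to the node features, with one learned weight matrix per power. Below is a minimal NumPy sketch of one such higher-order layer; the function names, the chosen powers (0, 1, 2), and the per-power ReLU are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def normalize_adjacency(A):
    """GCN-style symmetric normalization with self-loops: D^-1/2 (A + I) D^-1/2."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def mixhop_layer(A_norm, H, weights, powers=(0, 1, 2)):
    """One higher-order layer: concatenate A_norm^j @ H @ W_j over each power j."""
    outputs = []
    A_pow = np.eye(A_norm.shape[0])  # A_norm^0 = I, i.e. the node's own features
    for j in range(max(powers) + 1):
        if j in powers:
            outputs.append(np.maximum(A_pow @ H @ weights[j], 0.0))  # ReLU
        A_pow = A_pow @ A_norm  # advance to the next adjacency power
    return np.concatenate(outputs, axis=1)

# Toy usage: a 3-node path graph, 4 input features, 8 output features per power.
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
H = np.random.randn(3, 4)
W = {j: np.random.randn(4, 8) for j in (0, 1, 2)}
out = mixhop_layer(normalize_adjacency(A), H, W)  # shape (3, 24)
```

Because power 0 and power 1 terms enter with separate weights, the layer can represent differences between a node's own features and its neighborhood average, the "difference operators" the abstract refers to.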

Cited by 24 publications (54 citation statements)
References 9 publications
“…To improve GNN performance while increasing model depth, various layer architectures have been proposed. AS-GCN [18], DeepGCN [28], JK-net [50], MixHop [1], Snowball [33], DAGNN [31] and GCNII [34] all include some variant of residual connection, either across multiple layers or within a single layer. In principle, such architectures can also benefit the feature propagation of a deep SHADOW-GNN, since their design does not rely on a specific neighborhood (e.g., L-hop).…”
Section: Related Work
confidence: 99%
“…To address the expressivity challenge, most remedies focus on neural architecture exploration: [44,12,49,29] propose more expressive aggregation functions for propagating neighbor features. [50,28,18,34,1,33,31] use residual-style design components to construct flexible and dynamic receptive fields. Among them, [50,28,18] use skip-connections across multiple GNN layers, and [34,1,33,31] encourage multi-hop message passing within each single layer.…”
Section: Introduction
confidence: 99%
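The quote above distinguishes two residual-style families: multi-hop propagation within a single layer (the MixHop-style sketch after the abstract) and skip-connections across layers. For contrast, here is a minimal sketch of the cross-layer family, under the simplifying assumptions of plain GCN propagation and a jumping-knowledge-style aggregation by concatenation.

```python
import numpy as np

def gcn_with_jumping_knowledge(A_norm, H, weights):
    """Skip-connections ACROSS layers: every layer's output is kept and
    aggregated at the end (here by concatenation, one JK-net variant)."""
    layer_outputs = [H]                      # layer 0: the raw input features
    for W in weights:                        # one weight matrix per layer
        H = np.maximum(A_norm @ H @ W, 0.0)  # plain GCN propagation + ReLU
        layer_outputs.append(H)
    # The final representation mixes all receptive-field sizes at once,
    # instead of exposing only the deepest layer's output.
    return np.concatenate(layer_outputs, axis=1)

# Toy usage (A_norm built as in the normalize_adjacency sketch above).
A_hat = np.array([[1., 1.], [1., 1.]])       # 2-node graph with self-loops
d = 1.0 / np.sqrt(A_hat.sum(axis=1))
A_norm = A_hat * d[:, None] * d[None, :]
out = gcn_with_jumping_knowledge(A_norm, np.random.randn(2, 4),
                                 [np.random.randn(4, 4) for _ in range(3)])
# out.shape == (2, 16): input features plus three layers of width 4
```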
“…The decrease of d may first reduce the irrelevant information from the long-range context, thus reducing the pose estimation error. When d takes the extreme value 1/16, the useful information may also be substantially squeezed, leading to a performance drop. Impact of S and L. As shown in Tab.…”
Section: Ablation Study
confidence: 99%
“…To tackle this problem, geometric dependencies are incorporated into the network in [7,28,39,46], which significantly improves prediction accuracy. As the articulated human body can be naturally modeled as a graph, with the recent development of graph neural networks (GNNs) [11,13,1,41,47], various GNN-based methods [49,6,20,2,51] have been proposed in the literature for 2D-to-3D pose estimation.…”
Section: Introduction
confidence: 99%