2021
DOI: 10.48550/arxiv.2106.08541
Preprint

Distilling Self-Knowledge From Contrastive Links to Classify Graph Nodes Without Passing Messages

Abstract: Nowadays, Graph Neural Networks (GNNs) following the message-passing paradigm have become the dominant way to learn on graph data. Models in this paradigm must spend extra space to look up adjacent nodes in adjacency matrices and extra time to aggregate multiple messages from adjacent nodes. To address this issue, we develop a method called LinkDist that distils self-knowledge from connected node pairs into a Multi-Layer Perceptron (MLP) without the need to aggregate messages. Experiments with 8 real-world da…
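The abstract describes distilling knowledge across edges into a plain MLP so that inference needs no neighbour aggregation. The snippet below is a minimal PyTorch sketch of that idea, assuming a standard transductive node-classification setup; every name in it (make_mlp, train_step, edge_index, the weight alpha) is illustrative rather than taken from the paper's code, and the distillation term is one plausible reading: each node's prediction is pushed toward its neighbour's soft prediction over the given edges, so the adjacency is consulted only during training.

```python
# Hedged sketch: link-based self-distillation into an MLP.
# Assumed setup: x [N, F] node features, y [N] labels, labeled_mask [N] bool,
# edge_index [2, E] connected node pairs. Requires PyTorch >= 1.10
# (cross_entropy with soft probability targets).
import torch
import torch.nn.functional as F

def make_mlp(in_dim, hidden, n_classes):
    return torch.nn.Sequential(
        torch.nn.Linear(in_dim, hidden),
        torch.nn.ReLU(),
        torch.nn.Linear(hidden, n_classes),
    )

def train_step(mlp, optimizer, x, y, labeled_mask, edge_index, alpha=0.5):
    optimizer.zero_grad()
    logits = mlp(x)  # plain per-node forward pass; no adjacency lookup
    # Supervised term on the labeled nodes.
    loss = F.cross_entropy(logits[labeled_mask], y[labeled_mask])
    # Link-distillation term: an endpoint's (detached) soft prediction
    # supervises the prediction of the node it is connected to.
    src, dst = edge_index
    soft_targets = F.softmax(logits[dst], dim=-1).detach()
    loss = loss + alpha * F.cross_entropy(logits[src], soft_targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```

At test time prediction is just `mlp(x)`, which is where the space and time savings claimed in the abstract come from: no adjacency matrix is stored and no messages are aggregated.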

Cited by 5 publications (9 citation statements) | References 12 publications
“…
Method           PubMed    Reddit    Ogbn-products
GCN [15]         87.70%    93.3%*    75.64%*
GAT [24]         86.80%    −         −
LinkDist [17]    89.58%*   −         −
GraphSAINT [29]  88.50%    96.6%*    79.08%*
CAGNET [23]      87.60%    93.73%    75.36%
CES-GCN [13]     N/A       92.4%*    N/A
BDS-GCN [25]     N/A       97.07%*   79.44%*
PPSGCN           90.10%    96.05%    79.15%
…”
Section: Methods (mentioning)
confidence: 99%
“…
• GAT [24], which introduces the attention mechanism to graph neural networks.
• LinkDist [17], which substitutes message-passing neighbour aggregation with an MLP in graph neural networks.
• GraphSAINT [29], which proposes an inductive GCN training method with subgraph sampling and normalization.
…”
Section: Experimental Set-up (mentioning)
confidence: 99%
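The LinkDist bullet above hinges on one structural difference, made concrete below: a message-passing layer must gather neighbour states through the adjacency structure at inference time, while an MLP layer transforms each node's own features independently. This is an illustrative PyTorch contrast under assumed shapes (h is [N, d], adj is a sparse normalized adjacency), not code from any of the cited papers.

```python
# Illustrative contrast: graph-dependent vs. graph-free inference step.
import torch

def message_passing_layer(h, adj, weight):
    # GCN-style step: neighbour aggregation via a sparse matmul over the
    # adjacency, then a linear transform. The graph is required at inference.
    return torch.relu(torch.sparse.mm(adj, h) @ weight)

def mlp_layer(h, weight):
    # MLP step: per-node transform only. The graph is never touched, so
    # inference cost does not depend on node degree or edge count.
    return torch.relu(h @ weight)
```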
“…[45] The LinkDist series (LinkDistMLP, CoLinkDist, and LinkDist) extracts useful features by distilling self-knowledge from connected node pairs. [46] 3ference analyzes the transition patterns of node labels on the graph. [47] These SOTA approaches achieve improved performance.…”
Section: Comparison With More SOTA Approaches (mentioning)
confidence: 99%
“…For example, Graph-MLP (Hu et al, 2021) designs a neighborhood contrastive loss to bridge the gap between GNNs and MLPs by implicitly utilizing the adjacency information. Instead, LinkDist (Luo et al, 2021) directly distills knowledge from connected node pairs into MLPs without message passing. Despite their great progress, these methods still cannot match state-of-the-art GNNs in classification performance because they do not model the graph topology.…”
Section: Related Work (mentioning)
confidence: 99%
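The distinction this quote draws, implicit adjacency via a contrastive loss (Graph-MLP) versus explicit distillation across node pairs (LinkDist), can be grounded with a sketch. Below is a hedged PyTorch rendering of a neighbourhood contrastive loss in the spirit attributed to Graph-MLP: connected nodes act as positives and all other nodes in the batch as negatives. The temperature tau, the cosine similarity, and the dense adjacency input are assumptions for illustration, not details taken from the cited paper.

```python
# Hedged sketch: neighbourhood contrastive loss (Graph-MLP-style).
import torch
import torch.nn.functional as F

def neighborhood_contrastive_loss(z, adj, tau=0.5):
    """z: [N, d] node embeddings; adj: [N, N] dense 0/1 adjacency."""
    z = F.normalize(z, dim=-1)
    sim = torch.exp(z @ z.t() / tau)          # pairwise similarity scores
    eye = torch.eye(z.size(0), device=z.device)
    sim = sim * (1.0 - eye)                   # exclude self-similarity
    pos = (sim * adj).sum(dim=-1)             # neighbours as positive pairs
    denom = sim.sum(dim=-1)                   # all other nodes as negatives
    keep = adj.sum(dim=-1) > 0                # skip isolated nodes
    return -torch.log(pos[keep] / denom[keep]).mean()
```

Training would add this term, suitably weighted, to the usual cross-entropy on labeled nodes, so the MLP absorbs adjacency information at training time while inference stays graph-free.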
“…As illustrated in Fig. 1(c), MLP-based models such as Graph-MLP (Hu et al, 2021) and LinkDist (Luo et al, 2021) are faster at inference but perform much worse than GNNs. There are two main branches of existing approaches that connect these two worlds.…”
Section: Introduction (mentioning)
confidence: 99%