2022
DOI: 10.1609/aaai.v36i8.20875
SAIL: Self-Augmented Graph Contrastive Learning

Abstract: This paper studies learning node representations with graph neural networks (GNNs) in the unsupervised setting. Specifically, we derive a theoretical analysis and provide an empirical demonstration of the unstable performance of GNNs across different graph datasets when the supervision signals are not appropriately defined. The performance of GNNs depends on both node feature smoothness and the locality of the graph structure. To smooth the discrepancy between node proximity measured by graph topology and by node features…
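The abstract is truncated here, but the title and the visible text indicate a node-level graph contrastive objective. Below is a minimal, hedged sketch of an InfoNCE-style node contrastive loss in PyTorch; it is illustrative only and not the exact SAIL objective, and the temperature, normalization, and positive-pair definition are assumptions.

```python
# Minimal sketch of an InfoNCE-style node contrastive loss (illustrative only;
# NOT the exact SAIL objective). Assumes two embedding views of the same nodes.
import torch
import torch.nn.functional as F

def node_infonce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """z1, z2: [N, d] embeddings of the same N nodes under two views."""
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau            # [N, N] scaled cosine similarities
    labels = torch.arange(z1.size(0))     # node i in view 1 matches node i in view 2
    return F.cross_entropy(logits, labels)

# Usage: z1, z2 would come from a GNN encoder applied to two views of the graph.
loss = node_infonce(torch.randn(8, 16), torch.randn(8, 16))
```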

Cited by 22 publications (9 citation statements) · References 7 publications
“…This way, it alleviates the impact of false-negative samples. SAIL [29] utilized self-distillation to maintain distribution consistency between low-layer node embeddings and high-layer node features and alleviate the problem of smoothness. The idea behind AFGRL [30] is that augmentation on graphs is difficult to design.…”
Section: Deep Contrastive Graph Clustering
confidence: 99%
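The statement above describes SAIL's self-distillation at a high level. As a hedged illustration, here is a minimal PyTorch sketch that aligns the pairwise-similarity distribution of shallow (low-layer) embeddings with that of deep (high-layer) features via KL divergence; the softmax-over-similarities formulation and the temperature are assumptions, not SAIL's exact loss.

```python
# Hedged sketch: match the neighborhood-similarity distribution of low-layer
# embeddings to that of high-layer features with KL divergence. The exact SAIL
# formulation may differ; the softmax over cosine similarities is an assumption.
import torch
import torch.nn.functional as F

def similarity_distribution(h: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Row-wise softmax over cosine similarities between all node pairs: [N, N]."""
    h = F.normalize(h, dim=-1)
    return F.softmax(h @ h.t() / tau, dim=-1)

def self_distill_kl(h_low: torch.Tensor, h_high: torch.Tensor) -> torch.Tensor:
    p_teacher = similarity_distribution(h_high).detach()             # deep features act as teacher
    log_q_student = similarity_distribution(h_low).clamp_min(1e-12).log()
    return F.kl_div(log_q_student, p_teacher, reduction="batchmean")

loss = self_distill_kl(torch.randn(8, 16), torch.randn(8, 32))
```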
“…The differences between each SKD method are detailed in TABLE 4. For instance, SAIL [78], GNN-SD [79], and SDSS [80] use KL divergence for output layer, middle layer, and constructed graph knowledge, respectively. LinkDist [76] applies GKD using MSE distance for node classification and model compression.…”
Section: Self-Knowledge Distillation Based Graph-Based Knowledge Distillation
confidence: 99%
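For concreteness, the two distance choices mentioned in the statement above can be sketched as follows; the function names and the temperature are illustrative assumptions, not the cited papers' actual APIs.

```python
# Sketch of the two distances mentioned above: KL divergence on softened output
# distributions (output-layer knowledge) vs. MSE on paired representations
# (as in MSE-based GKD). Names and temperature are illustrative assumptions.
import torch
import torch.nn.functional as F

def output_layer_kl(student_logits, teacher_logits, T: float = 2.0) -> torch.Tensor:
    """KL between temperature-softened class distributions."""
    log_q = F.log_softmax(student_logits / T, dim=-1)
    p = F.softmax(teacher_logits / T, dim=-1)
    return F.kl_div(log_q, p, reduction="batchmean") * (T * T)

def representation_mse(student_feats, teacher_feats) -> torch.Tensor:
    """MSE between paired student/teacher representations."""
    return F.mse_loss(student_feats, teacher_feats.detach())
```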
“…Therefore, researchers try to explore the rich information contained in the convolutional layers of GNNs, hoping to mine knowledge that better expresses node features. The representative methods include GNN-SD [79] and SAIL [78].…”
Section: Middle Layer Knowledge
confidence: 99%
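As a hedged sketch of how middle-layer knowledge can be exposed for such methods, the toy GNN below returns every layer's node embeddings from a single forward pass; it uses a dense normalized adjacency for simplicity and is not the actual GNN-SD or SAIL architecture.

```python
# Toy GNN (dense normalized adjacency, illustrative only) whose forward pass
# returns every layer's node embeddings, so low- and middle-layer knowledge is
# available to a self-distillation loss. Not the actual GNN-SD/SAIL architecture.
import torch
import torch.nn as nn

class TinyGCN(nn.Module):
    def __init__(self, dims=(32, 16, 8)):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.Linear(dims[i], dims[i + 1]) for i in range(len(dims) - 1)]
        )

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> list:
        """x: [N, d_in] node features; adj_norm: [N, N] normalized adjacency."""
        hidden, h = [], x
        for layer in self.layers:
            h = torch.relu(layer(adj_norm @ h))  # propagate, then transform
            hidden.append(h)                     # keep each layer's embeddings
        return hidden                            # low-layer ... high-layer knowledge
```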
“…One needs to ensure that paired features are similar both in modeling capacity and relevance to the output. Most research on feature-based distillation on graphs has so far focused on models that only have one type of (scalar) features in single-output classification tasks [26,27,28,29], thereby reducing the problem to the selection of layers to pair across the student and the teacher. This is often further simplified by utilizing models of the same architecture.…”
Section: Knowledge Distillation in Molecular GNNs
confidence: 99%
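Under the simplification described above (student and teacher sharing one architecture, with features paired layer-by-layer), a feature-based distillation term can be sketched as below; the 1:1 pairing and the MSE distance are assumptions for illustration.

```python
# Hedged sketch of layer-paired feature distillation with a shared architecture:
# pair each student layer with the corresponding teacher layer and penalize the
# mismatch. The 1:1 pairing and MSE distance are illustrative assumptions.
import torch
import torch.nn.functional as F

def paired_feature_loss(student_layers, teacher_layers) -> torch.Tensor:
    """Both arguments: equal-length lists of [N, d_l] feature tensors per layer."""
    assert len(student_layers) == len(teacher_layers), "layers must be paired 1:1"
    losses = [F.mse_loss(s, t.detach()) for s, t in zip(student_layers, teacher_layers)]
    return torch.stack(losses).mean()
```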