2021
DOI: 10.48550/arxiv.2112.02472
Preprint

Augmentation-Free Self-Supervised Learning on Graphs

Abstract: Inspired by the recent success of self-supervised methods applied to images, self-supervised learning on graph-structured data has seen rapid growth, especially centered on augmentation-based contrastive methods. However, we argue that without carefully designed augmentation techniques, augmentations on graphs may behave arbitrarily in that the underlying semantics of graphs can drastically change. As a consequence, the performance of existing augmentation-based methods is highly dependent on the choice of augme…

Cited by 3 publications (9 citation statements)
References 31 publications
“…2) Graph diffusion achieves comparable performance on the CORA and AMAP datasets but cannot match ours on the other datasets. This indicates that graph diffusion might change the underlying semantics of graphs [17]. Overall, extensive experiments have demonstrated the effectiveness of our proposed data augmentation method.…”
Section: Ablation Studies
confidence: 72%
“…In summary, we construct two augmented views Z^{v1} and Z^{v2} by designing parameter-unshared encoders and corrupting the node embeddings directly instead of introducing complex operations on graphs, thus improving the training efficiency (see Section 4.4). Besides, recent works [17, 28, 32] have indicated that complex data augmentations over graphs, like edge adding, edge dropping, and graph diffusion, could lead to semantic drift. A similar conclusion is verified through experiments in Section 4.5.2.…”
Section: Structural Contrastive Module
confidence: 99%
“…However, due to the natural complexity of graphs, the aforementioned SSL-based models depend heavily on the data augmentation scheme [11], and [18] has shown that there is no universally outperforming data augmentation scheme for graphs. AFGRL [11] first proposes to generate positive pairs based on the local structural information and the global semantics of graphs instead of data augmentation, so the structure of the original graph is not destroyed and its semantics are preserved. Simultaneously, SimGCL [9] discards augmentation and creates contrastive positives by adding uniform noise to the embedding space.…”
Section: Augmentations On Graphs
confidence: 99%
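The SimGCL-style view creation mentioned in the snippet above can be sketched as follows: instead of perturbing the graph structure, a second contrastive view is produced by adding scaled uniform noise directly to the node embeddings. This is a minimal NumPy sketch, assuming embeddings are rows of a matrix; the function name and the noise magnitude `eps` are illustrative choices, not values from the cited paper.

```python
import numpy as np

def perturb_embeddings(z, eps=0.1, rng=None):
    """Create a contrastive view by adding scaled uniform noise to node
    embeddings (SimGCL-style sketch; `eps` is an assumed magnitude).

    z: (num_nodes, dim) array of node embeddings.
    Returns an array of the same shape, where each row is displaced by
    exactly `eps` in Euclidean norm, with the noise sign-aligned to z.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(-1.0, 1.0, size=z.shape)
    # Normalize the noise per node to unit length, then align its sign
    # with the embedding so the perturbation stays in the same orthant.
    unit = np.abs(u / np.linalg.norm(u, axis=1, keepdims=True))
    noise = np.sign(z) * unit
    return z + eps * noise
```

Because the per-row noise has unit norm before scaling, every node is displaced by exactly `eps`, which keeps the two views close without touching edges or node features.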
“…However, the unfiltered A_i contains many noisy nodes, namely "false" positives that are semantically contrary to the target node and lead to poor performance when regarded as positives. Therefore, to filter out "false" positives among the neighboring nodes, following [11], we compute the cosine similarity between the target node and all other nodes in the graph as follows:…”
Section: Mixing Layer
confidence: 99%
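The filtering step quoted above can be illustrated with a short sketch: rank candidate positives by cosine similarity to the target node's embedding and keep only the top-k, discarding likely "false" positives. This is a hypothetical NumPy illustration; the function name, `candidates` set, and `k` are assumptions for the example, not the cited paper's exact procedure.

```python
import numpy as np

def filter_positives(z, candidates, target, k=5):
    """Keep the k candidate nodes most cosine-similar to the target node.

    z: (num_nodes, dim) array of node embeddings.
    candidates: list of candidate positive node indices (e.g. neighbors).
    target: index of the target node.
    Returns up to k candidate indices, sorted by descending similarity.
    """
    # L2-normalize rows so dot products become cosine similarities.
    zn = z / np.linalg.norm(z, axis=1, keepdims=True)
    sims = zn[candidates] @ zn[target]
    order = np.argsort(-sims)           # descending similarity
    return [candidates[i] for i in order[:k]]
```

Candidates with low similarity to the target (the "false" positives in the snippet) simply fall outside the top-k and are never treated as positives in the contrastive loss.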