2022
DOI: 10.1609/aaai.v36i7.20748
Simple Unsupervised Graph Representation Learning

Abstract: In this paper, we propose a simple unsupervised graph representation learning method to conduct effective and efficient contrastive learning. Specifically, the proposed multiplet loss explores the complementary information between structural information and neighbor information to enlarge the inter-class variation, and adds an upper-bound loss to keep the distance between positive embeddings and anchor embeddings finite, reducing the intra-class variation. As a result, both enlarging inter-cla…
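The abstract only names the two objectives, so the following is a minimal PyTorch sketch of how a multiplet loss with structural and neighbor positives plus an upper-bound term might look. The margin m, the bound eps, and all function names here are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch only (assumed hyperparameters m, eps; not the paper's code).
import torch
import torch.nn.functional as F

def multiplet_loss(anchor, pos_struct, pos_neigh, neg, m=1.0, eps=0.5):
    """anchor, pos_struct, pos_neigh, neg: (N, d) embedding matrices."""
    d_ps = (anchor - pos_struct).pow(2).sum(dim=1)  # anchor vs. structural positive
    d_pn = (anchor - pos_neigh).pow(2).sum(dim=1)   # anchor vs. neighbor positive
    d_ng = (anchor - neg).pow(2).sum(dim=1)         # anchor vs. negative

    # Triplet-style terms: keep each positive at least margin m closer
    # than the negative (enlarges inter-class variation).
    l_inter = F.relu(d_ps - d_ng + m).mean() + F.relu(d_pn - d_ng + m).mean()

    # Upper-bound terms: cap the anchor-positive distance at eps
    # (reduces intra-class variation).
    l_intra = F.relu(d_ps - eps).mean() + F.relu(d_pn - eps).mean()

    return l_inter + l_intra

# Toy usage with random embeddings
h = torch.randn(8, 16)
loss = multiplet_loss(h, h + 0.1 * torch.randn_like(h),
                      h + 0.1 * torch.randn_like(h), torch.randn(8, 16))
```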

Cited by 61 publications (16 citation statements). References 22 publications.
“…Binary COSTA (Zhang et al. 2022), HFA: adding random noise into node embeddings. Binary SUGRL (Mo et al. 2022), nonsynthetic: 1-hop neighbours and the output of 2 different models. Binary AFGRL (Lee, Lee, and Park 2022), nonsynthetic: 1-hop neighbours and KNN-similar nodes.…”
Section: Methods (mentioning, confidence: 99%)
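Taking the snippet's description of HFA at face value ("adding random noise into node embeddings"), a minimal sketch of such a synthetic view could look as follows; the noise scale sigma is an assumption, and COSTA's actual augmentation may differ in detail.

```python
import torch

def noisy_view(h: torch.Tensor, sigma: float = 0.1) -> torch.Tensor:
    """Return a perturbed view of node embeddings h (shape (N, d))
    by adding Gaussian noise, as the snippet describes for HFA."""
    return h + sigma * torch.randn_like(h)
```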
“…Fine-grained contrastive justification vs. binary; synthetic-based: GRACE (Zhu et al. 2020), COSTA (Zhang et al. 2022); nonsynthetic-based: SUGRL (Mo et al. 2022), AFGRL (Lee, Lee, and Park 2022); ours: our proposed GSCL. Current GCL adopts the binary contrastive justification setting, which aims to make the similarities between samples in the positive views as large as possible relative to samples in the negative views.…”
Section: Binary Contrastive Justification (mentioning, confidence: 99%)
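The "binary contrastive justification" the snippet describes is commonly realised with an InfoNCE-style objective; below is a minimal sketch under that assumption, with the temperature tau as an illustrative parameter rather than a value from any of the cited papers.

```python
import torch
import torch.nn.functional as F

def binary_contrastive_loss(z1, z2, tau=0.5):
    """z1, z2: (N, d) embeddings of the same N nodes under two views.
    Pair (i, i) is the positive; every pair (i, j != i) is a negative."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau          # (N, N) similarity matrix
    targets = torch.arange(z1.size(0))  # positives sit on the diagonal
    return F.cross_entropy(logits, targets)
```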
“…Most of these models achieve their best performance when using two or three convolution layers and show dramatic performance degradation with more layers. A more recent model, SUGRL [28], leverages feature shuffling and graph sampling to explore the complementary information between structural and neighborhood information, expanding inter-class variation in an unsupervised setting. MixHop [11] is one successful attempt to aggregate the information of multi-level neighbors with a shallow architecture.…”
Section: Preliminary and Related Work (mentioning, confidence: 99%)
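Feature shuffling, as the snippet uses the term, is typically a row-wise permutation of the feature matrix that breaks the node-feature correspondence, so the shuffled nodes act as negatives (the DGI-style corruption). The sketch below assumes that reading.

```python
import torch

def shuffle_features(x: torch.Tensor) -> torch.Tensor:
    """x: (N, d) node feature matrix. Permuting the rows detaches
    features from their nodes, yielding negative samples."""
    return x[torch.randperm(x.size(0))]
```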