2023
DOI: 10.1109/tkde.2021.3131584

Self-Supervised Learning on Graphs: Contrastive, Generative, or Predictive

Cited by 139 publications (67 citation statements)
References 40 publications
“…Considering the advantages of contrastive learning (see Section 3.2), GNN-based contrastive learning approaches are suitable for extracting the embedding vectors of graphs of SVG elements in our structure-aware approach. Existing GNN-based contrastive learning approaches are mainly applied to three types of tasks [53], including node-level tasks such as node classification (e.g., DGI [45]), edge-level tasks such as link prediction (e.g., BiGi [3]) and graph-level tasks such as graph classification (e.g., InfoGraph [42]). Since we aim to represent the structural information in the graph of SVG elements with embedding vectors, InfoGraph [42], one of the state-of-the-art methods for graph embedding, is applied in our approach.…”
Section: Graph Neural Network
Mentioning confidence: 99%
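The graph-node contrast that DGI and InfoGraph build on can be sketched minimally in NumPy. This is a simplified illustration, not the published methods: it assumes a mean-pooling readout and a plain dot-product discriminator, whereas the actual models use learned GNN encoders and a bilinear discriminator, and `graph_node_bce_loss` is a hypothetical helper name.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def graph_node_bce_loss(h_pos, h_neg):
    # h_pos: (n, d) node embeddings of the real graph
    # h_neg: (n, d) node embeddings of a corrupted graph
    s = h_pos.mean(axis=0)            # graph-level summary via mean readout
    pos_scores = sigmoid(h_pos @ s)   # real nodes: pushed toward 1
    neg_scores = sigmoid(h_neg @ s)   # corrupted nodes: pushed toward 0
    eps = 1e-9                        # numerical floor for the logs
    return -(np.log(pos_scores + eps).mean()
             + np.log(1.0 - neg_scores + eps).mean())
```

Training the encoder to minimize this binary cross-entropy makes node embeddings informative about their own graph's summary, which is the mutual-information intuition behind both DGI and InfoGraph.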
“…For example, GRACE [17] contrasts node-node pairs, GraphCL [16] considers graph-graph pairs, while DGI [13], InfoGraph [19], and MVGRL [14] construct graph-node contrastive pairs. Although there have been several survey papers on self-supervised graph representation learning [20][21][22], to the best of our knowledge, none of the existing works provides rigorous empirical evidence on the impact of each component in GCL. In this work, we propose a rather complete dissection of existing work and provide empirical insights into building an effective GCL algorithm.…”
Section: Background and Related Work
Mentioning confidence: 99%
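The node-node case among the pair types above can be illustrated with a minimal NT-Xent sketch in NumPy, assuming two augmented views of the same graph's node embeddings with same-index nodes as positives. This is a simplification: GRACE's actual objective is symmetric across the two views and also contrasts within each view, while this sketch scores only one cross-view direction.

```python
import numpy as np

def nt_xent_loss(z1, z2, tau=0.5):
    # z1, z2: (n, d) embeddings of the same n nodes under two augmented views
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = (z1 @ z2.T) / tau                  # cross-view cosine similarities
    # row-wise log-softmax; the positive pair sits on the diagonal
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Graph-graph methods such as GraphCL apply the same loss at the level of pooled graph embeddings rather than individual nodes; only the objects being contrasted change.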
“…We note that although there have been several survey papers on self-supervised graph representation learning [20][21][22], to the best of our knowledge, none of the existing works provides rigorous empirical evidence on the impact of each component in GCL.…”
Section: Introduction
Mentioning confidence: 99%
“…GNN models are coupled with contrastive learning to learn graph- or node-level representations without relying on supervisory data [31]. The trained model can then transfer the learned representations to a priori unknown downstream tasks.…”
Section: B. Contrastive Learning
Mentioning confidence: 99%