Proceedings of the ACM Web Conference 2022
DOI: 10.1145/3485447.3512183
Adversarial Graph Contrastive Learning with Information Regularization

Cited by 34 publications (13 citation statements)
References 16 publications
“…Baselines We consider a number of node representation learning baselines including very recent SOTA GCL methods. Baselines trained without labels: DGI (Velickovic et al 2019), GRACE (Zhu et al 2020), MVGRL (Hassani and Khasahmadi 2020), BGRL (Thakoor et al 2021), GCA , COLES (Zhu, Sun, and Koniusz 2021), CCA-SSG (Zhang et al 2021), Ariel (Feng et al 2022a) and SimGRACE (Xia et al 2022). Baselines trained with labels: GCN (Kipf and Welling 2016), GAT (Velickovic et al 2017) and InfoGCL (Xu et al 2021).…”
Section: Methods
confidence: 99%
“…We can see that MA-GCL can achieve SOTA performance on 5 out of 6 graph benchmarks, and the relative improvement can go up to 2.7%. Considering that the public splits on Cora, Citeseer and PubMed might not be representative, we also investigate another benchmark setting (Feng et al 2022a) with random splits on these three datasets, and compare with the most competitive baselines. As shown in Table 2, MA-GCL consistently outperforms baseline methods, which demonstrates the effectiveness of MA-GCL.…”
Section: Comparison With Baseline Methods
confidence: 99%
“…where à = A + I n is the adjacency matrix with selfconnections added and I n is the identity matrix, D is the diagonal degree matrix of à with D[i, i] = j Ã[i, j]. The two-layer GCN is given as f (A, X) = σ( Âσ( ÂXW (1) )W (2) ),…”
Section: Graph Encoder
confidence: 99%
“…where W^(1) and W^(2) are the weights of the first and second layer respectively, and σ(·) is the activation function.…”
Section: Graph Encoder
confidence: 99%
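The two-layer GCN encoder quoted above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the symmetric normalization Â = D^(−1/2) Ã D^(−1/2) is the standard GCN formulation, and ReLU is assumed as the activation σ.

```python
import numpy as np

def normalize_adjacency(A):
    # Ã = A + I_n: adjacency matrix with self-connections added
    n = A.shape[0]
    A_tilde = A + np.eye(n)
    # D[i, i] = Σ_j Ã[i, j]: diagonal degree matrix of Ã
    deg = A_tilde.sum(axis=1)
    # Â = D^(-1/2) Ã D^(-1/2): symmetric normalization (standard GCN)
    d_inv_sqrt = 1.0 / np.sqrt(deg)
    return A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_forward(A, X, W1, W2, sigma=lambda z: np.maximum(z, 0.0)):
    # f(A, X) = σ(Â σ(Â X W^(1)) W^(2)), with ReLU assumed as σ
    A_hat = normalize_adjacency(A)
    return sigma(A_hat @ sigma(A_hat @ X @ W1) @ W2)
```

Each layer mixes every node's features with its (self-looped) neighbors' features via Â before the learned linear map, so two layers aggregate information from each node's two-hop neighborhood.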