2021
DOI: 10.48550/arXiv.2110.14855
Preprint

CAP: Co-Adversarial Perturbation on Weights and Features for Improving Generalization of Graph Neural Networks

Abstract: Despite the recent advances of graph neural networks (GNNs) in modeling graph data, training GNNs on large datasets is notoriously hard due to overfitting. Adversarial training, which augments the data with worst-case adversarial examples, has been widely shown to improve a model's robustness against adversarial attacks and its generalization ability. However, while previous adversarial training has generally focused on protecting GNNs from malicious attacks, it remains unclear how the adversarial t…
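
Since the abstract is cut off, the precise CAP update is not reproduced here; the following is a minimal PyTorch sketch of the general idea it describes — alternating a worst-case perturbation on the input features with a worst-case perturbation on the weights before the usual descent step. Everything below (the function name, the single-step sign perturbations, eps_feat, eps_w, and the toy linear model) is an illustrative assumption, not the authors' implementation.

```python
# Hedged sketch of co-adversarial training on features and weights.
# All names and hyperparameters are illustrative, not from the paper.
import torch

def co_adversarial_step(model, x, y, loss_fn, optimizer,
                        eps_feat=0.01, eps_w=0.005):
    """One training step with worst-case perturbations on both the
    input features and the model weights, followed by a descent step
    applied to the unperturbed weights."""
    # 1. adversarial perturbation on features (single FGSM-style step)
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    grad_x, = torch.autograd.grad(loss, x_adv)
    x_adv = (x_adv + eps_feat * grad_x.sign()).detach()

    # 2. adversarial perturbation on weights (ascent step)
    loss = loss_fn(model(x_adv), y)
    grads = torch.autograd.grad(loss, list(model.parameters()))
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p.add_(eps_w * g.sign())   # move weights toward the worst case

    # 3. descent step: gradient taken at the perturbed weights
    optimizer.zero_grad()
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    with torch.no_grad():              # undo the weight perturbation
        for p, g in zip(model.parameters(), grads):
            p.sub_(eps_w * g.sign())
    optimizer.step()                   # update the restored weights
    return loss.item()

# illustrative usage with a toy linear model (assumed, not a GNN)
model = torch.nn.Linear(8, 3)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.randn(32, 8)
y = torch.randint(0, 3, (32,))
print(co_adversarial_step(model, x, y, torch.nn.functional.cross_entropy, opt))
```

The design point worth noting: the descent gradient is computed at the perturbed weights but applied to the restored ones, so training is steered toward flat, perturbation-tolerant regions rather than drifting along the adversarial ascent.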

Cited by 2 publications (2 citation statements)
References 28 publications

“…These existing spatial clustering methods often encounter the problem of learning degenerate identity mappings, where the latent embedding space lacks meaningful structure. In addition, GNNs can be prone to overfitting [21], which hinders accurate identification of spatial domain boundaries.…”
Section: Introduction
confidence: 99%
“…However, GNNs may overfit, reducing their generalization ability [17]. Furthermore, node representations tend to become indistinguishable because the message-passing mechanism aggregates information from neighbors, a phenomenon called over-smoothing.…”
Section: Introduction
confidence: 99%
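
The over-smoothing effect mentioned in the last statement is easy to demonstrate. In the hedged NumPy sketch below (the graph, initial features, and depth are all made-up examples), repeated neighbor averaging — the core of message passing — collapses node features toward a common value.

```python
# Hedged illustration (not from the cited papers): repeated mean
# aggregation over neighbors drives node features together, i.e.
# "over-smoothing". The 4-node path graph and features are assumed.
import numpy as np

A = np.array([[1, 1, 0, 0],      # adjacency with self-loops
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
A_hat = A / A.sum(axis=1, keepdims=True)   # row-normalized propagation

X = np.array([[1.0], [0.0], [0.0], [-1.0]])  # initial 1-d node features
for layer in range(10):
    X = A_hat @ X                            # one round of message passing
    print(f"layer {layer + 1}: spread = {X.max() - X.min():.4f}")
# the spread shrinks toward 0: node representations become indistinguishable
```

After a handful of layers the spread is near zero, which is why deep message-passing stacks tend to produce indistinguishable node representations.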