Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence 2021
DOI: 10.24963/ijcai.2021/310

Understanding Structural Vulnerability in Graph Convolutional Networks

Abstract: Recent studies have shown that Graph Convolutional Networks (GCNs) are vulnerable to adversarial attacks on the graph structure. Although multiple works have been proposed to improve their robustness against such structural adversarial attacks, the reasons for the success of the attacks remain unclear. In this work, we theoretically and empirically demonstrate that structural adversarial examples can be attributed to the non-robust aggregation scheme (i.e., the weighted mean) of GCNs. Specifically, our analysis…
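
As a concrete illustration of the abstract's claim, the sketch below contrasts the weighted-mean aggregation used by GCNs with an elementwise median: a single injected neighbour with extreme features shifts the mean arbitrarily far, but barely moves the median. This is a minimal NumPy illustration of the general principle, not the paper's code; the median is used as a stand-in robust aggregator and all values are invented.

```python
# The weighted mean has a breakdown point of 0: one adversarially inserted
# neighbour can move the aggregate arbitrarily far. A robust estimator such
# as the elementwise median is far less affected.
import numpy as np

rng = np.random.default_rng(0)
neighbours = rng.normal(loc=1.0, scale=0.1, size=(10, 4))  # benign neighbour features
adversary = np.full((1, 4), 100.0)                         # one injected extreme neighbour
perturbed = np.vstack([neighbours, adversary])

mean_clean, mean_attacked = neighbours.mean(axis=0), perturbed.mean(axis=0)
median_clean, median_attacked = np.median(neighbours, axis=0), np.median(perturbed, axis=0)

print("mean shift:  ", np.linalg.norm(mean_attacked - mean_clean))      # large
print("median shift:", np.linalg.norm(median_attacked - median_clean))  # near zero
```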

Cited by 32 publications (21 citation statements: 0 supporting, 21 mentioning, 0 contrasting) · References 0 publications

“…In general, most robustness evaluations of GNNs are based on their defensive ability, as GNN algorithms are difficult to interpret. For example, current robust GNN techniques assess their methods using attack deterioration, i.e., the drop in accuracy after an attack relative to the accuracy without one [112], [106], [105], and the defence rate, which compares the attack success rate with and without defence strategies [118]. However, the robustness conclusions obtained by these metrics are based on specific perturbations (e.g., perturbations generated by specific adversarial attacks), meaning that general robustness against other attack algorithms cannot be guaranteed.…”
Section: Future Directions of Robust GNNs (mentioning)
confidence: 99%
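
For concreteness, here is one plausible reading of the two metrics named in this statement; the exact definitions in [112], [106], [105], and [118] may differ, so treat these formulas as assumptions for illustration rather than the cited papers' definitions.

```python
# Hedged sketch of the two robustness metrics named in the quote above.
def attack_deterioration(acc_clean: float, acc_attacked: float) -> float:
    """Accuracy lost to the attack, relative to clean accuracy."""
    return (acc_clean - acc_attacked) / acc_clean

def defence_rate(asr_without_defence: float, asr_with_defence: float) -> float:
    """Fraction of attack success removed by the defence."""
    return 1.0 - asr_with_defence / asr_without_defence

print(attack_deterioration(0.84, 0.61))  # e.g. ~0.27
print(defence_rate(0.90, 0.35))          # e.g. ~0.61
```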
“…Robustness: [44], [52], [112], [87], [108], [106], [105], [89], [118], [276]; Explainability: [95], [128], [145], [58], [22], [56], [132], [150], [160], [165], [161], [151]; Privacy: [174], [60], [208], [277], [202], [211], [45], [278]; Fairness: [64], [235]. … will directly contribute to the environmental well-being of GNNs. Moreover, introducing efficiency considerations into GNN training [258] will also promote the development of efficient GNNs.…”
Section: Robustness (mentioning)
confidence: 99%
“…[127], [33], [61], [75], [40], [20], [140], [34], [112] series of reliability threats. At a high level, we categorize these threats into three aspects, namely, inherent noise, distribution shift, and adversarial attack.…”
Section: Trustworthy Graph Learning (mentioning)
confidence: 99%
“…to the size of the input graph. In our experiments, we use four benchmark datasets from existing work [2,14], including Cora-ML [15], Cora, Citeseer, and Pubmed [16]. All these datasets are publicly available, with statistics shown in Table 1.…”
Section: Algorithm and Analysis (mentioning)
confidence: 99%
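
Assuming a PyTorch Geometric environment, the four cited benchmarks could be loaded as below; the cited papers may rely on different splits or preprocessed copies, so this is a sketch of one common setup rather than a reproduction of their experiments.

```python
# Load the four benchmarks named in the quote above via PyTorch Geometric.
from torch_geometric.datasets import Planetoid, CitationFull

cora = Planetoid(root="data/", name="Cora")           # planetoid split [16]
citeseer = Planetoid(root="data/", name="CiteSeer")
pubmed = Planetoid(root="data/", name="PubMed")
cora_ml = CitationFull(root="data/", name="Cora_ML")  # Cora-ML variant [15]

for ds in (cora, citeseer, pubmed, cora_ml):
    data = ds[0]
    print(ds.name, data.num_nodes, data.num_edges, ds.num_classes)
```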
“…The first line of existing defenses applies pre-processing steps to remove suspicious or noisy edges before training the GNNs [4,5]. The second line directly develops new, robust GNNs to defend against attacks [6,27,7,28,14]. For example, Zhu et al. [6] use Gaussian distributions as the hidden node representations so that the attacking effect can be absorbed; Jin et al. [7] jointly learn the graph structure and the graph neural networks.…”
Section: Related Work (mentioning)
confidence: 99%
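
As a sketch of the first, pre-processing line of defence, the snippet below drops edges whose endpoints share few binary features (Jaccard similarity), a common heuristic in this family; the threshold and helper function are illustrative assumptions, not code from [4] or [5].

```python
# Pre-processing defence sketch: structural attacks tend to connect dissimilar
# nodes, so edges with low feature overlap are treated as suspicious and removed.
import numpy as np

def jaccard_filter(edges, features, threshold=0.01):
    """Keep only edges whose endpoint binary features overlap enough."""
    kept = []
    for u, v in edges:
        fu, fv = features[u].astype(bool), features[v].astype(bool)
        inter = np.logical_and(fu, fv).sum()
        union = np.logical_or(fu, fv).sum()
        score = inter / union if union > 0 else 0.0
        if score >= threshold:
            kept.append((u, v))
    return kept

features = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 1]])
edges = [(0, 1), (0, 2)]                # (0, 2) shares no features
print(jaccard_filter(edges, features))  # [(0, 1)]
```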