Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security
DOI: 10.1145/3460120.3484796

A Hard Label Black-box Adversarial Attack Against Graph Neural Networks

Cited by 25 publications (6 citation statements) · References 29 publications

“…Thus, rather than utilising the gradients of the victim models directly, some black-box attacks [81] construct surrogate models and use the gradients obtained therefrom. Another study in black-box settings [103] uses adversarial graph search and query responses of the victim models to calculate the sign gradients. These approaches may require additional effort to determine the gradients (e.g., the time cost for the Topology attack using a surrogate model can be six to ten times that of a simple gradient-based approach [104]).…”
mentioning confidence: 99%
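The sign-gradient idea referenced in the statement above, estimating gradients from nothing but hard-label query responses, can be illustrated with a minimal Sign-OPT-style sketch. This is a generic illustration rather than the paper's exact algorithm; the names query_is_adversarial, theta, and g_theta are hypothetical, standing for a single hard-label query, the current search direction, and the current estimate of the distance to the decision boundary along that direction.

import numpy as np

def estimate_sign_gradient(query_is_adversarial, theta, g_theta,
                           num_directions=20, beta=0.01):
    """Estimate the sign gradient of the boundary-distance function g(theta)
    using only hard-label query responses (no scores, no model internals).

    query_is_adversarial(delta): one hard-label query; True if the input
                                 perturbed by delta is misclassified.
    theta:   current (normalized) search direction for the perturbation.
    g_theta: current estimate of the boundary distance along theta.
    """
    grad = np.zeros_like(theta)
    for _ in range(num_directions):
        # Random unit probe direction around theta.
        u = np.random.randn(*theta.shape)
        u /= np.linalg.norm(u)
        probe = theta + beta * u
        probe /= np.linalg.norm(probe)
        # If stepping the same distance along the probed direction still flips
        # the label, the boundary is no farther that way, so the directional
        # derivative of g is non-positive (sign -1); otherwise it is +1.
        sign = -1.0 if query_is_adversarial(g_theta * probe) else 1.0
        grad += sign * u
    return grad / num_directions

Averaging the signed probe directions yields a descent direction for the boundary distance at a cost of one hard-label query per probe, which is what makes this style of attack feasible without scores or surrogate gradients.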
“…Some other works [29, 16] focus on attacking community detection; they are heuristic and orthogonal to our work. The work [21] closest to ours designed a black-box attack on GNNs for graph classification, based on gradient-free ZOO [5]. However, it also does not have theoretically guaranteed attack performance.…”
Section: Related Work
mentioning confidence: 99%
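For contrast with the hard-label setting, the gradient-free ZOO approach referenced above estimates gradients by finite differences over the victim model's queried outputs. A minimal sketch, assuming a scalar attack_loss built from the model's response; attack_loss, x, and coords are hypothetical names rather than an API from the cited work.

import numpy as np

def zoo_gradient_estimate(attack_loss, x, coords, h=1e-4):
    """ZOO-style zeroth-order gradient estimation via symmetric finite
    differences, using only queried model outputs through attack_loss.

    attack_loss(x): scalar loss computed from the victim model's response to x.
    coords:         flat indices of x to estimate this step; real attacks use a
                    small random subset to keep the query budget manageable.
    """
    grad = np.zeros_like(x, dtype=float)
    for i in coords:
        e = np.zeros_like(x, dtype=float)
        e.flat[i] = h
        # Two queries per coordinate: loss at x + h*e_i and at x - h*e_i.
        grad.flat[i] = (attack_loss(x + e) - attack_loss(x - e)) / (2.0 * h)
    return grad

Because this estimator needs the model's output scores to form a loss, it does not apply directly in the hard-label setting; that gap is exactly what the sign-gradient formulation above addresses.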
“…Existing studies have shown that GNNs are vulnerable to adversarial attacks [8], [11], [16], [31], [35], [36], [38], [50], [55], [56], [65], which deceive GNN models into making wrong predictions for graph classification or node classification. Depending on the stage at which these attacks occur, they can be classified into training-time poisoning attacks [34], [50], [66] and test-time adversarial attacks [9], [10], [35].…”
Section: Adversarial Attacks on GNNs
mentioning confidence: 99%
“…Depending on the stage at which these attacks occur, they can be classified into training-time poisoning attacks [34], [50], [66] and test-time adversarial attacks [9], [10], [35]. Based on the attacker's knowledge, adversarial attacks can also be categorized into white-box attacks [56], [65] and black-box attacks [8], [35], [38].…”
Section: Adversarial Attacks on GNNs
mentioning confidence: 99%