2021
DOI: 10.1145/3447556.3447566

Adversarial Attacks and Defenses on Graphs

Abstract: Deep neural networks (DNNs) have achieved strong performance on a wide range of tasks. However, recent studies have shown that DNNs can be easily fooled by small perturbations of the input, known as adversarial attacks.
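The abstract's notion of an adversarial attack can be made concrete with a short sketch. Below is a minimal FGSM-style perturbation, assuming a differentiable PyTorch classifier; `model`, `x`, `y`, and `epsilon` are illustrative placeholders, not artifacts of the surveyed paper.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return x plus a small perturbation chosen to increase the loss."""
    # Assumed interface: model(x) returns class logits (illustrative only).
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that most increases the loss, bounded by
    # epsilon in the L-infinity norm; the input barely changes, the
    # prediction often does.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```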

Cited by 117 publications (69 citation statements)
References 38 publications
“…There has been increasing research interest in adversarial attacks on GNNs recently. Detailed expositions of existing literature are made available in a couple of survey papers [12,23]. Given the heterogeneous nature of diverse graph structured data, there are numerous adversarial attack setups for GNN models.…”
Section: Related Work
“…Given the heterogeneous nature of diverse graph structured data, there are numerous adversarial attack setups for GNN models. Following the taxonomy provided by Jin et al [12], the adversarial attack setup can be categorized based on (but not limited to) the machine learning task, the goal of the attack, the phase of the attack, the form of the attack, and the model knowledge that attacker has access to. First, there are two common types of tasks, node-level classification [5,6,27,34] and graph-level classification [5,25].…”
Section: Related Work
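The taxonomy quoted above lends itself to a structured representation. The sketch below encodes its dimensions as enums; the member names are assumptions loosely following Jin et al.'s survey categories, not an official API.

```python
from dataclasses import dataclass
from enum import Enum

class Task(Enum):
    NODE_CLASSIFICATION = "node-level"
    GRAPH_CLASSIFICATION = "graph-level"

class Phase(Enum):
    POISONING = "training-time"  # perturb the graph before training
    EVASION = "test-time"        # perturb the graph after training

class Knowledge(Enum):
    WHITE_BOX = "full model access"
    GRAY_BOX = "partial access"
    BLACK_BOX = "query access only"

@dataclass
class AttackSetup:
    """One point in the attack-setup taxonomy (illustrative)."""
    task: Task
    phase: Phase
    knowledge: Knowledge

setup = AttackSetup(Task.NODE_CLASSIFICATION, Phase.EVASION, Knowledge.WHITE_BOX)
```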
“…In this work, we assume that the parameters of the model are fixed before and after perturbation. In the adversarial learning literature, an attack that modifies the input to cause large changes in the output whilst the model parameters are fixed is known as an evasion attack [11]. Robust models in the context of our work are those that are robust to evasion attacks with respect to the graph structure.…”
Section: Stability of Polynomial Filters
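A minimal sketch of such an evasion attack on graph structure follows, assuming a trained PyTorch GNN with the hypothetical interface `model(features, adj)` and a small dense graph; the greedy edge-flip scoring is illustrative, not the attack from the cited work.

```python
import torch
import torch.nn.functional as F

def evasion_edge_flip(model, features, adj, labels, budget=5):
    """Greedily flip the `budget` edges whose change most raises the loss."""
    model.eval()  # parameters stay fixed before and after perturbation
    adj = adj.clone()
    for _ in range(budget):
        a = adj.detach().requires_grad_(True)
        loss = F.cross_entropy(model(features, a), labels)
        loss.backward()
        # Gain from flipping entry (i, j): a positive gradient favors adding
        # a missing edge, a negative gradient favors removing an existing one.
        gain = a.grad * (1 - 2 * adj)
        idx = int(torch.argmax(gain))
        i, j = idx // adj.size(0), idx % adj.size(0)
        adj[i, j] = adj[j, i] = 1 - adj[i, j]  # symmetric flip
    return adj
```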
“…Like their Euclidean counterparts, GCNNs are susceptible to adversarial attacks [9,10,11]. An adversarial attack is a small but targeted perturbation of the input which causes large changes in the output [12].…”
Section: Introduction
“…(1) Neural-network encoders $f_{\mathrm{comp}}: \mathcal{X}_{\mathrm{comp}} \rightarrow \mathbb{R}^{N_{\mathrm{comp}} \times D}$ and $f_{\mathrm{prot}}: \mathcal{X}_{\mathrm{prot}} \rightarrow \mathbb{R}^{N_{\mathrm{prot}} \times D}$ separately extract embeddings $H_{\mathrm{comp}}$ and $H_{\mathrm{prot}}$ for the compound $X_{\mathrm{comp}}$ and the protein $X_{\mathrm{prot}}$, where $D$ is the hidden dimension. A graph neural network (GNN) [14,15,16,17,18,19] is adopted for the compound's 2D chemical graph, and a hierarchical recurrent neural network (HRNN) [20] is chosen for the protein's 1D amino-acid sequence.…”
Section: Pipeline Overview
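A compact sketch of the two-branch encoder design described in this excerpt, assuming PyTorch; a single dense graph-convolution layer and a plain GRU stand in for whatever GNN and hierarchical-RNN variants the cited pipeline actually uses.

```python
import torch
import torch.nn as nn

class CompoundEncoder(nn.Module):
    """f_comp: atom features + adjacency -> (N_comp, D) embeddings."""
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hidden_dim)

    def forward(self, x, adj):  # x: (N_comp, in_dim), adj: (N_comp, N_comp)
        return torch.relu(adj @ self.lin(x))  # one graph-convolution step

class ProteinEncoder(nn.Module):
    """f_prot: amino-acid token ids -> (N_prot, D) embeddings."""
    def __init__(self, vocab_size, hidden_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.rnn = nn.GRU(hidden_dim, hidden_dim, batch_first=True)

    def forward(self, seq):  # seq: (1, N_prot) token ids
        out, _ = self.rnn(self.embed(seq))
        return out.squeeze(0)  # (N_prot, D)
```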