Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective
Preprint, 2019
DOI: 10.48550/arxiv.1906.04214

Cited by 47 publications (91 citation statements)
References 11 publications
“…Some works try to model the attack as an optimization problem. PGD and MinMax [27] optimize the negative cross-entropy loss using the gradient. Nettack [40] iteratively selects augmentations by calculating the score/loss of each possible augmentation and chooses the one that maximizes the loss.…”
Section: Related Work 2.1 Graph Adversarial Attack (mentioning)
confidence: 99%
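The PGD/MinMax formulation quoted above relaxes discrete edge flips into a continuous perturbation variable and runs projected gradient steps on the attack loss. Below is a minimal PyTorch sketch of that idea, assuming a hypothetical two-layer GCN surrogate (gcn_forward) and a crude rescaling step in place of the exact budget projection of [27]; it is illustrative only, not the cited method.

```python
import torch
import torch.nn.functional as F

# Hypothetical 2-layer GCN surrogate; the actual attack differentiates through the victim model.
def gcn_forward(adj, x, w1, w2):
    a = adj + torch.eye(adj.size(0))           # add self-loops
    a = a / a.sum(dim=1, keepdim=True)         # row-normalize the adjacency
    return a @ torch.relu(a @ x @ w1) @ w2     # logits, shape (num_nodes, num_classes)

def pgd_topology_attack(adj, x, labels, w1, w2, budget=10, steps=100, lr=0.1):
    n = adj.size(0)
    s = torch.zeros(n, n, requires_grad=True)  # relaxed edge-perturbation variable in [0, 1]
    for _ in range(steps):
        s_sym = (s + s.t()) / 2                        # keep the perturbation symmetric
        pert_adj = adj + (1 - 2 * adj) * s_sym         # s toggles existing edges off / missing edges on
        loss = F.cross_entropy(gcn_forward(pert_adj, x, w1, w2), labels)
        grad, = torch.autograd.grad(loss, s)
        with torch.no_grad():
            s += lr * grad                             # gradient ascent on the attack loss
            s.clamp_(0.0, 1.0)
            if s.sum() > budget:                       # crude rescaling instead of an exact projection
                s *= budget / s.sum()
    return (s.detach() > 0.5).float()                  # discretized perturbation mask
```

The 0.5 threshold and the rescaling step here stand in for the projection and sampling details of the full PGD attack; they keep the sketch short rather than faithful.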
“…We compare CLGA with five baseline untargeted poisoning attacks, including PGD [27], DICE [26], MinMax [27], Metattack [41], and the unsupervised node embedding attack proposed by Bojchevski et al. [2]. Among these baselines, only Bojchevski et al. [2] is unsupervised and does not need labels, which is the same as our method.…”
Section: Baselines (mentioning)
confidence: 99%
“…In the field of graph adversarial learning, researchers have proposed various settings for attacks. According to the information available to the attackers, attacks can be divided into white-box attacks (attackers can obtain all the information of the victim model) [21,25,28], gray-box attacks (the parameters of the victim and the test labels are invisible) [12,32,33], and black-box attacks (labels are invisible but attackers can make black-box queries to the prediction) [4,14]. The phase at which an adversarial attack happens, i.e., model training or model testing, determines whether the attack is a poisoning attack or an evasion attack.…”
Section: Introduction (mentioning)
confidence: 99%
“…Among the proposed gray-box untargeted attack methods, gradient-based attackers [12,13,28,33] have been shown to deliver enhanced attack performance and have become one of the mainstream attack strategies. Gradient-based attackers rely on a pre-trained GNN classifier, known as a surrogate model, to obtain gradients with respect to node features and graph structure, based on which they generate perturbations.…”
Section: Introduction (mentioning)
confidence: 99%
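As a rough illustration of the surrogate-gradient idea in the excerpt above (a sketch under assumptions, not the exact procedure of any cited attacker), a single greedy gray-box step can score every candidate edge flip by the gradient of the surrogate's loss with respect to the adjacency matrix. Here `surrogate` stands for any differentiable classifier taking the adjacency and node features, e.g. a partial application of the hypothetical gcn_forward sketched earlier.

```python
import torch
import torch.nn.functional as F

def greedy_edge_flip(surrogate, adj, x, labels):
    """One greedy gray-box step: flip the adjacency entry whose first-order
    effect on the surrogate's training loss is the largest."""
    adj = adj.clone().requires_grad_(True)
    loss = F.cross_entropy(surrogate(adj, x), labels)
    grad = torch.autograd.grad(loss, adj)[0]
    # Flipping entry (i, j) from a to 1 - a changes the loss by roughly grad[i, j] * (1 - 2a).
    score = grad * (1 - 2 * adj.detach())
    score.fill_diagonal_(float("-inf"))         # never add or remove self-loops
    i, j = divmod(int(score.argmax()), adj.size(1))
    flipped = adj.detach().clone()
    flipped[i, j] = flipped[j, i] = 1 - flipped[i, j]
    return flipped

# Example wiring with the earlier sketch (illustrative):
# surrogate = lambda a, feats: gcn_forward(a, feats, w1, w2)
# adj_attacked = greedy_edge_flip(surrogate, adj0, features, labels)
```

Repeating this step up to a fixed budget gives a simple greedy baseline; the cited gradient-based attackers differ mainly in how they score and discretize the perturbations.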