2020
DOI: 10.1109/tcss.2020.3004059
Link Prediction Adversarial Attack Via Iterative Gradient Attack

Cited by 46 publications (17 citation statements)
References 48 publications
“…Interestingly, our observations in the case studies agree only partially with a heuristic community-detection attack strategy called DICE [42], which has been used as a baseline for attacking link prediction in [4]. Inspired by modularity, DICE randomly disconnects edges internally and connects nodes externally [42], with the goal of hiding a group of nodes from being detected as a community.…”
Section: Case Studies (supporting)
confidence: 58%
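The quoted passage describes the DICE heuristic only in words: disconnect edges inside the target community, connect community nodes to outside nodes. A minimal sketch of that idea follows; the function name `dice_attack` and its interface are hypothetical illustrations based on this description, not the reference implementation from [42].

```python
import random

def dice_attack(edges, community, n_perturbations, all_nodes, seed=0):
    """DICE heuristic: Disconnect Internally, Connect Externally.

    edges: set of frozensets {u, v}; community: set of nodes to hide.
    Each perturbation either removes a random edge inside the community
    or adds a random edge from the community to an outside node.
    """
    rng = random.Random(seed)
    edges = set(edges)
    internal = [e for e in edges if e <= community]
    outside = [v for v in all_nodes if v not in community]
    for _ in range(n_perturbations):
        if rng.random() < 0.5 and internal:
            # disconnect: drop a random internal edge
            e = internal.pop(rng.randrange(len(internal)))
            edges.discard(e)
        else:
            # connect: link a community node to an external node
            u = rng.choice(sorted(community))
            v = rng.choice(outside)
            e = frozenset((u, v))
            if e not in edges:
                edges.add(e)
    return edges
```

Because each perturbation touches at most one edge, the perturbed graph differs from the original by at most `n_perturbations` edges, which is the budget constraint typically assumed in these attacks.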
“…The robustness of NE-based link prediction is much less investigated than that of classification, and is more often treated as a way to evaluate the robustness of the NE method itself, as in [34,2,38]. To the best of our knowledge, there are only two works on adversarial attacks for link prediction based on NE: one targeting the GNN-based SEAL [51] with structural perturbations and one targeting GCN with an iterative gradient attack [4].…”
Section: Related Work (mentioning)
confidence: 99%
“…Now we present our sign stochastic gradient descent (signSGD) algorithm to solve the attack optimization problem in Eq. (7). Before presenting signSGD, we first describe the method to compute f(Θ) when only the hard label is returned by querying the GNN model, and then introduce a query-efficient gradient-computation algorithm to compute the gradients of f(Θ).…”
Section: Generating Adversarial Graphs Via signSGD (mentioning)
confidence: 99%
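The passage names the signSGD update but does not show it. The core idea is simply to step in the direction of the sign of the gradient rather than the gradient itself, which is attractive in the black-box setting because only the sign of each gradient component must be estimated from queries. A minimal sketch on a toy objective, assuming an exact gradient oracle in place of the query-based estimator described in the text (`sign_sgd` and `grad_fn` are illustrative names, not from the paper):

```python
import numpy as np

def sign_sgd(grad_fn, theta0, lr=0.1, steps=100):
    """Minimal signSGD: update parameters with the sign of the gradient.

    grad_fn returns a gradient (estimate) at theta; in the hard-label
    black-box setting this would be a query-efficient estimate, stubbed
    here with an exact gradient for illustration.
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(steps):
        theta = theta - lr * np.sign(grad_fn(theta))
    return theta

# toy quadratic: minimize ||theta - target||^2
target = np.array([1.0, -2.0, 0.5])
theta = sign_sgd(lambda t: 2 * (t - target), np.zeros(3), lr=0.05, steps=200)
```

Because only the sign enters the update, the iterate converges to within roughly one step size `lr` of the minimizer and then oscillates there, which is why a small or decaying step size is used in practice.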
“…Existing studies have shown that GNNs are vulnerable to adversarial attacks [5,6,33,46,48,61], which deceive a GNN into producing wrong labels for specific target graphs (in graph classification tasks) or target nodes (in node classification tasks). According to the stage at which these attacks occur, they can be classified into training-time poisoning attacks [30,46,53,63,64] and testing-time adversarial attacks [7,9,27,32,43,48]. In this paper, we focus on testing-time adversarial attacks against classification tasks.…”
Section: Related Work (mentioning)
confidence: 99%