Proceedings of the 15th ACM Asia Conference on Computer and Communications Security 2020
DOI: 10.1145/3320269.3384750
Adversarial Attacks on Link Prediction Algorithms Based on Graph Neural Networks

Abstract: Link prediction is one of the fundamental problems for graph-structured data. However, a number of applications of link prediction, such as predicting commercial ties or memberships within a criminal organization, are adversarial, with another party aiming to minimize its effectiveness by manipulating observed information about the graph. In this paper, we focus on the feasibility of mounting adversarial attacks against link prediction algorithms based on graph neural networks. We first propose a greedy heuristic…
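The abstract is truncated before the heuristic is described, so the following is only a rough illustration of the general idea of a greedy edge-perturbation attack against a black-box link scorer, not the authors' actual method. The `score_link` callable and the flip budget are hypothetical placeholders:

```python
import itertools
import networkx as nx

def greedy_edge_attack(G, target, score_link, budget=5):
    """Greedy sketch: repeatedly flip the single edge whose addition or
    removal most lowers the predicted score of the target link.

    G          -- networkx.Graph the adversary is allowed to edit
    target     -- (u, v) pair whose predicted link the attacker wants hidden
    score_link -- hypothetical callable: score_link(G, u, v) -> float
    budget     -- maximum number of edge flips the attacker may make
    """
    u, v = target
    for _ in range(budget):
        best_flip, best_score = None, score_link(G, u, v)
        # Candidate flips: pairs among nodes near the target endpoints.
        nodes = set(G.neighbors(u)) | set(G.neighbors(v)) | {u, v}
        for a, b in itertools.combinations(nodes, 2):
            if {a, b} == {u, v}:
                continue  # the attacker cannot touch the target link itself
            flipped = G.copy()
            if flipped.has_edge(a, b):
                flipped.remove_edge(a, b)
            else:
                flipped.add_edge(a, b)
            s = score_link(flipped, u, v)
            if s < best_score:
                best_flip, best_score = (a, b), s
        if best_flip is None:
            break  # no single flip lowers the score further; stop early
        a, b = best_flip
        if G.has_edge(a, b):
            G.remove_edge(a, b)
        else:
            G.add_edge(a, b)
    return G
```

The greedy structure (one locally best flip per round, up to a budget) is the only part taken from the abstract; candidate generation and the scoring model are assumptions.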

Cited by 26 publications (14 citation statements). References 25 publications.
“…However, it would also be interesting to compare ALPINE with CNE against other types of link prediction methods to gain more insights. For example, a comparison of our work with a state-of-the-art link prediction approach (e.g., SEAL [45] according to [46,47]) could be used to show whether the differentiation between the unknown and the unlinked status together with active learning would improve the link prediction accuracy in general. Note that this type of comparison can be biased as we had three types of link statuses, while other link prediction methods usually have only two.…”
Section: Discussion (mentioning)
confidence: 99%
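The three-status distinction the preceding excerpt warns about (linked / unlinked / unknown, versus the usual binary setting) can be made concrete with a small sketch. The enum and the pair-partitioning helper below are illustrative assumptions, not ALPINE's or CNE's actual API:

```python
from enum import Enum

class LinkStatus(Enum):
    LINKED = 1      # observed edge
    UNLINKED = 0    # confirmed non-edge
    UNKNOWN = -1    # unobserved pair, modeled explicitly in the three-status setting

def partition_pairs(n_nodes, edges, known_non_edges):
    """Assign one of three statuses to every node pair.

    Binary link predictors collapse UNKNOWN into UNLINKED, which is the
    source of the comparison bias the excerpt mentions.
    """
    edge_set = {frozenset(e) for e in edges}
    non_edge_set = {frozenset(e) for e in known_non_edges}
    status = {}
    for i in range(n_nodes):
        for j in range(i + 1, n_nodes):
            pair = frozenset((i, j))
            if pair in edge_set:
                status[(i, j)] = LinkStatus.LINKED
            elif pair in non_edge_set:
                status[(i, j)] = LinkStatus.UNLINKED
            else:
                status[(i, j)] = LinkStatus.UNKNOWN
    return status
```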
“…To analyze the robustness of GNNs, various attack methods under different settings have been proposed to determine the vulnerabilities of graph embedding methods and help develop the corresponding defense methods. The attacks on graphs focus mainly on tasks such as classification [4,13,28], community detection [2,9], and link prediction [6,11]. Graph neural networks were shown to be vulnerable to attacks by Zügner et al. [32] in 2018, who proposed Nettack, one of the earliest graph adversarial attack methods, targeting node classification tasks.…”
Section: Related Work (mentioning)
confidence: 99%
“…Existing studies have shown that GNNs are vulnerable to adversarial attacks [5,6,33,46,48,61], which deceive a GNN into producing wrong labels for specific target graphs (in graph classification tasks) or target nodes (in node classification tasks). According to the stage at which these attacks occur, they can be classified into training-time poisoning attacks [30,46,53,63,64] and test-time adversarial attacks [7,9,27,32,43,48]. In this paper, we focus on test-time adversarial attacks against classification tasks.…”
Section: Related Work (mentioning)
confidence: 99%
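The poisoning/evasion split described in the excerpt above comes down to where the attacker intervenes in the pipeline. The sketch below is a generic illustration under that taxonomy; the `perturb`, `train`, and `model.predict` callables are placeholders, not any specific attack from the cited papers:

```python
def poisoning_attack(train_graph, perturb, train):
    """Training-time poisoning: corrupt the data *before* the model is fit."""
    poisoned = perturb(train_graph)   # attacker edits the training graph
    model = train(poisoned)           # victim trains on the corrupted data
    return model

def evasion_attack(model, test_graph, perturb):
    """Test-time (evasion) attack: the trained model is fixed; only the
    input presented at inference time is perturbed."""
    adversarial_input = perturb(test_graph)
    return model.predict(adversarial_input)
```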
“…While GNNs significantly boost the performance of graph data processing, existing studies show that GNNs are vulnerable to adversarial attacks [13,27,41,43,52,63]. However, almost all the existing attacks focus on attacking GNNs for node classification, leaving attacks against GNNs for graph classification largely unexplored, though graph classification has been widely applied [1,31,43,49].…”
Section: Introduction (mentioning)
confidence: 99%