2023
DOI: 10.1109/tcss.2022.3161016

Graph-Fraudster: Adversarial Attacks on Graph Neural Network-Based Vertical Federated Learning

Abstract: Graph neural network (GNN) has captured wide attention due to its capability of graph representation learning for graph-structured data. However, distributed data silos limit the performance of GNN. Vertical federated learning (VFL), an emerging technique for processing distributed data, makes it possible for GNN to handle distributed graph-structured data. Despite the prosperous development of vertical federated graph learning (VFGL), the robustness of VFGL against adversarial attacks has not been…

Cited by 18 publications (10 citation statements)
References 21 publications
“…FedVGCN [97] transfers homomorphically encrypted intermediate results between two clients to complete the node feature space, with the semi-honest server creating the encryption key pairs and performing the FL aggregation. Graph-Fraudster [98] studies adversarial attacks on the local raw data and node embeddings. It shows that the DP mechanism and the top-k mechanism are two possible defenses against the attacks.…”
Section: B. Vertical FedGNNs (mentioning)
confidence: 99%
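The DP and top-k defenses mentioned in the statement above both operate on the node embeddings a participant uploads. A minimal sketch of the two ideas in PyTorch, assuming dense embedding tensors; the function names and parameter values are illustrative and not taken from the cited papers:

import torch

def dp_defense(embeddings: torch.Tensor, noise_scale: float = 0.1) -> torch.Tensor:
    # DP-style defense: add Laplace noise to every coordinate so exact
    # embedding values cannot be read off by an adversary. (Illustrative;
    # a real DP guarantee also needs clipping and a privacy accountant.)
    noise = torch.distributions.Laplace(0.0, noise_scale).sample(embeddings.shape)
    return embeddings + noise

def topk_defense(embeddings: torch.Tensor, k: int = 8) -> torch.Tensor:
    # Top-k defense: keep only the k largest-magnitude entries of each node
    # embedding and zero out the rest, limiting the information uploaded.
    _, indices = embeddings.abs().topk(k, dim=-1)
    mask = torch.zeros_like(embeddings).scatter_(-1, indices, 1.0)
    return embeddings * mask

# Example: 4 nodes with 32-dimensional embeddings, both defenses applied.
emb = torch.randn(4, 32)
protected = topk_defense(dp_defense(emb, noise_scale=0.05), k=8)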
“…Security attacks demonstrated against VFL include backdoor attacks, introduced during the training phase to compromise the integrity of the model [59], and adversarial example attacks, which occur solely during the inference phase [60]. The active participant and the passive participants pose different levels of threat to VFL because of their differing knowledge of, and control over, the VFL system.…”
Section: A. Security Threats (mentioning)
confidence: 99%
“…Chen et al. proposed an adversarial example (AE) attack against graph VFL in the inference phase, namely Graph-Fraudster [60]. First, a malicious participant is randomly selected from all passive participants.…”
Section: A. Security Threats (mentioning)
confidence: 99%
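The attack summarized above perturbs the embedding that the malicious passive participant uploads. A minimal, hypothetical sketch of one FGSM-style perturbation step in PyTorch; the surrogate server_model, the honest participants' embeddings other_emb, and the single-step update are assumptions for illustration and do not reproduce the paper's exact procedure:

import torch
import torch.nn.functional as F

def perturb_embedding(local_emb: torch.Tensor,
                      server_model: torch.nn.Module,
                      other_emb: torch.Tensor,
                      labels: torch.Tensor,
                      epsilon: float = 0.1) -> torch.Tensor:
    # Treat the malicious participant's uploaded embedding as the attack
    # surface: backpropagate the server loss to it and take one signed
    # gradient-ascent step within an L-infinity budget epsilon.
    adv = local_emb.clone().detach().requires_grad_(True)
    logits = server_model(torch.cat([adv, other_emb], dim=-1))
    loss = F.cross_entropy(logits, labels)  # untargeted: increase this loss
    loss.backward()
    return (adv + epsilon * adv.grad.sign()).detach()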
“…FedGNNs present a distributed machine learning paradigm that facilitates collaborative training of GNNs among multiple parties while ensuring the privacy of their sensitive data. In recent years, extensive research has been conducted on FedGNNs, with a particular focus on addressing security concerns [15,16,17,18]. Among these concerns, poisoning attacks have garnered significant attention, encompassing both data poisoning attacks and model poisoning attacks.…”
Section: Related Work (mentioning)
confidence: 99%
“…Currently, the majority of attacks on FedGNNs concentrate on data poisoning. Chen et al. [15] proposed adversarial attacks on vertical federated learning, utilizing adversarial perturbations on global node embeddings based on gradient leakage from pairwise nodes. Additionally, Xu et al. [16] investigated centralized and distributed backdoor attacks on FedGNNs.…”
Section: Related Work (mentioning)
confidence: 99%