2022
DOI: 10.1108/ijwis-04-2022-0078

Inference attacks based on GAN in federated learning

Abstract: Purpose – In the digital age, organizations want to build more powerful machine learning models that can serve people's increasing needs. However, enhancing privacy and data security is one of the challenges for machine learning models, especially in federated learning. Parties want to collaborate to build a better model, but they do not want to reveal their own data. This study aims to introduce threats to and defenses against privacy leaks in collaborative learning models. Design/methodology…

Cited by 7 publications (5 citation statements)
References 35 publications
“…The parameter server lacks sufficient knowledge to properly train a collaborative machine learning model. A previous study, Ha et al. (2022), assumed that the participant was the target of the privacy leakage attack and compared the success rates of inference attacks from model parameters using GAN models.…”
Section: Inference Attacks
confidence: 99%
“…Melis et al. [16] used user-updated model parameters as input features for an attack model that infers attributes of other users' datasets. The literature [7,50,51] employs generative adversarial networks to recover training data from other users, and Mahendran et al. [22] investigate gradient-inversion information maximization to synthesize real data from trained networks, but both rely on a priori information from auxiliary datasets. Mordvintsev et al. [23] use only the gradients of the input to separate noise from image, making it difficult to obtain higher-fidelity information on large datasets.…”
Section: Gradient Update-based Data Leakage
confidence: 99%
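For orientation, here is a minimal, hypothetical sketch of how such a GAN-based recovery attack can be structured in federated learning: the attacker treats the frozen shared model as a discriminator and trains a local generator toward a victim class. The function name, the generator architecture and all hyperparameters below are illustrative assumptions, not the cited papers' exact setups.

```python
import torch
import torch.nn as nn

def gan_inference_sketch(global_model, target_class, latent_dim=100,
                         rounds=200, batch=64, lr=1e-3):
    """Hypothetical sketch of a GAN-based inference attack in federated
    learning: the frozen shared model acts as the discriminator, and a
    local generator is trained to emit samples the shared model assigns
    to the victim's class."""
    generator = nn.Sequential(            # toy generator for 28x28 inputs
        nn.Linear(latent_dim, 256), nn.ReLU(),
        nn.Linear(256, 28 * 28), nn.Tanh(),
    )
    opt = torch.optim.Adam(generator.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    target = torch.full((batch,), target_class, dtype=torch.long)
    for _ in range(rounds):
        z = torch.randn(batch, latent_dim)
        fake = generator(z).view(batch, 1, 28, 28)
        # Only the generator is updated; gradients flow through the
        # shared model, but its weights are never stepped here.
        loss = loss_fn(global_model(fake), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator
```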
“…We can still prove theoretically that the recovery of data labels is independent of the iterative training process of the network; refer to the proof of equation (3) for details.

(1) c′ ← {i : ∇W_i ≤ 0} // Get the subscript c′ of the negative last-layer bias gradient
(2) c_pred ← One_Hot(c′) // Convert c′ to one-hot code
(3) x′ ← N(0, 1) // Initialize virtual data with the same dimensions as x
(4) for i ← 1 to N do
(5)   ∇W′ ← ∂Loss(F(x′, W), c_pred)/∂W // Calculate virtual gradients
(6)   Loss_Wd ← WDCA(∇W′, ∇W) // WDCA is Algorithm 2
(7)   x′ ← x′ − η ∇_{x′} Loss_Wd
(8) end for
Output: x′
ALGORITHM 1: WDLG Algorithm.…”
Section: Comparison of the Accuracy of Predicted Labels
confidence: 99%
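Read as code, the quoted Algorithm 1 maps onto the following sketch. It assumes a PyTorch classifier F, observed gradients grad_W ordered like F.parameters() with the last-layer bias gradient last, and a batch-shaped x_shape; since WDCA (Algorithm 2) is not reproduced in the citation, a plain L2 gradient distance stands in for it.

```python
import torch

def wdlg_sketch(F, grad_W, x_shape, num_classes, N=300, eta=0.1):
    """Hypothetical sketch of the quoted WDLG loop (not the authors' code).

    F        -- a PyTorch classifier; F(x) returns logits of shape (1, C)
    grad_W   -- observed gradients, ordered like F.parameters(),
                with the last-layer bias gradient last (assumption)
    x_shape  -- shape of the virtual data, batch dimension included
    """
    # (1) Label recovery: the index whose last-layer bias gradient is
    #     negative identifies the ground-truth class.
    c = int((grad_W[-1] < 0).nonzero()[0])
    # (2) One-hot encode the recovered label.
    c_pred = torch.nn.functional.one_hot(torch.tensor([c]), num_classes)
    # (3) Initialize virtual data x' ~ N(0, 1).
    x_virt = torch.randn(x_shape, requires_grad=True)
    opt = torch.optim.SGD([x_virt], lr=eta)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(N):                              # (4) for i = 1..N
        opt.zero_grad()
        # (5) Virtual gradients dLoss(F(x'), c_pred)/dW.
        loss = loss_fn(F(x_virt), c_pred.argmax(dim=1))
        grad_virt = torch.autograd.grad(loss, F.parameters(),
                                        create_graph=True)
        # (6) Gradient-matching distance; L2 stands in for WDCA here.
        loss_wd = sum(((g_v - g_o) ** 2).sum()
                      for g_v, g_o in zip(grad_virt, grad_W))
        # (7) x' <- x' - eta * d(loss_wd)/dx'.
        loss_wd.backward()
        opt.step()
    return x_virt.detach()                          # Output: x'
```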
“…According to content and scenario, the data involved in AI applications can be divided into three categories: original data generated by users together with identity data; data reflecting users' behavior, collected from daily activities, network records and app records; and characteristic index data derived by algorithms. These data bring immeasurable business value to enterprises and efficient, convenient services to people, but they may compromise sensitive and private information as they flow (Guo et al., 2021; Ha and Dang, 2022). In this regard, the academic community has conducted several targeted studies.…”
Section: Introduction
confidence: 99%