2024
DOI: 10.1109/tbdata.2023.3239116
Improved Gradient Inversion Attacks and Defenses in Federated Learning

Cited by 28 publications (22 citation statements)
References 20 publications
“…The virtual data are learned with an optimization algorithm such that the gradient obtained by backpropagation on the common model approximates the real gradient; the training data and labels are then recovered after several rounds of iterative optimization. This is currently one of the most active topics in the study of variants of DLG-based methods [9,[52][53][54].…”
Section: Gradient Update-based Data Leakage (mentioning)
confidence: 99%
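The gradient-matching idea in the quoted statement can be sketched numerically. Everything below is an illustrative assumption, not the cited authors' setup: a single linear unit stands in for the shared model, and central differences stand in for autograd.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the shared model: a single linear unit y = w @ x
# with squared loss (y - t)^2. Names and sizes are illustrative.
w = rng.normal(size=3)

def grad_wrt_w(x, t):
    """Gradient of the loss (w @ x - t)^2 with respect to the weights w."""
    return 2.0 * (w @ x - t) * x

# The victim's private example yields the gradient the attacker observes.
x_real, t_real = rng.normal(size=3), 1.5
g_real = grad_wrt_w(x_real, t_real)

def match_loss(params):
    """Squared distance between the dummy gradient and the observed one."""
    x_d, t_d = params[:3], params[3]
    return float(np.sum((grad_wrt_w(x_d, t_d) - g_real) ** 2))

def num_grad(f, p, eps=1e-5):
    """Central-difference gradient, used in place of autograd in this sketch."""
    g = np.zeros_like(p)
    for i in range(p.size):
        step = np.zeros_like(p)
        step[i] = eps
        g[i] = (f(p + step) - f(p - step)) / (2 * eps)
    return g

# Iteratively optimize the dummy data and label so that the gradient they
# induce matches the observed real gradient, as the quoted passage describes.
params = rng.normal(size=4)
loss0, lr = match_loss(params), 0.05
for _ in range(3000):
    candidate = params - lr * num_grad(match_loss, params)
    if match_loss(candidate) < match_loss(params):
        params = candidate   # accept the step
    else:
        lr *= 0.5            # backtrack when the step overshoots

print(loss0, "->", match_loss(params))
```

With one linear layer the gradient only pins down the input up to a scale ambiguity, so the sketch checks that the gradient-matching loss collapses rather than that the exact private example is recovered; full DLG-style attacks optimize the same objective through a deep network with autograd.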
“…FL builds iteratively aggregated models by training distributed models across multiple data sources on local data, exchanging only model parameters or intermediate results, and thereby learning a shared target model. A substantial body of work on improved FL methods [2][3][4][5][6][7][8][9] has pursued a balance between data privacy protection and shared computation over data. Currently, more researchers are using cryptographic and differential-privacy methods to protect local gradients in the federated learning security problem [10,11].…”
Section: Introduction (mentioning)
confidence: 99%
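The parameter-exchange scheme described in the quoted statement can be illustrated with a minimal FedAvg-style aggregation step; the client parameters and local data sizes below are made-up values, not from the cited works.

```python
import numpy as np

# Hypothetical local updates: each client trains on its own data and shares
# only model parameters, never the raw data, as the quoted passage notes.
client_weights = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
client_sizes = [10, 30, 60]  # assumed number of local examples per client

# FedAvg-style aggregation: average parameters weighted by local data size.
total = sum(client_sizes)
global_weights = sum(n / total * wt for n, wt in zip(client_sizes, client_weights))

print(global_weights)  # prints the weighted average [4. 5.]
```

Weighting by local data size makes the aggregate equivalent to training on the pooled data for one averaging step, which is the usual design choice when client datasets differ in size.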
“…We choose the best-performing result of each method for comparison. The batch size is set to 128, which is considered sufficiently privacy-preserving in Federated Learning [21], since a small batch size would elevate the risk of gradient inversion attacks from malicious clients or an unreliable server.…”
Section: Tackling the Client Dropout (mentioning)
confidence: 99%
“…In the application of MR reconstruction using FL, Guo et al. [24] tried to address the domain-shift issue by iteratively aligning the latent features of a UNet [25] between the target site and the other client sites. However, their cross-site strategy requires the target client to share both the latent features and the network parameters with the other client sites in each communication round, which could raise data-privacy concerns [26], [27]. Moreover, the cross-site strategy requires communication among local clients involving their local data, which may contradict the purpose of FL.…”
Section: Introduction (mentioning)
confidence: 99%