2021
DOI: 10.48550/arxiv.2106.03328
Preprint
Securing Secure Aggregation: Mitigating Multi-Round Privacy Leakage in Federated Learning

Abstract: Secure aggregation is a critical component in federated learning, which enables the server to learn the aggregate model of the users without observing their local models. Conventionally, secure aggregation algorithms focus only on ensuring the privacy of individual users in a single training round. We contend that such designs can lead to significant privacy leakages over multiple training rounds, due to partial user selection/participation at each round of federated learning. In fact, we show that the convent…

Cited by 9 publications
(26 citation statements)
References 24 publications
“…For example, FedBuff [90] and LightSecAgg [91] allow asynchronous aggregation, whose security can be enhanced by integrating existing PPAgg protocols. In a recent work [92], the authors point out that even with the aforementioned privacy-preserving aggregation protocols, multi-round FL training may lead to severe information leakage due to dynamic user participation. As shown in Figure 3, users u1, u2, u3 participate in round t, and users u1, u2 participate in round t + 1.…”
Section: Masking-based Aggregation
confidence: 99%
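The masking-based aggregation this section refers to can be sketched as follows. This is our own minimal illustration of pairwise additive masking (in the spirit of SecAgg-style protocols), not code from any of the cited works; the function name and structure are hypothetical.

```python
import random

def masked_updates(models, seed=0):
    """Pairwise additive masking sketch: for each user pair (i, j),
    user i adds a shared random mask and user j subtracts the same
    mask, so every mask cancels in the aggregate sum."""
    rng = random.Random(seed)
    n = len(models)
    masked = [list(m) for m in models]
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(len(masked[i])):
                m_ij = rng.gauss(0.0, 1.0)  # mask shared by users i and j
                masked[i][k] += m_ij
                masked[j][k] -= m_ij
    return masked

# Three toy user models; the server only ever sees the masked updates.
models = [[1.0] * 4, [2.0] * 4, [3.0] * 4]
masked = masked_updates(models)
aggregate = [sum(vals) for vals in zip(*masked)]
# aggregate recovers the true per-coordinate sum (6.0, up to float error),
# while each masked[i] on its own looks like random noise.
```

This illustrates why single-round privacy holds: any individual masked update is statistically hidden, yet the sum is exact. The multi-round leakage discussed above arises despite this per-round guarantee.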
“…There are protection schemes for secure aggregation, such as cryptographic solutions [33], [34] and trusted execution environments [35], [36]. However, cryptographic solutions incur significant performance overhead and do not scale to systems with many edge devices.…”
Section: Mitigation Possibilities
confidence: 99%
“…While the privacy of the users is protected in each single round, the server can reconstruct an individual model from the aggregated models over multiple rounds of aggregation. Specifically, as a result of the client sampling strategy and user dropouts, the server may be able to recover an individual model by exploiting the history of the aggregate models [205], [206]. This problem was studied for the first time in [206]…”
Section: Secure Model Aggregation in Federated Learning
confidence: 99%
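The reconstruction attack described above can be made concrete with a toy numeric example. This is our own illustration, not code from [205] or [206]; the user names and values are made up, and it assumes the users' local models change negligibly between the two rounds.

```python
# Toy multi-round leakage: the server never sees an individual model,
# only per-round aggregates, but participant sets differ across rounds.
u1, u2, u3 = [1.0, 1.0], [2.0, 0.0], [0.5, 3.0]

# Round t: users u1, u2, u3 participate; server observes the aggregate.
agg_t = [a + b + c for a, b, c in zip(u1, u2, u3)]
# Round t+1: only u1 and u2 participate (and their models barely change).
agg_t1 = [a + b for a, b in zip(u1, u2)]

# Differencing the two aggregates isolates u3's individual model,
# even though each round's aggregation was "secure" in isolation.
leaked_u3 = [x - y for x, y in zip(agg_t, agg_t1)]
# leaked_u3 == [0.5, 3.0], i.e. exactly u3
```

This is why per-round secure aggregation alone is insufficient: the leakage comes from the history of aggregates under dynamic participation, which is precisely what the paper's multi-round privacy notion addresses.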