2021
DOI: 10.48550/arxiv.2111.01432
Preprint

Practical and Light-weight Secure Aggregation for Federated Submodel Learning

Abstract: Recently, Niu et al. [37] introduced a new variant of Federated Learning (FL), called Federated Submodel Learning (FSL). Different from traditional FL, each client locally trains a submodel (e.g., retrieved from the servers) based on its private data and uploads a submodel of its choice to the servers. Then all clients aggregate all their submodels and finish the iteration. Inevitably, FSL introduces two privacy-preserving computation tasks, i.e., Private Submodel Retrieval (PSR) and Secure Submodel Aggregation […]

Cited by 3 publications (5 citation statements) · References 41 publications
“…In [84], the gradient sparsification technique is adopted to compress the users' models. Besides, SecAgg-based PPAgg protocols for federated submodel learning can be found in [85] and [86]. However, the above-mentioned works rely on the SecAgg scheme for aggregation and thus still involve high communication overheads when it comes to large-scale FL systems.…”
Section: Masking-based Aggregation
confidence: 99%
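The SecAgg scheme these excerpts refer to builds on pairwise masking: every pair of users agrees on a shared seed, expands it into a mask, and one user adds the mask while the other subtracts it, so all masks cancel once every contribution is summed. The following minimal Python sketch illustrates only that cancellation trick; the user ids, vector length, modulus, and seed handling are arbitrary choices for the example, and the real protocol additionally relies on key agreement and secret sharing to handle dropouts.

```python
# Minimal sketch of the pairwise-masking idea behind SecAgg-style aggregation.
# Illustrative only: the cited protocols add key agreement, secret sharing, and
# dropout recovery on top of this core cancellation trick.
import random

MOD = 2 ** 32  # all arithmetic is carried out modulo a fixed value


def masked_update(user_id, update, users, pair_seeds):
    """Add one pairwise mask per peer; the masks cancel once every user is summed."""
    masked = list(update)
    for peer in users:
        if peer == user_id:
            continue
        rng = random.Random(pair_seeds[frozenset((user_id, peer))])
        for i in range(len(masked)):
            m = rng.randrange(MOD)
            # the user with the smaller id adds the mask, the other subtracts it
            masked[i] = (masked[i] + (m if user_id < peer else -m)) % MOD
    return masked


users = [0, 1, 2]
pair_seeds = {frozenset(p): random.randrange(2 ** 30)
              for p in [(0, 1), (0, 2), (1, 2)]}
updates = {u: [random.randrange(100) for _ in range(4)] for u in users}

aggregate = [0] * 4
for u in users:
    for i, v in enumerate(masked_update(u, updates[u], users, pair_seeds)):
        aggregate[i] = (aggregate[i] + v) % MOD

plain_sum = [sum(updates[u][i] for u in users) for i in range(4)]
assert aggregate == plain_sum  # the pairwise masks cancel in the aggregate
```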
“…The HE-based aggregation in FL is more straightforward than masking-based aggregation. [Table from the citing survey comparing the reviewed protocols (model quantization/TEE [80][81][83], model quantization [84], coding approach [82], model sparsification [85][86], submodel aggregation [87] with overhead determined by the size of submodels, correctness guarantees against semi-honest/malicious parties [112][113], and chain-structure approaches) omitted.] To aggregate the sum of users' locally trained models in an FL round, the users need to encrypt their models and send them to the central server. Then the central server adds the received encrypted models together, relying on the additive homomorphic property of the underlying cryptosystem, and the result can be decrypted to obtain the global model for that FL round (see Figure 5).…”
Section: HE-based Aggregation
confidence: 99%
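The flow described in that excerpt (users encrypt, the server adds ciphertexts, the key holder decrypts the sum) can be illustrated with additively homomorphic Paillier encryption. The sketch below uses the python-paillier (phe) package and assumes a single key pair whose private key is held outside the aggregating server; it is a generic illustration of HE-based aggregation under those assumptions, not the scheme of the cited works.

```python
# Illustrative HE-based aggregation with Paillier (python-paillier / `phe`).
# Assumption: one key pair, with the private key held by a party other than
# the aggregating server; the cited surveys discuss more elaborate key setups.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Each user encrypts its locally trained model (here, a short weight vector).
user_models = [
    [0.12, -0.40, 0.33],
    [0.05, 0.21, -0.10],
    [-0.08, 0.02, 0.27],
]
encrypted_models = [[public_key.encrypt(w) for w in m] for m in user_models]

# The server adds ciphertexts coordinate-wise; by the additive homomorphic
# property, this corresponds to adding the underlying plaintext weights.
encrypted_sum = encrypted_models[0]
for enc in encrypted_models[1:]:
    encrypted_sum = [a + b for a, b in zip(encrypted_sum, enc)]

# The key holder decrypts the aggregate; dividing by the number of users
# would give the averaged global model for this FL round.
global_sum = [private_key.decrypt(c) for c in encrypted_sum]
print(global_sum)  # approximately the coordinate-wise sum of user_models
```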
“…In [79], the gradient sparsification technique is adopted to compress the users' models. Besides, SecAgg-based PPAgg protocols for federated submodel learning can be found in [80] and [81]. However, the above-mentioned works rely on the SecAgg scheme for aggregation, and thus still involve high communication overheads when it comes to large-scale FL systems.…”
Section: Masking-based Aggregation
confidence: 99%
“…TEE in [74], pseudo-random functions in [103], a MAC-like technique in [104], a homomorphic hash in [106], zero-knowledge proofs in [107], and a commitment scheme in [105] are deployed to guarantee that the server correctly aggregates the sum from FL users. [Table from the citing survey (model quantization/TEE [75][76][78], model quantization [79], coding approach [77], model sparsification [80][81], submodel aggregation [82] with overhead determined by the size of submodels) omitted.] So far, we have reviewed masking-based aggregation protocols to protect users' model privacy and global model privacy. We should note that aggregation protocols based on pairwise masking allow efficient unmasking with dropped users, and thus are suitable for the cross-device setting where FL users are mobile IoT devices that may drop out of the system at any time.…”
Section: Masking-based Aggregation
confidence: 99%
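One way to read the correctness guarantees mentioned in that excerpt is through a homomorphic hash: with H(x) = g^x mod p, we have H(x1 + x2) = H(x1) · H(x2) mod p, so the product of the users' published hashes must equal the hash of the aggregate the server returns. The toy sketch below illustrates only this algebraic check on scalars, with insecure parameters chosen for the example; it is not the vector-hash construction used in the cited work [106].

```python
# Toy homomorphic-hash style correctness check for an aggregated sum.
# H(x) = G**x mod P satisfies H(x1 + x2) == H(x1) * H(x2) mod P, so a verifier
# can compare the product of published hashes with the hash of the claimed sum.
# Parameters are illustrative and far too small for any real security.
P = 2_147_483_647   # a prime modulus (toy size)
G = 5               # base for the exponentiation (toy choice)


def h(x: int) -> int:
    """Exponentiation-based hash of a non-negative scalar update."""
    return pow(G, x % (P - 1), P)


user_updates = [17, 42, 99]                       # scalar stand-ins for model updates
published_hashes = [h(x) for x in user_updates]   # each user publishes its hash

claimed_sum = sum(user_updates)                   # what an honest server reports

check = 1
for ph in published_hashes:
    check = (check * ph) % P
assert check == h(claimed_sum)                    # matches when the reported sum is correct
```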
“…In [51], the gradient sparsification technique is adopted to compress the users' models. Besides, SecAgg-based PPAgg protocols for federated submodel learning can be found in [52] and [53]. However, the above-mentioned works rely on the SecAgg scheme for aggregation and thus still involve high communication overheads when it comes to large-scale FL systems.…”
Section: Secure Distributed Machine Learning
confidence: 99%