2020
DOI: 10.1007/978-3-030-59410-7_33

FedSel: Federated SGD Under Local Differential Privacy with Top-k Dimension Selection

Cited by 98 publications (56 citation statements); References 14 publications
“…In this case, the total domain size increases exponentially, which leads to huge computing costs and low data utility due to the “curse of dimensionality”. The second is protecting the high-dimensional parameters of learning models in machine learning, deep learning, or federated learning tasks [168]. In this case, the scale of the injected noise is proportional to the dimension, so heavier noise is injected, resulting in inaccurate models.…”
Section: Discussion and Future Directions
confidence: 99%
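To make the dimension dependency concrete, the sketch below shows the naive baseline both excerpts describe: splitting one budget ε evenly over d coordinates with the Laplace mechanism. All names and parameters are illustrative, not from the cited papers.

```python
import numpy as np

def naive_ldp_perturb(update, epsilon, sensitivity=1.0):
    """Naive baseline: split the total budget epsilon evenly across all d
    dimensions and add Laplace noise to each coordinate."""
    d = update.shape[0]
    eps_per_dim = epsilon / d            # per-dimension budget shrinks as d grows
    scale = sensitivity / eps_per_dim    # Laplace scale = d * sensitivity / epsilon
    return update + np.random.laplace(0.0, scale, size=d)

# Example: with d = 1_000_000 and epsilon = 1, each coordinate receives
# Laplace noise of scale 1e6, which swamps typical gradient values.
```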
“…However, increasing the dimension d causes the privacy budget to decay rapidly and the noise scale to grow, which leads to poor accuracy of the learned model when d is large. Thus, Liu et al. [168] proposed FedSel, which selects only the most important top-k dimensions while stabilizing the learning process. In addition, Sun et al. [169] proposed to mitigate the privacy degradation by splitting and shuffling, which reduces noise variance and improves accuracy.…”
Section: Machine Learning with LDP
confidence: 99%
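A minimal sketch of the top-k idea behind FedSel follows. It shows only the deterministic sparsification step; FedSel's actual dimension selection is randomized to satisfy LDP, and the function name is an assumption, not the paper's.

```python
import numpy as np

def topk_sparsify(gradient, k):
    """Keep only the k largest-magnitude coordinates, zeroing the rest, so the
    privacy budget is spent on k values instead of all d of them."""
    idx = np.argsort(np.abs(gradient))[-k:]   # indices of the top-k magnitudes
    sparse = np.zeros_like(gradient)
    sparse[idx] = gradient[idx]
    return sparse, idx

# topk_sparsify(np.array([0.02, -0.9, 0.15, 0.4]), k=2) keeps -0.9 and 0.4.
```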
“…However, the per-dimension privacy budget becomes extremely small for high-dimensional models, which results in a significant increase of noise. A recent work [37] proposed a two-stage LDP-FL framework, which splits the privacy budget into a dimension selection (DS) stage and a value perturbation (VP) stage. In the DS stage, the local update is sorted by absolute value and one “important” dimension is privately selected, either from the top-k dimensions or, in the fallback branch, by sampling a dimension j uniformly from {a ∈ {1, …, d} | a ∉ S_topk}; in the VP stage, the value of the selected dimension is perturbed.…”
Section: Training the Generative Model
confidence: 99%
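The two stages described above can be sketched as follows. This assumes a binary randomized-response split for the DS stage and a clipped Laplace mechanism for the VP stage; the actual mechanisms in [37] may differ, and all names (ds_stage, vp_stage, eps_ds, eps_vp, clip) are illustrative.

```python
import numpy as np

def ds_stage(update, k, eps_ds):
    """DS sketch: with probability p = e^eps_ds / (e^eps_ds + 1), report a
    uniformly random top-k dimension; otherwise fall back to a uniformly
    random dimension outside S_topk (the fallback branch quoted above)."""
    d = len(update)
    s_topk = set(np.argsort(np.abs(update))[-k:].tolist())
    p = np.exp(eps_ds) / (np.exp(eps_ds) + 1.0)
    if np.random.rand() < p:
        return int(np.random.choice(sorted(s_topk)))
    return int(np.random.choice([a for a in range(d) if a not in s_topk]))

def vp_stage(value, eps_vp, clip=1.0):
    """VP sketch: clip the selected value to [-clip, clip] (sensitivity
    2 * clip) and add Laplace noise calibrated to eps_vp."""
    v = float(np.clip(value, -clip, clip))
    return v + float(np.random.laplace(0.0, 2.0 * clip / eps_vp))
```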
“…Finally, a sparse local update is constructed and returned to the server. Although [37] mitigated the dimension-dependency problem by selecting only one “important” dimension, the privacy budget is still split between the two stages. In high-privacy scenarios (where the total privacy budget is small), each stage may therefore receive an insufficient budget, introducing large randomness.…”
Section: Training the Generative Model
confidence: 99%
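To see why splitting a small total budget hurts, here is a continuation of the previous sketch. It reuses ds_stage and vp_stage from the block above, and the even 50/50 split is an assumption, not the paper's choice.

```python
import numpy as np

def two_stage_report(update, k, eps_total, split=0.5):
    """End-to-end sketch: divide eps_total between the two stages (the even
    split is illustrative) and return a 1-sparse update for the server."""
    eps_ds, eps_vp = split * eps_total, (1.0 - split) * eps_total
    j = ds_stage(update, k, eps_ds)           # DS stage consumes eps_ds
    sparse = np.zeros_like(update)
    sparse[j] = vp_stage(update[j], eps_vp)   # VP stage consumes eps_vp
    return sparse

# With eps_total = 0.5, each stage gets 0.25: the DS selection probability is
# e^0.25 / (e^0.25 + 1) ~ 0.56, barely above a coin flip, and the VP Laplace
# scale is 2 / 0.25 = 8, which swamps a value clipped to [-1, 1].
```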