2019
DOI: 10.1007/978-3-030-37228-6_13

Stochastic ADMM Based Distributed Machine Learning with Differential Privacy

Cited by 20 publications (15 citation statements)
References 14 publications
“…The above inequality holds for all γ_i, thus it also holds for γ_i ∈ {γ_i : γ_i ≤ β}. By letting γ_i be the optimum, we have […], which leads to the result:…”
mentioning
confidence: 90%
See 1 more Smart Citation
“…The above inequality holds for all γ i , thus it also holds for γ i ∈ {γ i : γ i ≤ β}. By letting γ i be the optimum, we have , which leads 19 to the result:…”
mentioning
confidence: 90%
“…Differential privacy is a widely used privacy definition [14]-[16] and can be guaranteed in ADMM by adding noise to the exchanged messages. However, in existing studies on ADMM-based distributed learning with differential privacy [1], [2], [17]-[19], noise addition disrupts the learning process and severely degrades the performance of the trained model, especially when large noise is needed to provide strong privacy protection. Besides, their privacy-preserving algorithms apply only to learning problems with both smoothness and strong convexity assumptions on the objective functions.…”
Section: Introduction
mentioning
confidence: 99%
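The mechanism described above (guaranteeing differential privacy in ADMM by perturbing the exchanged messages) can be sketched on a toy consensus problem. This is a minimal illustration, not the algorithm of any cited paper: the problem (estimating a mean via consensus ADMM), the noise level `sigma`, and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: each of N agents holds a private scalar observation,
# and the network runs consensus ADMM to estimate the global mean.
N, T = 5, 50
rho = 1.0        # ADMM penalty parameter
sigma = 0.1      # std of the Gaussian noise added to each shared message
data = rng.normal(3.0, 1.0, size=N)   # private local observations

x = np.zeros(N)  # local primal variables
z = 0.0          # global consensus variable
u = np.zeros(N)  # scaled dual variables

for t in range(T):
    # Local primal update: argmin_x (x - a_i)^2 / 2 + (rho/2)(x - z + u_i)^2
    x = (data + rho * (z - u)) / (1.0 + rho)
    # Privacy step: each agent perturbs its message before releasing it
    x_shared = x + rng.normal(0.0, sigma, size=N)
    # The aggregator only ever sees the noisy messages
    z = np.mean(x_shared + u)
    # Dual update (driven by the released, noisy primal variables)
    u = u + x_shared - z

print(f"true mean = {data.mean():.3f}, DP-ADMM estimate = {z:.3f}")
```

The sketch also shows the accuracy cost the quoted passage complains about: raising `sigma` for stronger privacy visibly degrades the final estimate.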
“…35,36 Gao and Ma 37 proposed an algorithm which combines reinforcement learning with differential privacy for processing dynamic data. Ding et al. 38 put forward an alternating direction method based on differential privacy and reinforcement learning. Cheng et al. 39 proposed a novel stochastic gradient descent algorithm with deep learning and differential privacy.…”
Section: Differential Privacy
mentioning
confidence: 99%
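The differentially private stochastic gradient descent approach mentioned above is conventionally built from per-example gradient clipping plus Gaussian noise. A minimal sketch of that generic mechanism, assuming an illustrative 1-D regression task (the clipping norm `C`, noise multiplier `sigma`, and data are not taken from any cited work):

```python
import numpy as np

rng = np.random.default_rng(2)

# Generic DP-SGD sketch: clip each per-example gradient, then add
# Gaussian noise scaled to the clipping norm before the update.
C, sigma, eta, T = 1.0, 0.5, 0.05, 200

# Toy data for a 1-D linear model y ≈ w * x with true slope 2.0
xs = rng.normal(size=100)
ys = 2.0 * xs + rng.normal(scale=0.1, size=100)

w = 0.0
for t in range(T):
    i = rng.integers(len(xs))
    g = 2.0 * (w * xs[i] - ys[i]) * xs[i]    # per-example gradient
    g = g / max(1.0, abs(g) / C)             # clip to norm at most C
    g_noisy = g + rng.normal(0.0, sigma * C) # Gaussian mechanism
    w -= eta * g_noisy

print(f"learned slope: {w:.2f}")  # lands near the true slope 2.0
```

Clipping bounds each example's influence on the update, which is what makes the added Gaussian noise sufficient for a differential-privacy guarantee.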
“…However, in both algorithms, the privacy leakage of an agent is bounded only at a single iteration, and an adversary might exploit knowledge available from all iterations to infer sensitive information. This shortcoming is mitigated in [15]-[18]. The works in [15], [16] develop ADMM-based differentially private algorithms with improved accuracy.…”
Section: A Related Work
mentioning
confidence: 99%
“…The work in [17] employs the ADMM to develop a distributed algorithm where the primal variable is perturbed by adding Gaussian noise with diminishing variance to ensure zero-concentrated differential privacy, enabling higher accuracy than the common (ε, δ)-differential privacy. The work in [18] develops a stochastic ADMM-based distributed algorithm that further improves accuracy while ensuring differential privacy. The authors of [19]-[21] propose differentially private distributed algorithms that utilize the projected-gradient-descent method for handling constraints.…”
Section: A Related Work
mentioning
confidence: 99%