2023
DOI: 10.48550/arxiv.2301.06412
Preprint
Enforcing Privacy in Distributed Learning with Performance Guarantees

Abstract: We study the privatization of distributed learning and optimization strategies. We focus on differential privacy schemes and study their effect on performance. We show that the popular additive random perturbation scheme degrades performance because it is not well-tuned to the graph structure. For this reason, we exploit two alternative graph-homomorphic constructions and show that they improve performance while guaranteeing privacy. Moreover, contrary to most earlier studies, the gradient of the risks is not …

Cited by 1 publication (1 citation statement)
References 26 publications (47 reference statements)
"…The privacy-accuracy tradeoff can be improved by perturbing the shared information using correlated noise sequences with decaying variances [19]–[21]. In [23], a graph topology-aware noise injection-based distributed learning strategy is introduced. It effectively cancels out the added noise during local aggregation steps, thereby enhancing the performance and privacy of the distributed learning algorithm.…"
Section: Introduction
Confidence: 99%
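The cancellation idea in the citation statement — injecting perturbations that vanish under aggregation, unlike i.i.d. additive noise — can be illustrated with a minimal sketch. This is not the paper's actual graph-homomorphic construction; it is a simplified, hypothetical example assuming a fully connected averaging step, where each agent's noise is re-centered so that the perturbations sum to zero across the network:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 5                        # number of agents (assumed for illustration)
x = rng.normal(size=K)       # local states to be averaged

# Baseline: i.i.d. additive perturbation. Its sample mean is nonzero,
# so it biases the aggregated value and degrades performance.
noise_iid = rng.normal(scale=1.0, size=K)

# Sketch of a cancellation-style perturbation: re-center the noise so
# its entries sum to zero. Each agent still shares a perturbed state,
# but the perturbations cancel exactly under global averaging.
raw = rng.normal(scale=1.0, size=K)
noise_zero_sum = raw - raw.mean()     # sum of entries is zero by construction

true_avg = x.mean()
avg_iid = (x + noise_iid).mean()      # shifted by noise_iid.mean()
avg_homo = (x + noise_zero_sum).mean()

# The zero-sum perturbation leaves the aggregate untouched.
assert np.isclose(avg_homo, true_avg)
```

In a real distributed setting the noise would have to be coordinated locally over the graph (e.g., per-edge antisymmetric terms) rather than globally re-centered as done here; this sketch only demonstrates why cancellation preserves the aggregate while still masking each individual shared value.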