2022
DOI: 10.1109/tifs.2021.3139267

Privacy-Preserved Distributed Learning With Zeroth-Order Optimization

Abstract: We develop a privacy-preserving distributed algorithm to minimize a regularized empirical risk function when first-order information is not available and data is distributed over a multi-agent network. We employ a zeroth-order method to minimize the associated augmented Lagrangian function in the primal domain using the alternating direction method of multipliers (ADMM). We show that the proposed algorithm, named distributed zeroth-order ADMM (D-ZOA), has intrinsic privacy-preserving properties. Most exist…
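The zeroth-order idea in the abstract — optimizing using only function evaluations when gradients are unavailable — can be illustrated with a standard two-point random-direction gradient estimator. This is a generic sketch of zeroth-order optimization, not the paper's exact D-ZOA/ADMM update; the smoothing radius `mu`, direction count `num_dirs`, and the quadratic test problem are all illustrative choices.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, num_dirs=20, rng=None):
    """Two-point zeroth-order estimate of the gradient of f at x.

    Averages directional finite differences along random Gaussian
    directions; only evaluations of f are required, no derivatives.
    """
    rng = np.random.default_rng(rng)
    d = x.shape[0]
    g = np.zeros(d)
    for _ in range(num_dirs):
        u = rng.standard_normal(d)
        # (f(x + mu*u) - f(x)) / mu approximates the directional
        # derivative of f along u; weighting by u recovers a
        # (noisy) gradient estimate in expectation.
        g += (f(x + mu * u) - f(x)) / mu * u
    return g / num_dirs

# Example: minimize a ridge-regularized quadratic without gradients,
# mimicking a regularized empirical risk in miniature.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x + 0.05 * x @ x

rng = np.random.default_rng(0)
x = np.zeros(2)
for _ in range(500):
    x -= 0.05 * zo_gradient(f, x, rng=rng)
```

After 500 steps the iterate lands near the minimizer of the regularized quadratic, i.e. the solution of (A + 0.1·I)x = b, despite never evaluating a derivative. D-ZOA uses this kind of estimator inside an ADMM primal update rather than in plain gradient descent.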

Cited by 11 publications (5 citation statements) · References 44 publications
“…After using the FedAvg algorithm to handle data imbalance, the data is balanced, and privacy protection research can be conducted using the data of each participating user. There is a risk of data theft during the upload process [20]-[22]. To reduce data risk and enhance attack resistance and privacy protection, the study combines CNN and differential privacy to construct a DPAGD-CNN model with adaptive gradient descent.…”
Section: Construction of DPAGD-CNN Privacy Protection Model on the Gr…
confidence: 99%
“…Privacy-preserving distributed (stochastic) optimization methods have recently been studied, including the inherent privacy-protection method [39], the quantization-enabled privacy-protection method [40], and differential-privacy methods [41]-[47]. An important result — that convergence and differential privacy with a finite cumulative privacy budget ε hold simultaneously over an infinite number of iterations — was established for distributed optimization in [41], but it cannot be applied directly to distributed stochastic optimization.…”
Section: Introduction
confidence: 99%
“…Privacy-protecting distributed stochastic optimization algorithms requiring only one iteration were proposed based on a gradient-perturbation mechanism [39] and on a stochastic ternary quantization scheme [40], respectively. Two common methods have been proposed for differentially private distributed stochastic optimization, namely gradient perturbation [42]-[45] and output perturbation [42], [46], [47]. However, the existing methods induce a trade-off between privacy and accuracy.…”
Section: Introduction
confidence: 99%
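The gradient-perturbation mechanism mentioned in these citing works can be sketched as the standard Gaussian-mechanism building block: clip each gradient to a fixed L2 norm, then add Gaussian noise calibrated to that bound. This is a generic differential-privacy sketch, not any cited paper's exact algorithm; the clip bound `clip` and noise multiplier `sigma` are illustrative parameters that govern the privacy-accuracy trade-off the quote refers to.

```python
import numpy as np

def dp_perturbed_gradient(grad, clip=1.0, sigma=0.8, rng=None):
    """Clip a gradient to L2 norm `clip`, then add Gaussian noise.

    Clipping bounds each gradient's sensitivity; the added noise
    (std = sigma * clip per coordinate) is what yields differential
    privacy. Larger sigma means stronger privacy but noisier updates.
    """
    rng = np.random.default_rng(rng)
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip / max(norm, 1e-12))
    return clipped + rng.normal(0.0, sigma * clip, size=grad.shape)
```

In a distributed stochastic setting, each agent would apply this to its local stochastic gradient before sharing, so that the transmitted update — rather than the raw data — is what leaves the agent. Output perturbation, the other family mentioned above, instead adds noise to the final model.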