Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence 2019
DOI: 10.24963/ijcai.2019/660

Heterogeneous Gaussian Mechanism: Preserving Differential Privacy in Deep Learning with Provable Robustness

Abstract: In this paper, we propose a novel Heterogeneous Gaussian Mechanism (HGM) to preserve differential privacy in deep neural networks, with provable robustness against adversarial examples. We first relax the constraint of the privacy budget in the traditional Gaussian Mechanism from (0, 1] to (0, ∞), with a new bound of the noise scale to preserve differential privacy. The noise in our mechanism can be arbitrarily redistributed, offering a distinctive ability to address the trade-off between model utility and privacy loss. […]
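As background for the bound the abstract relaxes, the sketch below implements the classical Gaussian mechanism with the standard noise scale σ = √(2 ln(1.25/δ)) · Δ₂f / ε from Dwork and Roth (2014), which is only valid for ε ∈ (0, 1); the paper's new bound for ε ∈ (0, ∞) is not reproduced here. The heterogeneous_noise variant is a hypothetical illustration of what "arbitrarily redistributed" noise can look like: the weight vector and its mean-one normalization are assumptions for illustration, not the paper's actual HGM.

```python
import numpy as np

def gaussian_mechanism(f_x, l2_sensitivity, eps, delta, rng=None):
    """Classical Gaussian mechanism (Dwork & Roth 2014).

    The sigma bound below is only valid for eps in (0, 1); the HGM
    paper extends the admissible range to (0, inf) with a new bound
    that is not reproduced here.
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * l2_sensitivity / eps
    return f_x + rng.normal(0.0, sigma, size=np.shape(f_x))

def heterogeneous_noise(f_x, l2_sensitivity, eps, delta, weights, rng=None):
    """Hypothetical sketch of 'redistributing' the noise across
    coordinates: per-coordinate scales sigma_i = sigma * sqrt(w_i),
    with weights normalized to mean 1 so total variance is preserved.
    This only illustrates the idea; the paper's HGM bound differs.
    f_x and weights must have the same shape.
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * l2_sensitivity / eps
    w = np.asarray(weights, dtype=float)
    w = w * (w.size / w.sum())  # normalize so mean(w) == 1
    return f_x + rng.normal(0.0, sigma * np.sqrt(w))
```

A larger weight w_i puts more noise on coordinate i, so coordinates that matter less for utility can absorb more of the privacy noise, which is the trade-off the abstract describes.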

Cited by 41 publications (20 citation statements) · References 27 publications

“…PixelDP cannot effectively preserve privacy in the training set as the input changes are restricted to "a small number of pixels" [74]. Phan et al. [125] proposed a heterogeneous Gaussian Mechanism (HGM) that can preserve DP in training data and provide provable robustness against adversarial examples at the same time. They further proposed the stochastic batch mechanism in [123], which retains higher model utility and is more scalable to large DNNs and datasets than HGM.…”
Section: Defending ML-based Privacy Attack: Adversarial Examples
confidence: 99%
“…Noise could be added to the weights in each iteration of training. However, this method might affect convergence, since the output of the algorithm is computed based on the weights. Hence, if noise is added to each weight, the total amount of noise might become large enough to make the loss never converge.…”
[Table fragment spilled into this snippet, column headers unrecoverable: …[67]: strong / low; Gradient [65], [68], [69]: strong / low; Weights [70], [71]: very strong / very high; Classes [72], [73], [74]: very strong / low.]
Section: Differential Privacy in Deep Neural Network
confidence: 99%
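To make the convergence point concrete, here is a hedged sketch of the gradient-perturbation alternative the snippet's "Gradient" row refers to, in the style of DP-SGD (Abadi et al. 2016); the function name and parameters are illustrative, not from the cited works. Clipping bounds each example's contribution, so the per-step noise scale stays fixed regardless of the weights' magnitude, which is why convergence is easier to retain than with weight perturbation.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr, clip_norm,
                noise_multiplier, rng):
    """One gradient-perturbation step, DP-SGD style (sketch):
    clip each per-example gradient in L2 norm, average, then add
    Gaussian noise scaled to the clipping norm. Noise enters the
    gradient, not the weights, so its magnitude per step is bounded
    by noise_multiplier * clip_norm.
    """
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clipped),
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)
```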
“…Phan et al. [71] proposed a heterogeneous Gaussian mechanism to preserve privacy in deep neural networks. Unlike a regular Gaussian mechanism, this heterogeneous Gaussian mechanism can arbitrarily redistribute noise from the first hidden layer and the gradient of the model to achieve an ideal trade-off between model utility and privacy loss.…”
Section: Differential Privacy in Deep Neural Network
confidence: 99%
“…Laplace mechanism (Dwork and Roth 2014) and Extended Gaussian mechanism (Phan et al. 2019) are common techniques for achieving differential privacy, both of which add random noise calibrated to the sensitivity of the query function q.…”
Section: Definition 2 ((ε, δ)-Differential Privacy): A Randomized…
confidence: 99%
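As a reference point for that definition, a minimal sketch of the Laplace mechanism follows. The noise scale Δ₁q / ε follows Dwork and Roth (2014); the function name and NumPy-based interface are assumptions for illustration.

```python
import numpy as np

def laplace_mechanism(q_x, l1_sensitivity, eps, rng=None):
    """Laplace mechanism (Dwork & Roth 2014): add Lap(0, Delta_1 q / eps)
    noise to the query answer q(x) to achieve (eps, 0)-differential
    privacy, where l1_sensitivity is the L1 sensitivity of q.
    """
    rng = np.random.default_rng() if rng is None else rng
    return q_x + rng.laplace(0.0, l1_sensitivity / eps, size=np.shape(q_x))
```

For example, a counting query has L1 sensitivity 1, so laplace_mechanism(count, 1.0, eps) releases the count with noise of scale 1/ε.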