2013 IEEE Global Conference on Signal and Information Processing
DOI: 10.1109/globalsip.2013.6736861
Stochastic gradient descent with differentially private updates

Abstract: Differential privacy is a recent framework for computation on sensitive data, which has shown considerable promise in the regime of large datasets. Stochastic gradient methods are a popular approach for learning in the data-rich regime because they are computationally tractable and scalable. In this paper, we derive differentially private versions of stochastic gradient descent, and test them empirically. Our results show that standard SGD experiences high variability due to differential privacy, but …
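The abstract describes perturbing SGD updates to satisfy differential privacy. As a rough illustration of one common formulation, here is a minimal Python sketch that clips each per-example gradient and adds Laplace noise to the averaged mini-batch gradient. The function and parameter names (dp_sgd, grad_fn, clip, update hyperparameters) are assumptions made for this sketch, not the paper's exact algorithm, and the cumulative privacy cost across iterations is not tracked.

```python
import numpy as np

def dp_sgd(X, y, grad_fn, epsilon, epochs=10, batch_size=64,
           lr=0.1, clip=1.0, seed=0):
    """Illustrative differentially private SGD.

    Each per-example gradient is clipped in L1 norm, so replacing one
    example changes the averaged mini-batch gradient by at most
    2*clip/batch_size in L1. Per-coordinate Laplace noise with scale
    2*clip/(epsilon*batch_size) then makes each single update
    epsilon-DP; composition across steps is omitted in this sketch.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        order = rng.permutation(n)
        for start in range(0, n, batch_size):
            batch = order[start:start + batch_size]
            g = np.zeros(d)
            for i in batch:
                gi = grad_fn(w, X[i], y[i])
                # Clip in L1 norm to bound each example's contribution.
                gi *= min(1.0, clip / (np.abs(gi).sum() + 1e-12))
                g += gi
            g /= len(batch)
            # Laplace noise calibrated to the clipped, averaged
            # gradient's L1 sensitivity (2*clip / batch size).
            noise = rng.laplace(scale=2.0 * clip / (epsilon * len(batch)),
                                size=d)
            w -= lr * (g + noise)
    return w

# Example use with a logistic-loss gradient (labels in {-1, +1}):
# grad = lambda w, x, yi: -yi * x / (1.0 + np.exp(yi * (w @ x)))
# w_priv = dp_sgd(X, y, grad, epsilon=1.0)
```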

Cited by 448 publications (388 citation statements: 3 supporting, 385 mentioning, 0 contrasting). References 10 publications.
“…The example above is typical of what can be expressed in our language, and many variants of machine learning techniques that rely on gradient descent (e.g., as in [Goodfellow et al. 2016], and commonly used in systems like TensorFlow) are in scope as well. For instance, there is no difficulty in expressing optimization with momentum, or differentially private stochastic gradient descent (e.g., [Abadi et al. 2016b; Song et al. 2013]). Probabilistic choice may be treated via random number generators, as is done in practice.…”
Section: Fig. 1 Typing Rules (citation type: mentioning)
confidence: 99%
“…Addressing the problems mentioned above, Stochastic Channel-Based Federated Learning (SCBF) realizes differential privacy preservation by protecting the two sources of potential privacy leakage in federated learning: the actual values of the gradients uploaded by local participants, and the mechanism by which those gradients are chosen. By setting a threshold to select gradient parameters channel-wise, the actual values uploaded to the server are stored in a sparse tensor produced from Stochastic Gradient Descent (SGD), a stochastic training process that has already been used in many privacy-preserving settings [8], [9]. Besides, each participant can independently choose the update rate for its model, making it hard to track which channels were selected for the update, especially when the models are trained individually on different datasets through stochastic procedures.…”
Section: B. Differential Privacy Preserving (citation type: mentioning)
confidence: 99%
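The excerpt above describes thresholding gradients channel-wise so that only a sparse tensor is uploaded. As a rough illustration only, the sketch below keeps the top fraction of channels by gradient magnitude; the function name, the top-k selection criterion, and the update_rate parameter are assumptions for this sketch, not the cited paper's actual mechanism.

```python
import numpy as np

def sparse_channel_update(grad, update_rate=0.1):
    """Keep only the fraction `update_rate` of gradient channels with
    the largest L2 norm and zero out the rest, so only a sparse tensor
    of gradient values needs to be uploaded. `grad` is assumed to be
    shaped (channels, ...)."""
    flat = grad.reshape(grad.shape[0], -1)
    scores = np.linalg.norm(flat, axis=1)       # one score per channel
    k = max(1, int(update_rate * grad.shape[0]))
    keep = np.argsort(scores)[-k:]              # indices of top-k channels
    sparse = np.zeros_like(grad)
    sparse[keep] = grad[keep]
    return sparse

# e.g. sparse_channel_update(np.random.randn(32, 3, 3), update_rate=0.25)
# returns a tensor in which only the 8 largest-norm channels are nonzero.
```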
“…Privacy is achieved in part by assuming that devices do not share raw data either with each other or with any external party. In addition, theoretical notions of privacy such as differential privacy can be incorporated in this framework as well [4], [6], [7]. In-place data processing could also provide better scalability for certain tasks in the limit of extremely large systems compared to cloud-based solutions by exploiting local resources and networks, as proposed e.g.…”
Section: Introduction (citation type: mentioning)
confidence: 99%