2020
DOI: 10.48550/arxiv.2002.09609
Preprint

Private Stochastic Convex Optimization: Efficient Algorithms for Non-smooth Objectives

Abstract: In this paper, we revisit the problem of private stochastic convex optimization. We propose an algorithm, based on noisy mirror descent, which achieves optimal rates up to a logarithmic factor, both in terms of statistical complexity and number of queries to a first-order stochastic oracle. Unlike prior work, we do not require Lipschitz continuity of stochastic gradients to achieve optimal rates. Our algorithm generalizes beyond the Euclidean setting and yields anytime utility and privacy guarantees.
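To make the abstract's approach concrete, here is a minimal sketch of noisy stochastic mirror descent, not the paper's algorithm: it assumes Euclidean geometry (so the mirror step reduces to a noisy projected SGD step), a hypothetical stochastic subgradient oracle `grad_oracle`, and a heuristic Gaussian noise calibration rather than the paper's privacy analysis.

```python
import numpy as np

def noisy_mirror_descent(grad_oracle, n, d, radius, lipschitz, eps, delta, rng=None):
    """Toy noisy stochastic mirror descent for (eps, delta)-DP SCO.

    Euclidean mirror map, so each step is noisy projected SGD; the paper's
    algorithm covers general mirror maps and uses its own noise calibration.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Heuristic Gaussian noise scale for L-Lipschitz losses over one pass of n samples.
    sigma = lipschitz * np.sqrt(8.0 * np.log(1.25 / delta)) / (n * eps)
    eta = radius / (lipschitz * np.sqrt(n))   # step size targeting O(1/sqrt(n)) excess risk
    w = np.zeros(d)
    avg = np.zeros(d)
    for t in range(n):
        g = grad_oracle(w, t)                        # stochastic subgradient on sample t
        g = g + rng.normal(scale=sigma, size=d)      # privatize the gradient with Gaussian noise
        w = w - eta * g                              # mirror/gradient step (Euclidean map)
        nrm = np.linalg.norm(w)
        if nrm > radius:                             # project back onto the feasible ball
            w = w * (radius / nrm)
        avg += (w - avg) / (t + 1)                   # running average of iterates
    return avg
```

Returning the averaged iterate matches the standard mirror-descent analysis for non-smooth objectives; no Lipschitz continuity of the stochastic gradients is assumed, only of the losses themselves.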

Cited by 2 publications (3 citation statements) | References 12 publications

Citation statements:
“…The gradient descent-based output perturbation method of [ZZMW17] achieves with probability 1 − γ, excess population loss of O(R L^{1/2} (βd)^{1/4} / (nεγ)^{1/2}), but they do not provide a guarantee for expected excess population loss. For δ > 0, [BFTGT19], [FKT20], and [AMU20] all nearly achieve the optimal (by [BFTGT19]) expected excess population loss bound O(LR…”
Section: Smooth Convex Lipschitz Functions
confidence: 99%
“…More recently, several works have also considered private stochastic convex optimization (SCO), where the goal is to minimize the expected population loss F(w, X) = E_{x∼D}[f(w, x)], given access to n i.i.d. samples [FKT20, AMU20]. However, the algorithms in these works are only differentially private for δ > 0, which, as discussed earlier, provides substantially weaker privacy guarantees.…”
confidence: 99%
“…However, the convergence rates of these differentially private (DP) versions of SGD suffer in most regularized optimization problems, particularly when working with non-smooth regularizers, such as the widely used ℓ1 penalty for managing sparse, high-dimensional settings (Shamir and Zhang, 2013). Recently, DP variants of mirror descent and Frank-Wolfe algorithms have been developed to address this problem (Wang et al., 2018; Arora et al., 2020; Asi et al., 2021; Kulkarni et al., 2021). Additionally, Gopi et al. (2022) proposed a modification to the exponential mechanism, which allows for the optimal empirical and population risk in private solving of non-smooth objective functions.…”
Section: Introduction
confidence: 99%
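The private Frank-Wolfe idea mentioned in the citation above can be illustrated with a short sketch over the ℓ1 ball. This is an assumption-laden toy version, not any of the cited algorithms: it adds Gaussian noise to the full gradient and uses the classic 2/(t+2) step size, whereas the cited works calibrate their mechanisms (for example, private selection among the ℓ1-ball vertices) to obtain their guarantees; `grad_oracle` is a hypothetical stochastic gradient oracle.

```python
import numpy as np

def dp_frank_wolfe_l1(grad_oracle, n, d, l1_radius, lip_inf, eps, delta, rng=None):
    """Toy private Frank-Wolfe over the l1 ball of radius l1_radius.

    Gaussian noise on the gradient is a simplification; the cited private
    Frank-Wolfe methods use carefully calibrated vertex-selection mechanisms.
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma = lip_inf * np.sqrt(8.0 * np.log(1.25 / delta)) / (n * eps)  # heuristic noise scale
    w = np.zeros(d)
    for t in range(n):
        g = grad_oracle(w, t) + rng.normal(scale=sigma, size=d)  # noisy stochastic gradient
        i = int(np.argmax(np.abs(g)))                            # coordinate with largest |g_i|
        s = np.zeros(d)
        s[i] = -l1_radius * np.sign(g[i])                        # l1-ball vertex minimizing <g, s>
        gamma = 2.0 / (t + 2)                                    # classic Frank-Wolfe step size
        w = (1.0 - gamma) * w + gamma * s                        # convex combination stays feasible
    return w
```

The Frank-Wolfe update keeps iterates exactly ℓ1-feasible without projection, which is why it is a natural fit for the sparse, ℓ1-constrained settings the citation discusses.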