2020
DOI: 10.48550/arxiv.2012.03893
Preprint

Sample-efficient proper PAC learning with approximate differential privacy

Abstract: In this paper we prove that the sample complexity of properly learning a class of Littlestone dimension d with approximate differential privacy is Õ(d^6), ignoring privacy and accuracy parameters. This result answers a question of Bun et al. (FOCS 2020) by improving upon their upper bound of 2^{O(d)} on the sample complexity. Prior to our work, finiteness of the sample complexity for privately learning a class of finite Littlestone dimension was only known for improper private learners, and the fact that our le…
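For context, the quantitative improvement stated in the abstract can be summarized as follows (a paraphrase; the dependence on the privacy and accuracy parameters is suppressed, as in the abstract itself, and m(d) is shorthand introduced here for the proper private sample complexity, not notation taken from the paper):

  Bun et al. (FOCS 2020):  m(d) ≤ 2^{O(d)}
  This paper:              m(d) ≤ Õ(d^6)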

Cited by 3 publications (6 citation statements)
References 17 publications
“…We obtain, though, a strong deterioration in terms of the Littlestone dimension: sublinear dependence vs. double-exponential dependence. As discussed, Ghazi et al. [20] improved the dependence in the batch case to polynomial, and it remains an open question whether a similar improvement is applicable in the online case. We next turn to the adversarial case: Theorem 4.2 (Private Adaptive online-learning).…”
Section: Results
Mentioning confidence: 99%
“…In addition to [11,20], which establish private learning algorithms for classes with finite Littlestone dimension in the i.i.d. (offline) setting, there has been an extensive line of work on private learning algorithms in the offline setting: [29,7,5,19] study the complexity of private learning with pure differential privacy, [26,9,10,4] study the sample complexity of privately learning thresholds, and [27,28,6] study the sample complexity of privately learning halfspaces.…”
Section: Related Work
Mentioning confidence: 99%
“…2. In a very recent work, Ghazi et al. [GGKM20] improved upon the result of Bun et al. [BLM20] by showing that a polynomial blow-up in sample complexity suffices in going from online learning to differential privacy, which is exponentially better than the result of Bun et al. [BLM20].…”
Section: A Better Bound on sfat(·)
Mentioning confidence: 96%