2020 IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS)
DOI: 10.1109/focs46700.2020.00044
An Equivalence Between Private Classification and Online Prediction

Cited by 36 publications (47 citation statements); references 26 publications.
“…Theorem 6.2 then carries over verbatim to our notion of quantum SQ learnability. This form of quantum differential privacy was recently studied by Arunachalam et al. [11], who were able to relate it to online learning, one-way communication complexity, and shadow tomography of quantum states, extending ideas of Bun et al. [18]. Since our notion of quantum SQ learnability implies quantum DP learnability, it also fits into their framework.…”
Section: Connections To Differential Privacy
confidence: 79%
“…Recent work by Arunachalam et al. [11] extends work by Bun et al. [18] to the quantum setting, and relates differentially private (DP) learning of quantum states to one-way communication, online learning, and other models. We show in Section 6 that our notion of SQ learnability implies their notion of DP learnability, and hence by their results also implies finite sequential fat-shattering dimension, online learnability, and "quantum stability".…”
Section: Related Work
confidence: 99%
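The implication chain asserted in this citation statement can be written compactly (the arrow notation is ours, not the authors'):

\[
\text{quantum SQ learnable} \;\Longrightarrow\; \text{quantum DP learnable} \;\Longrightarrow\; \text{finite sequential fat-shattering dimension} \;\Longrightarrow\; \text{online learnable}.
\]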
“…This assumption is quite strong, but not completely unreasonable. Specifically, Bun et al. [BLM20], in their seminal work that characterizes hypothesis classes learnable in the item-level DP setting, showed that it is possible to come up with a learner that outputs some hypothesis h* with probability 2^{-O(d)}, where d denotes the Littlestone dimension of the concept class. We may attempt to use this in the approach described above, but this does not work: in order to even see h* at all (with, say, a constant probability), we would need 2^{O(d)} users, which is prohibitive!…”
Section: Proof Overview
confidence: 99%
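The counting behind the quoted "2^{O(d)} users" is a standard amplification calculation; we spell it out for clarity (this derivation is ours, not quoted from [BLM20]). If each user's independent run of the learner outputs h* with probability p = 2^{-O(d)}, then over n users

\[
\Pr\left[\,h^* \text{ appears at least once}\,\right] \;=\; 1 - (1-p)^n \;\le\; np,
\]

so achieving even a constant success probability requires n = \Omega(1/p) = 2^{O(d)} users, which is exactly the prohibitive blow-up the authors describe.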
“…In this paper we address the question in a very general setting: what can user-level privacy gain for any privately PAC learnable class? Recall that it had recently been shown that a class is learnable via (ε, δ)-DP algorithms iff it is online learnable [ALMM19,BLM20], which is in turn equivalent to the class having a finite Littlestone dimension [Lit87]. Furthermore, it is also known that a class is learnable via ε-DP algorithms iff it has a finite probabilistic dimension [BNS19a].…”
Section: Introduction
confidence: 99%
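The equivalences recalled in this statement can be condensed into a single chain (the notation \mathrm{Ldim} for the Littlestone dimension is standard; the compact formulation is ours):

\[
\mathcal{H} \text{ is } (\varepsilon,\delta)\text{-DP learnable} \iff \mathcal{H} \text{ is online learnable} \iff \mathrm{Ldim}(\mathcal{H}) < \infty,
\]

while pure \varepsilon-DP learnability is characterized separately by finiteness of the probabilistic dimension [BNS19a].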
“…In the setting of differentially private binary classification, a major recent development [ALMM19,BLM20] is the result that a hypothesis class F consisting of binary classifiers is learnable with approximate differential privacy (Definition 2.1) if and only if it is online learnable, which is known to hold in turn if and only if the Littlestone dimension of F is finite [Lit87, BPS09]. Such an equivalence, however, remains open for the setting of differentially private regression (this question was asked in [BLM20]). The combinatorial parameter characterizing online learnability for regression is the sequential fat-shattering dimension [RST15b] (Definition 2.4), which may be viewed as a scale-sensitive analogue of the Littlestone dimension.…”
Section: Introduction
confidence: 99%
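Since the Littlestone dimension is the recurring quantity in the statements above, a minimal brute-force sketch of its standard mistake-tree recursion may help. This is illustrative code of our own (the name littlestone_dim and the dict-based hypothesis encoding are our choices, and the recursion takes exponential time), not an implementation from any cited paper.

def littlestone_dim(hypotheses, domain):
    """Littlestone dimension of a finite class: the depth of the
    deepest mistake tree shattered by `hypotheses`.

    Each hypothesis is a dict mapping every point of `domain` to 0 or 1.
    """
    if len(hypotheses) <= 1:
        return 0  # at most one hypothesis: no mistakes can be forced
    best = 0
    for x in domain:
        # Split the class according to the label assigned to x.
        h0 = [h for h in hypotheses if h[x] == 0]
        h1 = [h for h in hypotheses if h[x] == 1]
        if h0 and h1:  # x can label the root of a shattered mistake tree
            best = max(best, 1 + min(littlestone_dim(h0, domain),
                                     littlestone_dim(h1, domain)))
    return best

# Example: thresholds h_t(x) = 1 iff x >= t over the domain {0, 1, 2, 3}.
domain = range(4)
thresholds = [{x: int(x >= t) for x in domain} for t in range(5)]
print(littlestone_dim(thresholds, domain))  # prints 2 = floor(log2(5))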