2022
DOI: 10.48550/arxiv.2202.06825
Preprint
Robust Estimation of Discrete Distributions under Local Differential Privacy

Abstract: Although robust learning and local differential privacy are both widely studied fields of research, combining the two settings is an almost unexplored topic. We consider the problem of estimating a discrete distribution in total variation from n contaminated data batches under a local differential privacy constraint. A fraction 1 − ε of the batches contain k i.i.d. samples drawn from a discrete distribution p over d elements. To protect the users' privacy, each of the samples is privatized using an α-locally differentially private […]
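The α-LDP constraint described in the abstract can be instantiated, for instance, with k-ary randomized response over the d-element domain. The sketch below is my own illustration of that standard mechanism and its channel inversion, not the paper's robust estimator; the names `k_randomized_response` and `debias` are assumptions.

```python
import math
import random

def k_randomized_response(x, d, alpha):
    """Privatize x in {0, ..., d-1}: report x with probability
    e^alpha / (e^alpha + d - 1), otherwise report a uniformly random
    *other* element. This satisfies alpha-local differential privacy."""
    p_keep = math.exp(alpha) / (math.exp(alpha) + d - 1)
    if random.random() < p_keep:
        return x
    y = random.randrange(d - 1)      # uniform over the d - 1 other values
    return y if y < x else y + 1

def debias(reports, d, alpha):
    """Invert the randomized-response channel to get an unbiased
    estimate of the distribution p from the privatized reports."""
    n = len(reports)
    p_keep = math.exp(alpha) / (math.exp(alpha) + d - 1)
    q = (1 - p_keep) / (d - 1)       # prob. of reporting a fixed wrong value
    counts = [0] * d
    for r in reports:
        counts[r] += 1
    # E[count_j / n] = q + (p_keep - q) * p_j, so invert the affine map.
    return [(c / n - q) / (p_keep - q) for c in counts]
```

Note that without contamination this debiasing suffices; the difficulty the paper addresses is that an ε-fraction of batches can deviate arbitrarily from this channel.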

Cited by 2 publications (2 citation statements) · References 14 publications
“…The bounds for this problem [16,2] are comparable to ours in the particular scenario where each worker holds a single data point and the algorithm is non-interactive (can query each worker once). Although a recent paper [17] considered a more general case where workers hold a batch of data points, the algorithm was still assumed non-interactive, and the data distribution identical for all the workers. It was also shown recently [48] that local DP and robustness are disentangled when the adversarial workers corrupt the data before randomization only, which however need not be the case in general.…”
Section: Prior Work
confidence: 99%
“…A key distinction between our work and the aforementioned robust procedures in the central model is that there it is possible to add noise after computing robust estimators, while in local privacy the requirement to add noise to each observation separately means that the approaches taken in the two models are fundamentally different. Note that some very recent works (Cheu, Smith and Ullman, 2021;Acharya, Sun and Zhang, 2021;Chhor and Sentenac, 2022) consider contamination after the privatisation step and the results therein feature an interaction of privacy level α and contamination level ε. In our work, we suppose that contamination happens before the data are sent for privatisation.…”
confidence: 99%
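The before- versus after-privatisation distinction drawn in the last excerpt can be made concrete with a small simulation (my own illustration using binary randomized response; the names `rr` and `simulate` are assumptions, not any cited paper's protocol). An adversary who corrupts raw data has its influence attenuated by the privatization noise, while one who forges the privatized messages bypasses that noise entirely.

```python
import math
import random

def rr(bit, alpha):
    # Binary randomized response: keep the bit with probability
    # e^alpha / (e^alpha + 1), flip it otherwise (alpha-LDP).
    p_keep = math.exp(alpha) / (math.exp(alpha) + 1)
    return bit if random.random() < p_keep else 1 - bit

def simulate(n, eps, alpha, contaminate_after):
    """Return the observed fraction of 1-reports when an eps-fraction
    of users is adversarial and pushes toward the value 1."""
    random.seed(1)                   # fixed seed for reproducibility
    honest = [0] * n                 # every honest user holds the value 0
    n_bad = int(eps * n)             # adversary controls this many users
    if contaminate_after:
        # Adversary forges privatized *messages* directly (all 1s),
        # so its influence is not attenuated by the noise.
        reports = [rr(b, alpha) for b in honest[n_bad:]] + [1] * n_bad
    else:
        # Adversary corrupts *raw* data to 1s, which is then privatized
        # like everyone else's, attenuating its influence.
        reports = [rr(b, alpha) for b in honest[n_bad:] + [1] * n_bad]
    return sum(reports) / n
```

After debiasing by the channel inverse, the shift induced in the after-privatisation case is amplified by a factor of roughly 1/(2·p_keep − 1), which blows up as α → 0; in the before-privatisation case the shift stays of order ε. This is one way to read the excerpt's claim that privacy and robustness interact in one model but decouple in the other.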