Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security (CCS '21)
DOI: 10.1145/3460120.3485668

AHEAD: Adaptive Hierarchical Decomposition for Range Query under Local Differential Privacy

Abstract: For protecting users' private data, local differential privacy (LDP) has been leveraged to provide privacy-preserving range queries, thus supporting further statistical analysis. However, existing LDP-based range query approaches are limited by their reliance on a pre-defined structure for collecting user data. These static frameworks incur excessive noise in the aggregated data, especially in low-privacy-budget settings. In this work, we propose an Adaptive Hierarchical Decomposition…
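To make the hierarchical-decomposition idea from the abstract concrete, here is a minimal sketch. It is not the AHEAD algorithm: a real LDP deployment perturbs each user's individual report with a frequency oracle, whereas this toy version adds Laplace noise to aggregated node counts as a stand-in for that per-level estimation error, and all function names are illustrative.

```python
import numpy as np

def build_hierarchy(counts, fanout=2):
    """Complete tree over histogram buckets; level 0 is the leaf level,
    padded with empty buckets so every node has exactly `fanout` children."""
    size = 1
    while size < len(counts):
        size *= fanout
    leaves = np.zeros(size)
    leaves[:len(counts)] = counts
    levels = [leaves]
    while len(levels[-1]) > 1:
        levels.append(levels[-1].reshape(-1, fanout).sum(axis=1))
    return levels

def perturb(levels, epsilon):
    """Noisy copy of every node count, with the budget split uniformly
    across levels -- the kind of static scheme the abstract contrasts
    with AHEAD's adaptive decomposition."""
    eps = epsilon / len(levels)
    return [lv + np.random.laplace(0.0, 1.0 / eps, size=lv.shape)
            for lv in levels]

def range_query(noisy, lo, hi, fanout=2):
    """Sum over leaf buckets [lo, hi) via the canonical decomposition:
    take a node whole whenever its subtree lies inside the range."""
    def visit(level, idx):
        width = fanout ** level                     # leaves under this node
        left, right = idx * width, (idx + 1) * width
        if right <= lo or left >= hi:               # disjoint from the query
            return 0.0
        if lo <= left and right <= hi:              # fully covered: one noisy value
            return noisy[level][idx]
        return sum(visit(level - 1, c)              # partial overlap: recurse
                   for c in range(idx * fanout, (idx + 1) * fanout))
    return visit(len(noisy) - 1, 0)

hist = np.random.randint(0, 100, size=16)           # toy frequency histogram
noisy = perturb(build_hierarchy(hist), epsilon=1.0)
print(range_query(noisy, 3, 11), hist[3:11].sum())  # estimate vs. ground truth
```

Answering a range with a few coarse nodes instead of many leaves is what keeps the noise from accumulating; AHEAD's contribution is choosing that decomposition adaptively rather than fixing it in advance.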

Cited by 18 publications; references 64 publications. Of the 5 citation statements indexed, 0 are supporting, 5 are mentioning, and 0 are contrasting, published between 2022 and 2024. The statements follow, ordered by relevance.
“…However, the algorithm in [8] only applies to learning algorithms that can be transformed into summation form, which rules out neural networks. Recently, Ginart et al. [19] proposed the notion of (ε, δ)-approximate unlearning in a way reminiscent of DP [15, 16, 54, 65, 66]. It guarantees that the output distribution of the unlearned model is close to that of the model trained without the revoked samples.…”
Section: Related Work (mentioning)
confidence: 99%
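For context, the (ε, δ)-closeness guarantee attributed to [19] above is usually formalized as a two-sided, DP-style indistinguishability condition. The rendering below is a standard form, not a quotation from [19]; U, A, D, and z are illustrative symbols for the unlearning mechanism, the training algorithm, the dataset, and the revoked sample.

```latex
% (epsilon, delta)-indistinguishability between the unlearned model U(D, z)
% and the model A(D \ {z}) retrained without the revoked sample z:
% for every measurable set of models S,
\Pr[\,U(D, z) \in S\,] \le e^{\epsilon} \Pr[\,A(D \setminus \{z\}) \in S\,] + \delta,
\qquad
\Pr[\,A(D \setminus \{z\}) \in S\,] \le e^{\epsilon} \Pr[\,U(D, z) \in S\,] + \delta.
```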
“…Ono et al. [44] integrated differential privacy [71], [68], [62] into a distributed RL algorithm to defend against extraction. The local models report noisy gradients designed to satisfy local differential privacy [13], [14], [64], [70], i.e., keeping local information from being exploited by adversarial reverse engineering. Chen et al. [8] proposed a novel testing framework for deep-learning copyright protection, which can be adapted to detect knowledge extraction against DRL.…”
Section: Related Work (mentioning)
confidence: 99%
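The "noisy gradients" step referenced above can be sketched as clipping each local gradient to bound its sensitivity and adding calibrated noise before it leaves the client. The clip norm and the Gaussian-mechanism calibration below are generic textbook choices, not the specific construction of Ono et al. [44]:

```python
import numpy as np

def privatize_gradient(grad, clip_norm=1.0, epsilon=1.0, delta=1e-5):
    """Clip a local gradient so its L2 norm is at most `clip_norm`, then
    add Gaussian noise calibrated to (epsilon, delta) before the client
    reports it. Generic illustration, not the mechanism of [44]."""
    grad = np.asarray(grad, dtype=float)
    norm = np.linalg.norm(grad)
    if norm > clip_norm:                    # enforce L2 sensitivity <= clip_norm
        grad = grad * (clip_norm / norm)
    # Standard Gaussian-mechanism scale for L2 sensitivity clip_norm
    # (valid for epsilon <= 1).
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return grad + np.random.normal(0.0, sigma, size=grad.shape)

noisy = privatize_gradient(np.array([0.8, -2.4, 0.3]))
```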
“…Differential privacy [35] is a promising approach to enforcing privacy regulations [26], providing strong statistical privacy guarantees. However, being statistical, these guarantees may be practically insufficient or of limited usability depending on the data type, the size of datasets, and the queries considered [33,34,76].…”
Section: Related Work (mentioning)
confidence: 99%
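The "strong statistical privacy guarantees" of [35] refer to the standard ε-differential-privacy bound, stated here in its usual form (the mechanism symbol M is illustrative):

```latex
% epsilon-differential privacy: for all neighboring datasets D, D'
% (differing in a single record) and every set of outputs S,
\Pr[\,\mathcal{M}(D) \in S\,] \;\le\; e^{\epsilon} \, \Pr[\,\mathcal{M}(D') \in S\,].
```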