2021
DOI: 10.1016/j.knosys.2021.106795

Differentially private regression analysis with dynamic privacy allocation


Cited by 11 publications (4 citation statements)
References 53 publications
“…These findings underscore the effectiveness of Gaussian process-based differential privacy mechanisms in safeguarding data privacy while maintaining usability [19]. Practically, this suggests that adjusting differential privacy algorithm parameters can optimize the balance between privacy protection and data usability, maximizing data value retention while securing user privacy.…”
Section: Effectiveness of Differential Privacy Protection
confidence: 74%
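The trade-off described above — tuning a privacy parameter to balance protection against usability — can be illustrated with the standard (ε, δ)-DP Gaussian mechanism; this is a generic sketch, not the cited work's exact mechanism, and the parameter values are illustrative.

```python
import math

def gaussian_noise_scale(sensitivity: float, epsilon: float, delta: float) -> float:
    """Noise scale of the classic Gaussian mechanism:
    sigma = sensitivity * sqrt(2 * ln(1.25 / delta)) / epsilon.
    A larger epsilon (weaker privacy guarantee) yields a smaller sigma,
    i.e. less noise and better data usability."""
    return sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon

# Tightening the privacy budget increases the noise that must be added.
for eps in (0.1, 1.0, 10.0):
    print(f"epsilon={eps:<4} sigma={gaussian_noise_scale(1.0, eps, 1e-5):.3f}")
```

Sweeping ε like this makes the privacy–utility curve explicit: each tenfold relaxation of the budget shrinks the required noise scale tenfold.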
“…These works apply DP during training to produce privacy-preserving models. Pan et al. [66] present the adaptive differentially private regression (ADPR) mechanism, a dynamic privacy-noise allocation scheme that takes into account the relevance of the input attributes to the outputs. The mechanism adds Laplace noise drawn from Lap(∆f/ε_j) to the polynomial coefficients of the objective function.…”
Section: Laplace and Gaussian
confidence: 99%
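The allocation step described above can be sketched as follows. This is a minimal illustration, assuming the relevance weights have already been obtained in a pre-processing step and that the total budget is split in proportion to them; the function name and budget-splitting rule are assumptions, not the paper's exact procedure.

```python
import numpy as np

def adpr_noisy_coefficients(coeffs, relevance, total_epsilon, sensitivity):
    """Perturb each polynomial coefficient with Laplace noise Lap(sensitivity / eps_j),
    where the per-coefficient budget eps_j is allocated dynamically in
    proportion to the (assumed, pre-computed) relevance of its attribute."""
    relevance = np.asarray(relevance, dtype=float)
    eps = total_epsilon * relevance / relevance.sum()  # dynamic budget allocation
    scale = sensitivity / eps                          # Laplace scale per coefficient
    rng = np.random.default_rng(0)
    return np.asarray(coeffs, dtype=float) + rng.laplace(0.0, scale)
```

Under this split, coefficients tied to more relevant attributes receive a larger share ε_j of the budget and are therefore perturbed less, which is the intuition behind the accuracy gain over a uniform allocation.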
“…Although this approach gives better accuracy compared to [67]–[69], [83], it is computationally costly, as it must run a pre-processing learning step to determine the relevance of each attribute. The approaches of [66] and [67] are otherwise the same, except that [67] adds noise with the same privacy budget for every coefficient, which may decrease the model's accuracy.…”
Section: Laplace and Gaussian
confidence: 99%