2022
DOI: 10.1109/tdsc.2020.3043382

Protecting Decision Boundary of Machine Learning Model With Differentially Private Perturbation

Cited by 20 publications (3 citation statements)
References 29 publications
“…Foolsgold [5] examines historical updates for each client and penalizes those with high pairwise cosine similarities by employing a low learning rate. Another avenue of research focuses on robust defense against FL backdoor attacks by applying weak Differential Privacy (DP) [32] to the global model [31]. Weak DP, involving norm clipping and the addition of Gaussian noise to each gradient update, has proven effective in mitigating FL backdoor attacks [27].…”
Section: Backdoor Attacks and Defenses in FL (mentioning)
confidence: 99%
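
As a rough illustration of the weak-DP defense described in the excerpt above (norm clipping plus Gaussian noise on each client update), here is a minimal NumPy sketch; the clipping bound and noise scale are placeholder values, not parameters taken from the cited works.

```python
# Illustrative sketch of "weak DP" sanitization of client updates in FL:
# each update is clipped to an L2 ball and perturbed with Gaussian noise
# before aggregation. Clip bound and noise level are example values only.
import numpy as np

def weak_dp_sanitize(update, clip_norm=1.0, noise_std=0.01, rng=None):
    """Clip an update to an L2 ball of radius clip_norm, then add Gaussian noise."""
    if rng is None:
        rng = np.random.default_rng()
    norm = np.linalg.norm(update)
    # Scale the update down if its L2 norm exceeds the clipping bound.
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    # Add isotropic Gaussian noise calibrated to the clipping bound.
    return clipped + rng.normal(0.0, noise_std * clip_norm, size=update.shape)

# Example: sanitize each client's flattened update before averaging.
client_updates = [np.random.randn(10) for _ in range(5)]
global_update = np.mean([weak_dp_sanitize(u) for u in client_updates], axis=0)
```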
“…After that, the LDP technique has been widely applied in the industry to protect their users' privacy, like iOS for Apple [60], Win 10 for Microsoft [14], and Samsung [47]. Since the concept of LDP was proposed, it has been widely applied in multiple fields, including multi-attribute values estimation [18], [53], marginal release [13], [73], time series data release [69], graph data collection [59], [67], [68], key-value data collection [28], [70], [71], and private learning [74], [75].…”
Section: Related Work (mentioning)
confidence: 99%
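
The excerpt above surveys deployments of local differential privacy (LDP) rather than a specific mechanism; as a minimal illustration of the underlying primitive, here is a textbook randomized-response sketch for a single private bit (the epsilon value is an arbitrary example and is not drawn from the cited works).

```python
# Textbook randomized response: each user perturbs one private bit locally
# before reporting it, and the aggregator debiases the observed mean.
import math
import random

def randomized_response(bit, epsilon=1.0):
    """Report the true bit with probability e^eps / (e^eps + 1), else flip it."""
    p_true = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_true else 1 - bit

def debiased_mean(reports, epsilon=1.0):
    """Unbiased estimate of the true mean of the bits from perturbed reports."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    # E[report] = (1 - p) + true_mean * (2p - 1), so invert that relation.
    return (observed - (1 - p)) / (2 * p - 1)
```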
“…Several approaches suggest perturbing or adding noise to the prediction results to prevent the adversary from executing the (supervised) retraining process to reconstruct the model [6,41,64]. This can be achieved with Differential Privacy to hide the decision boundary between prediction labels regardless of how many queries are executed by the adversary [73]. Another approach is to poison the training objective of the adversary by actively perturbing the predictions without impacting the utility for benign users [53].…”
Section: E. Mitigating Model Stealing Attacks (mentioning)
confidence: 99%
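
The last excerpt points to hiding the decision boundary by perturbing prediction outputs with differential privacy. A minimal sketch of that idea follows, assuming a Laplace mechanism applied to the returned probability vector; the cited paper's actual boundary-perturbation mechanism and noise calibration may differ.

```python
# Sketch of perturbing prediction outputs so repeated queries do not reveal
# the exact decision boundary. Laplace noise, epsilon, and sensitivity are
# illustrative assumptions, not the cited paper's exact mechanism.
import numpy as np

def dp_perturbed_prediction(scores, epsilon=1.0, sensitivity=1.0, rng=None):
    """Add Laplace noise to a confidence vector, then renormalize it."""
    if rng is None:
        rng = np.random.default_rng()
    noisy = scores + rng.laplace(0.0, sensitivity / epsilon, size=scores.shape)
    # Clip to positive values and renormalize so the reply is still a
    # probability vector; the returned label follows the noisy scores.
    noisy = np.clip(noisy, 1e-12, None)
    return noisy / noisy.sum()

# Example: a benign user still receives a usable (noisy) probability vector.
clean = np.array([0.7, 0.2, 0.1])
print(dp_perturbed_prediction(clean))
```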