Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security
DOI: 10.1145/3576915.3616592
DP-Forward: Fine-tuning and Inference on Language Models with Differential Privacy in Forward Pass

Minxin Du,
Xiang Yue,
Sherman S. M. Chow
et al.

Abstract: Differentially private stochastic gradient descent (DP-SGD) adds noise to gradients in back-propagation, safeguarding training data from privacy leakage, particularly membership inference. It fails to cover (inference-time) threats like embedding inversion and sensitive attribute inference. It is also costly in storage and computation when used to fine-tune large pre-trained language models (LMs). We propose DP-Forward, which directly perturbs embedding matrices in the forward pass of LMs. It satisfies stringen…

Cited by 7 publications (1 citation statement) | References 58 publications
“…Since the initial appearance of our paper, private fine-tuning (of language models and beyond) has become perhaps the standard paradigm for doing private machine learning in many settings (Bu et al., 2022; Wu et al., 2024; Du et al., 2023; Pelikan et al., 2023). He et al. (2023) explored larger-scale settings, including private fine-tuning of GPT-3.…”
Section: Related Work
Confidence: 99%