Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/2022.emnlp-main.323

Differentially Private Language Models for Secure Data Sharing

Abstract: To protect the privacy of individuals whose data is being shared, it is important to develop methods that allow researchers and companies to release textual data while providing formal privacy guarantees to its originators. In NLP, substantial effort has been directed at building mechanisms following the framework of local differential privacy, which anonymize individual text samples before releasing them. In practice, these approaches are often dissatisfying in terms of the quality o…
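For context on the local-DP baseline the abstract refers to: a standard building block for anonymizing individual tokens is the randomized-response mechanism. The sketch below is a minimal, generic illustration of that mechanism, not the paper's method; the vocabulary and epsilon value are assumptions made for the example.

```python
import math
import random

def randomized_response(token: str, vocab: list[str], epsilon: float) -> str:
    """Release one token under epsilon-local-DP via randomized response.

    The true token is kept with probability e^eps / (e^eps + |V| - 1);
    otherwise a uniformly random *other* vocabulary token is emitted.
    For any two inputs, output probabilities differ by a factor of at most e^eps.
    """
    k = len(vocab)
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p_keep:
        return token
    return random.choice([t for t in vocab if t != token])

# Illustrative usage: with small epsilon the released token is mostly noise.
vocab = ["the", "patient", "is", "stable", "critical"]
print(randomized_response("patient", vocab, epsilon=1.0))
```

A low epsilon gives strong per-token deniability but, as the abstract notes, such local mechanisms are often dissatisfying in terms of output quality, which is the shortcoming the paper targets.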

Citations: Cited by 1 publication (1 citation statement)
References: 49 publications
“…There is extensive research on differentially private training or fine-tuning of language models (Kerrigan et al., 2020; Yu et al., 2021; Anil et al., 2022; Mattern et al., 2022a). These works aim to make language models resistant to various kinds of data leakage attacks (Carlini et al., 2019, 2021; Deng et al., 2021; Balunovic et al., 2022).…”
Section: Differentially Private Training/Fine-tuning (mentioning)
Confidence: 99%
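The cited line of work on differentially private training or fine-tuning typically relies on DP-SGD: per-example gradients are clipped and Gaussian noise is added before each update, so no single training example can dominate the model. Below is a minimal sketch of that recipe using PyTorch and Opacus; the toy model, random data, and hyperparameters are illustrative assumptions, not the setup of any cited paper.

```python
# Minimal DP-SGD sketch with Opacus. Toy model and data; values are illustrative.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

VOCAB, DIM = 100, 32

class TinyLM(nn.Module):
    """A deliberately tiny next-token model (Embedding -> Linear)."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, DIM)
        self.out = nn.Linear(DIM, VOCAB)

    def forward(self, x):              # x: (batch, seq)
        return self.out(self.emb(x))   # logits: (batch, seq, vocab)

# Stand-in "private" corpus of token ids.
data = torch.randint(0, VOCAB, (64, 16))
loader = DataLoader(TensorDataset(data), batch_size=8)

model = TinyLM()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

engine = PrivacyEngine()
model, optimizer, loader = engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.0,  # std of Gaussian noise relative to the clip norm
    max_grad_norm=1.0,     # per-sample gradient clipping bound
)

loss_fn = nn.CrossEntropyLoss()
for (batch,) in loader:
    optimizer.zero_grad()
    logits = model(batch[:, :-1])      # predict the next token
    loss = loss_fn(logits.reshape(-1, VOCAB), batch[:, 1:].reshape(-1))
    loss.backward()                    # Opacus computes per-sample gradients
    optimizer.step()                   # clip + add noise + update

# Privacy spent after one pass, under the engine's default accountant.
print(f"epsilon: {engine.get_epsilon(delta=1e-5):.2f}")
```

The clipping bound and noise multiplier jointly determine the (epsilon, delta) guarantee; the same recipe applies when the module is a pretrained language model whose layers Opacus supports.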