Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security
DOI: 10.1145/3372297.3417880
Analyzing Information Leakage of Updates to Natural Language Models

Cited by 61 publications (63 citation statements)
References 6 publications
“…Training data extraction refers to the risk of partially extracting training samples by interacting with a trained language model [80,12,99,13]. An adversary can use membership inference attacks as an oracle to generate sentence samples that have a high chance of being in the training set.…”
Section: Training Data Extraction
confidence: 99%
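A minimal sketch of that oracle idea, assuming the HuggingFace transformers package and the public gpt2 checkpoint (not the cited papers' exact setup): candidate sentences are scored by the model's average token loss, and the lowest-loss (least surprising) candidates are kept as likely training-set members.

```python
# Sketch: membership-inference-style scoring of candidate sentences.
# Assumption: HuggingFace transformers with the public "gpt2" checkpoint.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def membership_score(sentence: str) -> float:
    """Lower average token loss ~ higher chance the sentence was in training data."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels == input_ids, the model returns mean cross-entropy over tokens.
        loss = model(ids, labels=ids).loss
    return loss.item()

candidates = [
    "the quick brown fox jumps over the lazy dog",
    "zxq vlrp qwe asd unlikely training sentence",
]
scores = {s: membership_score(s) for s in candidates}
# The adversary keeps the top-ranked (lowest-loss) candidates as likely members.
for s in sorted(scores, key=scores.get):
    print(f"{scores[s]:.3f}  {s}")
```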
“…For instance, several approaches have been proposed in recent years for 'machine unlearning', allowing data to be erased from already trained models [19,9,12,76]. However, recent results have also shown that: 1) it is possible to reveal details from an initial dataset even when a model was subsequently retrained on a redacted version [85];…”
Section: Technical Issues
confidence: 99%
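The result cited as [85] rests on comparing model snapshots. A minimal sketch of that general idea, not the authors' exact attack: score the same candidate text under two releases of a model (before and after an update or a redacted retraining) and inspect the difference. For the sake of a runnable example, the public "gpt2" and "distilgpt2" checkpoints stand in for the two snapshots; in a real analysis these would be consecutive releases of the same model.

```python
# Sketch: differential scoring of a candidate string across two model snapshots.
# Assumption: "gpt2" and "distilgpt2" are stand-ins for pre- and post-update releases.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def avg_log_prob(model, tokenizer, text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token cross-entropy
    return -loss.item()                     # higher = more likely under this model

tok = AutoTokenizer.from_pretrained("gpt2")          # both snapshots share GPT-2's tokenizer
model_v1 = AutoModelForCausalLM.from_pretrained("gpt2")       # "old" snapshot
model_v2 = AutoModelForCausalLM.from_pretrained("distilgpt2") # stand-in "updated" snapshot

for text in ["alice's phone number is 555-0100", "generic weather small talk"]:
    delta = avg_log_prob(model_v2, tok, text) - avg_log_prob(model_v1, tok, text)
    # Large positive deltas flag content the update made more likely -- the kind
    # of signal a differential analysis of snapshots exploits.
    print(f"{delta:+.3f}  {text}")
```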
“…However, in many cases, a recommender system also needs to forget certain sensitive data and its complete lineage, which is called Recommendation Unlearning in this paper. Considering privacy first, recent research has shown that users' sensitive information can be leaked from trained models, e.g., recommender systems [50], large pre-trained [4] and fine-tuned natural language models [49]. In such cases, users desire a tool to erase the impact of their sensitive information from the trained models.…”
Section: Introduction
confidence: 99%
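As a point of reference for what "erasing the impact" means, here is a minimal sketch of the naive exact-unlearning baseline on toy placeholder data (not a real recommender): drop the opted-out user's rows and retrain from scratch. Unlearning methods aim to approximate this result far more cheaply.

```python
# Sketch: exact unlearning by retraining without the removed user's data.
# Assumption: toy synthetic data and a scikit-learn classifier as placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))          # toy interaction features
y = (X[:, 0] > 0).astype(int)          # toy labels
user_rows = np.arange(10)              # rows belonging to the user who opts out

model_full = LogisticRegression().fit(X, y)

keep = np.setdiff1d(np.arange(len(X)), user_rows)
model_unlearned = LogisticRegression().fit(X[keep], y[keep])  # retrain without that user

# The retrained model's parameters carry no trace of the removed rows,
# which is the guarantee approximate unlearning methods try to match.
print(np.abs(model_full.coef_ - model_unlearned.coef_).max())
```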