2021
DOI: 10.48550/arxiv.2111.11869
Preprint

Machine unlearning via GAN

Abstract: Machine learning models, especially deep models, may unintentionally memorize information about their training data. Malicious attackers can therefore extract properties of the training data by attacking the model with a membership inference attack or a model inversion attack. Regulations such as the EU's GDPR have enacted "the right to be forgotten" to protect users' data privacy and strengthen individuals' sovereignty over their data. Removing training data information from a trained model has therefore become a …
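The threat model sketched in the abstract can be made concrete. Below is a minimal sketch of a loss-threshold membership inference attack, one of the simplest forms such an attack takes; it is a generic illustration, not this paper's attack, and the model_loss_fn interface and the threshold calibration are assumptions.

import numpy as np

def membership_inference(model_loss_fn, samples, threshold):
    # model_loss_fn: hypothetical callable returning the attacked
    # model's scalar loss on one sample; an unusually low loss hints
    # that the sample was seen (memorized) during training.
    # threshold: loss cutoff, e.g. calibrated on known non-members.
    losses = np.array([model_loss_fn(x) for x in samples])
    # True = predicted training member, False = predicted non-member.
    return losses < threshold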

Cited by 2 publications (2 citation statements)
References 16 publications
“…• Optimizing the computational resources used by unlearning techniques through model compression, model quantization, or model approximation. These techniques can reduce the computational overhead of unlearning methods and enable their deployment in resource-constrained environments or on low-power devices [19], [199].…”
Section: F. Resource Constraints
confidence: 99%
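One concrete form the model optimization above could take is post-training dynamic quantization. The sketch below uses PyTorch's quantize_dynamic on a stand-in model; the architecture is hypothetical, and this is a generic illustration rather than any of the surveyed unlearning pipelines.

import torch
import torch.nn as nn

# Stand-in for a model produced or updated by an unlearning method
# (hypothetical architecture; any nn.Module with Linear layers works).
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Dynamic post-training quantization: Linear weights are stored as
# int8 and dequantized on the fly, shrinking the model and speeding
# up CPU inference on low-power devices.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)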
“…A simple and naive solution is to delete the malicious data samples from the training data and retrain the model, which is usually quite time-consuming due to the huge computational load. Several solutions address this problem more efficiently [14]–[17]. For example, Cao [18] divided the training data into several groups, removed the group containing the malicious data samples, and updated the whole model accordingly.…”
Section: Related Work
confidence: 99%
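The grouping strategy attributed to Cao [18] can be read as shard-based retraining: partition the training set, fit one constituent model per shard, and on a removal request retrain only the affected shard. The sketch below is schematic; partition, fit_shard, unlearn, and the train_fn interface are hypothetical names, not Cao's actual algorithm.

import numpy as np

def partition(X, y, n_shards):
    # Split the training set into disjoint shards.
    idx = np.array_split(np.arange(len(X)), n_shards)
    return [(X[i], y[i]) for i in idx]

def fit_shard(train_fn, shard):
    # train_fn is a hypothetical stand-in for any training routine.
    X, y = shard
    return train_fn(X, y)

def unlearn(shards, models, train_fn, shard_id, bad_rows):
    # Drop the offending rows from one shard and retrain only that
    # shard's model; the other models are untouched, so the cost is
    # roughly 1/n_shards of retraining from scratch.
    X, y = shards[shard_id]
    keep = np.setdiff1d(np.arange(len(X)), bad_rows)
    shards[shard_id] = (X[keep], y[keep])
    models[shard_id] = fit_shard(train_fn, shards[shard_id])
    return shards, models

At inference time, predictions would be aggregated across the per-shard models, trading some accuracy for deletions that are much cheaper than full retraining.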