2021
DOI: 10.48550/arxiv.2111.08947
Preprint

Fast Yet Effective Machine Unlearning

Abstract: Unlearning the data observed during the training of a machine learning (ML) model is an important task that can play a pivotal role in fortifying the privacy and security of ML-based applications. This paper raises the following questions: (i) can we unlearn a class/classes of data from an ML model without looking at the full training data even once? (ii) can we make the process of unlearning fast and scalable to large datasets, and generalize it to different deep networks? We introduce a novel machine unlearni…

Cited by 5 publications (8 citation statements)
References 18 publications
“…One direction is to extend the study to methods for more complex models. While the proposed unlearning pipeline can be extended to non-linear ML models such as neural networks with appropriate unlearning methods [37][38][39], extending the experimental evaluation (Section 6) is non-trivial. This is due to the stochasticity of learning [11] for non-linear ML models, which results in many local minima (i.e., several equally valid, fully retrained models with different accuracies).…”
Section: Discussion (citation type: mentioning; confidence: 99%)
“…Unlearning scenarios can vary depending on the requirements (Nguyen et al. 2022). Traditional machine unlearning approaches assume that all training data can be accessed (Gupta et al. 2021; Tarun et al. 2021; Baumhauer, Schöttle, and Zeppelzauer 2022; Ginart et al. 2019). However, recent studies have presented problem formulations in which access to the data is highly restricted (Yoon et al. 2022; Chundawat et al. 2022; Golatkar, Achille, and Soatto 2020; Nguyen, Low, and Jaillet 2020).…”
Section: Related Work, Machine Unlearning (citation type: mentioning; confidence: 99%)
“…Machine unlearning aims to erase the target data from a pre-trained machine-learning model, which can be required to remove private information, harmful content, and biased information (Cao and Yang 2015). However, most of the machine unlearning methods have been focused on supervised models so far (Gupta et al. 2021; Tarun et al. 2021; Baumhauer, Schöttle, and Zeppelzauer 2022; Ginart et al. 2019; Yoon et al. 2022; Chundawat et al. 2022; Golatkar, Achille, and Soatto 2020; Nguyen, Low, and Jaillet 2020).…”
Section: Introduction (citation type: mentioning; confidence: 99%)
“…Model-intrinsic approaches are those methods designed for specific types of models, such as softmax classifiers [30], linear models [31], tree-based models [32] and Bayesian models [19]. Data-driven approaches focus on the data itself, including data partitioning [18], data augmentation [33], [34], [35] and other unlearning strategies based on data influence [36]. All methods have their specific application scenarios and limitations.…”
Section: Introduction (citation type: mentioning; confidence: 99%)
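The data-partitioning strategy mentioned in the last excerpt can be sketched in a few lines: split the training set into shards, fit one sub-model per shard, aggregate predictions by majority vote, and service a deletion request by retraining only the shard that held the deleted point. This is a minimal SISA-style sketch under stated assumptions; the nearest-centroid "sub-model" and all names here are illustrative, not the cited references' implementations.

```python
import numpy as np

def fit_shard(X, y):
    # Nearest-class-centroid classifier as a stand-in sub-model.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(model, x):
    # Assign x to the class with the closest centroid.
    return min(model, key=lambda c: np.linalg.norm(x - model[c]))

def ensemble_predict(models, x):
    # Majority vote over the per-shard sub-models.
    votes = [predict(m, x) for m in models]
    return max(set(votes), key=votes.count)

# Three well-separated Gaussian clusters, one per class.
rng = np.random.default_rng(0)
X = rng.normal(size=(90, 2)) + np.repeat([[0, 0], [4, 4], [0, 4]], 30, axis=0)
y = np.repeat([0, 1, 2], 30)

# Partition the data into 3 shards and train one sub-model per shard.
shards = np.array_split(rng.permutation(len(X)), 3)
models = [fit_shard(X[idx], y[idx]) for idx in shards]

# "Unlearn" sample 7: drop it from its shard and retrain only that shard,
# leaving the other sub-models (and their training data) untouched.
target = 7
for i, idx in enumerate(shards):
    if target in idx:
        kept = idx[idx != target]
        shards[i] = kept
        models[i] = fit_shard(X[kept], y[kept])
```

The design point is the cost asymmetry: a deletion triggers retraining of one shard rather than the full model, at the price of each sub-model seeing less data than a monolithic one would.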