2022
DOI: 10.48550/arxiv.2205.08096
Preprint
Can Bad Teaching Induce Forgetting? Unlearning in Deep Networks using an Incompetent Teacher

Abstract: Machine unlearning has become an important field of research, driven by the need to incorporate evolving data-privacy rules and regulations into machine learning (ML) applications. It supports requests to remove a certain set or class of data from an already trained ML model without retraining from scratch. Recently, several efforts have been made to perform unlearning in an effective and efficient manner. We propose a novel machine unlearning method by exploring the utility of competen…

Cited by 2 publications (2 citation statements)
References 14 publications
“…Once the data of certain patients is confirmed by auditing to have been used to train the target DL model, forgetting requires the removal of the learnt information of those patients' data from the target DL model, which is also called machine unlearning, while auditing can act as the verification of machine unlearning [18]. In order to achieve forgetting, existing unlearning methods can be classified into three major classes: model-agnostic methods, model-intrinsic methods and data-driven methods [20]. Model-agnostic methods refer to algorithms or frameworks that can be used for different DL models, including differential privacy [18], [21], [22], certified removal [23], [24], [25], statistical query learning [6], decremental learning [26], knowledge adaptation [27], [28] and parameter sampling [29]. Model-intrinsic approaches are those methods designed for specific types of models, such as softmax classifiers [30], linear models [31], tree-based models [32] and Bayesian models [19].…”
Section: Introduction
confidence: 99%
“…When forgetting is accomplished, auditing is the next necessary step to verify it. Different metrics have been proposed to audit the membership of the query dataset, including accuracy, completeness [6], unlearn time, relearn time, retrain time, layer-wise distance, activation distance, JS-divergence, membership inference [37], [38], ZRF score [27], epistemic uncertainty [39] and model inversion attack [7]. In recent studies, membership inference-based metrics were frequently utilized to determine whether or not any information about the samples to be forgotten was retained in the model in intelligent healthcare [38].…”
Section: Introduction
confidence: 99%
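Several of the auditing metrics listed in the statement above compare the output distributions of an unlearned model against those of a model retrained from scratch on the retained data. A minimal sketch of one such metric, Jensen-Shannon divergence between two class-probability vectors, is shown below in plain NumPy; the function name and the smoothing constant `eps` are illustrative choices, not taken from the cited works:

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions.

    A small eps is added before normalising so that zero-probability
    classes do not produce log(0)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    m = 0.5 * (p + q)  # mixture distribution
    kl = lambda a, b: np.sum(a * np.log(a / b))  # KL divergence
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Identical output distributions give zero divergence; disjoint ones
# approach the maximum of ln(2), so a low score between the unlearned
# and retrained models suggests the forget-set influence was removed.
identical = js_divergence([0.5, 0.5], [0.5, 0.5])
disjoint = js_divergence([1.0, 0.0], [0.0, 1.0])
```

In practice such a score would be averaged over the softmax outputs of the two models on the forget set; a near-zero average indicates the unlearned model behaves like one that never saw the forgotten data.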