2020
DOI: 10.1007/978-3-030-58526-6_23

Forgetting Outside the Box: Scrubbing Deep Networks of Information Accessible from Input-Output Observations

Cited by 77 publications (69 citation statements: 0 supporting, 69 mentioning, 0 contrasting). References 13 publications.
“…Efficiency. Our method is fast and highly efficient in comparison to the existing approaches [19,20]. The Fisher Forgetting [19] and NTK-based forgetting [20] approaches require a Hessian approximation, which is computationally very expensive.…”
Section: Discussion (mentioning)
confidence: 99%
“…Our method is fast and highly efficient in comparison to the existing approaches [19,20]. The Fisher Forgetting [19] and NTK-based forgetting [20] approaches require a Hessian approximation, which is computationally very expensive. It took us more than 2 hours to run Fisher Forgetting [19] for 1-class unlearning in ResNet18 on CIFAR-10.…”
Section: Discussion (mentioning)
confidence: 99%
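To make concrete why these citing papers single out the Fisher/Hessian computation as the bottleneck, here is a minimal sketch of Fisher-style scrubbing in the spirit of [19]: estimate a diagonal empirical Fisher over the retained data, then add Gaussian noise scaled by F^{-1/4} so that weakly constrained directions are perturbed most. The model, loader, and the noise scale `alpha` are illustrative assumptions, not the cited paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def diagonal_fisher(model, loader, device="cpu"):
    """Coarse batch-level approximation of the diagonal empirical Fisher,
    E[(d log p / d w)^2]. Even this diagonal shortcut needs a full pass
    with backprop per batch; the full Hessian the quote refers to is far
    more expensive still."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    n_batches = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
        n_batches += 1
    return {n: f / max(n_batches, 1) for n, f in fisher.items()}

def fisher_scrub(model, fisher, alpha=1e-3, eps=1e-8):
    """Scrub by adding noise with std proportional to F^{-1/4}: directions
    the retained data constrains weakly (small Fisher) get large noise.
    `alpha` is an assumed hyperparameter for illustration."""
    with torch.no_grad():
        for n, p in model.named_parameters():
            std = alpha * (fisher[n] + eps) ** -0.25
            p.add_(torch.randn_like(p) * std)
```

The per-parameter accounting over the whole retained set is what makes this slow at ResNet18/CIFAR-10 scale, consistent with the multi-hour runtime reported in the statement above.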
“…They usually employ gradient-based update strategies to quickly eliminate the influence of samples that are requested to be deleted [40]. For example, Guo et al. [26], Golatkar et al. [24], and Golatkar et al. [25] proposed different Newton's methods to approximate retraining for convex models, e.g., linear regression, logistic regression, and the last fully connected layer of a neural network. An alternative is to eliminate the influence of the samples that need to be deleted from the learned model based on …¹”
¹ It is worth noting that the purpose of unlearning is different from that of differential privacy (DP) methods [18,20], which aim to protect users' private information rather than delete it.
Section: Machine Unlearning (mentioning)
confidence: 99%
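As a concrete illustration of the Newton-style approximate retraining this statement attributes to Guo et al. [26] and Golatkar et al. [24,25], below is a minimal sketch for L2-regularized logistic regression: starting from the model trained on all data, take one Newton step on the retained data only. The function names and the split into kept/removed data are assumptions for illustration, not the cited papers' exact algorithms.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_hess(w, X, y, lam):
    """Gradient and Hessian of the L2-regularized logistic loss on (X, y),
    with labels y in {0, 1}."""
    p = sigmoid(X @ w)
    g = X.T @ (p - y) + lam * w
    s = p * (1.0 - p)                                 # per-example curvature
    H = X.T @ (X * s[:, None]) + lam * np.eye(X.shape[1])
    return g, H

def newton_unlearn(w, X_keep, y_keep, lam):
    """One Newton step on the retained data. Because w was (near-)optimal
    for the full dataset, this single step approximates exact retraining
    on X_keep alone for this convex objective."""
    g, H = grad_hess(w, X_keep, y_keep, lam)
    return w - np.linalg.solve(H, g)
```

Near the optimum of a convex objective the loss is well approximated by a quadratic, so one Newton step on the retained data lands close to the exact retrained solution; for a deep network the same trick applies only to the convex last fully connected layer, as the statement notes.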