2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2021
DOI: 10.1109/cvpr46437.2021.00085
Mixed-Privacy Forgetting in Deep Networks

Cited by 72 publications (52 citation statements)
References 7 publications
“…In these methods, the influence of the forget data on the model is approximated with a Newton step, and random noise is injected into the training objective function. The second group [Golatkar et al., 2020a; Golatkar et al., 2020b; Golatkar et al., 2021] requires access to the rest of the training data (excluding the forget data). These methods use the Fisher information and inject optimal noise into the model weights to achieve unlearning.…”
Section: Related Work
confidence: 99%
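The Fisher-based scrubbing that this excerpt describes can be sketched very roughly as follows. This is a minimal illustration under assumptions, not the cited authors' actual procedure: `fisher_noise_scrub`, its diagonal-Fisher estimate from per-sample gradients, and the `noise_scale` parameter are all hypothetical names introduced here. The core idea matches the excerpt: estimate Fisher information from the retained data and add noise to the weights, with more noise where the retained data constrains a weight less.

```python
import numpy as np

def fisher_noise_scrub(weights, grads_per_sample, noise_scale=0.1, eps=1e-8):
    """Sketch of Fisher-information noise injection for unlearning.

    weights: 1-D array of model parameters.
    grads_per_sample: (num_samples, num_params) per-sample gradients
        computed on the *retained* data (the forget set is excluded).

    The diagonal Fisher is estimated as the mean squared per-sample
    gradient. Weights that are well determined by the retained data
    (high Fisher) receive little noise; poorly determined weights
    (low Fisher) are noised more heavily, destroying information
    that only the forget data pinned down.
    """
    fisher = np.mean(np.square(grads_per_sample), axis=0)  # diagonal Fisher estimate
    sigma = noise_scale / np.sqrt(fisher + eps)            # per-weight noise std
    rng = np.random.default_rng(0)
    return weights + rng.normal(0.0, 1.0, size=weights.shape) * sigma
```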
“…Most of the solutions are limited to simple linear and logistic regression. A few methods [Golatkar et al., 2020a; Golatkar et al., 2020b; Golatkar et al., 2021] have been proposed that can forget information from CNN weights with reasonable success on small- to large-scale vision problems. Unlearning in CNNs is quite difficult due to their vastly non-convex loss landscape, which makes it hard to model the effect of a data sample on the optimization trajectory and the final configuration of the network weights.…”
Section: Introduction
confidence: 99%
“…Comparability: Metrics should enable choosing between unlearning procedures in the presence of differing training procedures and architectures, as such changes are often used to enable better unlearning [11, 27–30, 35]. As mentioned in Table 1, L2-weights cannot be compared if there is any architectural modification.…”
Section: Approach
confidence: 99%
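The L2-weights metric that the excerpt calls out as non-comparable can be sketched as below. This is a hedged illustration: `l2_weight_distance` is a name introduced here, and the sketch simply makes explicit why the metric breaks under architectural change, since the two parameter vectors no longer line up.

```python
import numpy as np

def l2_weight_distance(weights_a, weights_b):
    """L2 distance between two models' flattened parameter vectors,
    a common proxy for how far an unlearned model sits from a
    retrained-from-scratch reference. It is only defined when both
    models share the exact same architecture: any change in layer
    shapes makes the parameter vectors incomparable."""
    flat_a = np.concatenate([np.ravel(w) for w in weights_a])
    flat_b = np.concatenate([np.ravel(w) for w in weights_b])
    if flat_a.shape != flat_b.shape:
        raise ValueError("L2-weights is not comparable across differing architectures")
    return float(np.linalg.norm(flat_a - flat_b))
```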
“…Many different black-box formulations of MIA have been used to measure the efficacy of unlearning. Most [27, 29, 30, 45] learn a binary attack classifier: based on the model's output for a sample, was the sample in the seen training set (class 0) or the unseen test set (class 1)? The attack classifier is then applied to deletion-set samples; under ideal unlearning, all of them are classified as unseen.…”
Section: B1 Background
confidence: 99%
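The black-box attack the excerpt describes can be sketched in its simplest form: a one-feature classifier thresholding the model's top softmax confidence, fit to separate seen from unseen samples and then applied to the deletion set. The function names (`fit_confidence_attack`, `attack_predict`) and the threshold-only classifier are assumptions made here for illustration; the cited works train richer classifiers on the full output vector.

```python
import numpy as np

def fit_confidence_attack(seen_conf, unseen_conf):
    """Fit a minimal black-box membership attack: a single threshold on
    the model's top softmax confidence. Members (training samples) tend
    to receive higher confidence than non-members. Returns the threshold
    maximizing balanced accuracy on the fitting data."""
    candidates = np.sort(np.concatenate([seen_conf, unseen_conf]))
    best_t, best_acc = candidates[0], 0.0
    for t in candidates:
        # Balanced accuracy: members above threshold, non-members below.
        acc = 0.5 * (np.mean(seen_conf >= t) + np.mean(unseen_conf < t))
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def attack_predict(conf, threshold):
    """1 = predicted member (seen), 0 = predicted non-member (unseen).
    Applied to deletion-set confidences, ideal unlearning yields all 0s."""
    return (np.asarray(conf) >= threshold).astype(int)
```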