2021
DOI: 10.48550/arxiv.2110.11794
Preprint
Federated Unlearning via Class-Discriminative Pruning

Abstract: We explore the problem of selectively forgetting categories from trained CNN classification models in federated learning (FL). Since the training data cannot be accessed globally in FL, we probe the internal influence of each channel. By visualizing the feature maps activated by different channels, we observe that channels contribute differently to different categories in image classification. Inspired by this, we propose a method for scrubbing the…
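The abstract's core observation — that individual channels contribute unevenly across classes, so the channels most specific to a target class can be pruned to forget it — can be illustrated with a minimal sketch. This is an illustration of the general idea, not the paper's actual algorithm; the scoring rule (per-class mean activation minus the mean over other classes) and the pruning ratio are assumptions for the example.

```python
import numpy as np

def class_channel_scores(feature_maps, labels, num_classes):
    """Mean activation of each channel, per class.
    feature_maps: (N, C, H, W) activations; labels: (N,) class ids."""
    per_sample = feature_maps.mean(axis=(2, 3))            # (N, C)
    scores = np.zeros((num_classes, per_sample.shape[1]))
    for c in range(num_classes):
        scores[c] = per_sample[labels == c].mean(axis=0)
    return scores

def prune_mask_for_class(scores, forget_class, ratio=0.25):
    """Boolean keep-mask that drops the channels most specific to forget_class.
    Specificity here = forget-class score minus mean score of the other classes."""
    other = np.delete(scores, forget_class, axis=0).mean(axis=0)
    specificity = scores[forget_class] - other
    k = max(1, int(ratio * scores.shape[1]))
    pruned = np.argsort(specificity)[-k:]                  # top-k class-specific channels
    mask = np.ones(scores.shape[1], dtype=bool)
    mask[pruned] = False
    return mask
```

In a federated setting, each client could compute such scores locally and only the aggregated channel scores would need to be shared, which is consistent with the paper's constraint that training data cannot be accessed globally.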

Cited by 2 publications (5 citation statements)
References 12 publications
“…In many works [1,2,6,8,10,11,18,29], retraining the model from scratch on the remaining data D r is considered the optimal solution, since it does not use the sensitive data points D f during training and achieves high performance on D r . However, for large models and datasets retraining from scratch is computationally expensive, which is why this is often considered impractical.…”
Section: Retraining
confidence: 99%
“…These three aspects form the foundation for evaluating Machine Unlearning algorithms and are widely agreed on as they are stated explicitly or implicitly in many works in the domain [2,3,4,6,7,8,9,22,24,27,29]. Here, we make use of the terminology as stated by Warnecke et al in [30].…”
Section: Measuring the Success of Forgetting
confidence: 99%
“…Existing work in this area [7,8,9,11,12] shows effectiveness only for image classification. There is a need for a generic unlearning method that can work across different application domains.…”
Section: Introduction
confidence: 99%