Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence 2021
DOI: 10.24963/ijcai.2021/137

Learning with Selective Forgetting

Abstract: Lifelong learning aims to train a highly expressive model for a new task while retaining all knowledge for previous tasks. However, many practical scenarios do not always require the system to remember all of the past knowledge. Instead, ethical considerations call for selective and proactive forgetting of undesirable knowledge in order to prevent privacy issues and data leakage. In this paper, we propose a new framework for lifelong learning, called Learning with Selective Forgetting, which is to update a mod…
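As a rough sketch of the setting the abstract describes (the abstract is cut off above, so nothing below reflects the paper's actual algorithm; all names are illustrative), each incoming task can be modeled as a specification of what to learn, what to keep, and what to forget:

```python
from dataclasses import dataclass
from typing import Set

@dataclass
class SelectiveForgettingTask:
    """Per-task specification implied by the abstract: alongside the new
    task's data, the learner is told which previously learned classes to
    preserve and which to selectively forget. Names are illustrative."""
    task_id: int
    preserved_classes: Set[int]  # past knowledge the model must retain
    deleted_classes: Set[int]    # past knowledge the model must proactively forget
```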

Cited by 18 publications (16 citation statements) | References 12 publications

“…Does not penalize better utility: We need evaluation methods to not penalize better utility. Consider removing a random subset of samples and computing error on the deletion set [27,35,56], with better unlearning models having a higher error. Such an evaluation can not differentiate between an unlearnt model that generalizes well to the (now unseen) deletion set versus one that does not unlearn at all.…”
Section: Approach (citation type: mentioning)
confidence: 99%
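To make the confound in that quote concrete, here is a minimal sketch (assuming scikit-learn-style models with a `predict` method; the retrained-from-scratch reference gap is one common remedy, not necessarily what the cited works [27,35,56] propose):

```python
import numpy as np

def deletion_set_error(model, X_del, y_del):
    """The naive metric the quote criticizes: raw error on deleted samples."""
    return np.mean(model.predict(X_del) != y_del)

def unlearning_gap(unlearned_model, retrained_model, X_del, y_del):
    """Compare against a model retrained from scratch without the deleted
    samples, instead of reading raw error: a model that merely generalizes
    well to the (now unseen) deletion set matches the reference (gap near 0)
    rather than being rewarded for a high error it would have had anyway."""
    return deletion_set_error(unlearned_model, X_del, y_del) \
         - deletion_set_error(retrained_model, X_del, y_del)
```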
“…The usefulness of a test is established by how reliably it quantifies the remaining information after applying any inexact unlearning method. We compare existing evaluation methods which use random sampling [27,35,39,45,51,56,61,68] and state consistent observations. Primarily, the IC test is the only test which reliably determines that exactly unlearning the final layer (1 layer-EU) does not remove generalized properties.…”
Section: Metric Comparisons (citation type: mentioning)
confidence: 99%
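One plausible reading of the "1 layer-EU" baseline mentioned in that quote is to discard the final classifier and retrain it exactly, from scratch, on the retained data only. The sketch below assumes a PyTorch model whose classifier is exposed as `model.head` (a naming assumption, not an API of the cited work):

```python
import torch
import torch.nn as nn

def final_layer_exact_unlearn(model, retain_loader, epochs=5, lr=1e-3):
    """Exactly unlearn the final layer: freeze the backbone, re-initialize
    the classifier head, and retrain it on retained data only."""
    for p in model.parameters():
        p.requires_grad = False
    # Fresh head; its new parameters require gradients by default.
    model.head = nn.Linear(model.head.in_features, model.head.out_features)
    opt = torch.optim.Adam(model.head.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in retain_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model
```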
“…The above methods all consider the MU problem in general where the preserved dataset (e.g., data except the removed ones) is available, which is not the case in continual learning. Recently, a particularly relevant study first considers MU in the context of continual learning (Shibata et al., 2021). However, their problem definition aims to make the model predict as wrongly as possible on the removed data, which does not in general protect the user's privacy.…”
Section: Related Work (citation type: mentioning)
confidence: 99%
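A minimal sketch of the forgetting objective this quote attributes to Shibata et al. (2021) — keep task loss low on retained data while driving loss up on removed data. The loss shape and the trade-off weight `lam` are assumptions for illustration, not the paper's exact formulation:

```python
import torch.nn.functional as F

def forget_by_misprediction_loss(model, retain_batch, forget_batch, lam=1.0):
    """'Forget by mispredicting': minimize task loss on retained data while
    maximizing loss on the removed data, so the model predicts as wrongly
    as possible on what it must forget. `lam` balances the two terms."""
    xr, yr = retain_batch
    xf, yf = forget_batch
    retain_loss = F.cross_entropy(model(xr), yr)
    forget_loss = F.cross_entropy(model(xf), yf)
    return retain_loss - lam * forget_loss  # gradient ascent on the forget term
```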
“…We call this novel problem continual learning and private unlearning (CLPU). To the best of our knowledge, only one previous paper discusses a similar problem setting pertaining to selective forgetting in continual learning (Shibata et al., 2021). However, the problem in that paper is different from CLPU as it defines forgetting as maximally degrading the performance on a task.…”
Section: Introduction (citation type: mentioning)
confidence: 99%