2022
DOI: 10.48550/arxiv.2201.06820
Preprint

Recommendation Unlearning

Abstract: Recommender systems provide essential web services by learning users' personal preferences from collected data. However, in many cases, systems also need to forget some training data. From the perspective of privacy, users desire a tool to erase the impacts of their sensitive data from the trained models. From the perspective of utility, if a system's utility is damaged by some bad data, the system needs to forget such data to regain utility. While unlearning is very important, it has not been well-considered …

Cited by 3 publications (10 citation statements)
References 28 publications
“…Without this constraint, the privacy right discussed in this paper is meaningless. To achieve this, the platform can retrain the model from scratch with the new data S′_i, or quickly unlearn the data in S_i and then fine-tune with the data S′_i [5,6,8]. However, Assumption 2 also raises a new challenge: the asynchronous changes of user policy bring intractable computation costs for the platform, since each time the user changes the disclosed data, the platform needs to update the model.…”
Section: Simplified Assumptions (mentioning)
confidence: 99%
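The strategy quoted here, quickly unlearning S_i and then fine-tuning with S′_i instead of retraining from scratch, can be sketched concretely. The toy model below is a ridge-regression "recommender" kept in sufficient-statistic form so that rows can be added and removed exactly; RidgeRecommender, learn, and unlearn are illustrative names, not APIs from the cited works [5,6,8].

```python
import numpy as np

class RidgeRecommender:
    """Toy ridge model kept as sufficient statistics (A = X^T X + lam*I,
    b = X^T y), so data contributions can be added or removed exactly."""

    def __init__(self, dim, lam=1.0):
        self.A = lam * np.eye(dim)   # running X^T X plus the regularizer
        self.b = np.zeros(dim)       # running X^T y
        self.w = np.zeros(dim)

    def learn(self, X, y):
        # Add the contribution of (X, y) and re-solve; this same call also
        # serves as the "fine-tune with S'_i" step after unlearning.
        self.A += X.T @ X
        self.b += X.T @ y
        self.w = np.linalg.solve(self.A, self.b)

    def unlearn(self, X, y):
        # Exact removal: subtract the forgotten rows' contribution and
        # re-solve, avoiding a full retrain over the remaining data.
        self.A -= X.T @ X
        self.b -= X.T @ y
        self.w = np.linalg.solve(self.A, self.b)

# Usage: train on S_i, withdraw it, then fine-tune with S'_i.
rng = np.random.default_rng(0)
Si_X, Si_y = rng.normal(size=(50, 5)), rng.normal(size=50)
Sp_X, Sp_y = rng.normal(size=(40, 5)), rng.normal(size=40)

model = RidgeRecommender(dim=5)
model.learn(Si_X, Si_y)     # initial training on the disclosed data S_i
model.unlearn(Si_X, Si_y)   # user withdraws S_i
model.learn(Sp_X, Sp_y)     # "fine-tune" with the newly disclosed S'_i
```

For this model family the unlearn-then-finetune path is exactly equivalent to retraining from scratch on S′_i, which is what makes the quick path attractive when users change their disclosed data repeatedly.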
“…These three aspects form the foundation for evaluating Machine Unlearning algorithms and are widely agreed on, as they are stated explicitly or implicitly in many works in the domain [2,3,4,6,7,8,9,22,24,27,29]. Here, we make use of the terminology as stated by Warnecke et al. in [30].…”
Section: Measuring the Success of Forgetting (mentioning)
confidence: 99%
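A minimal, hypothetical instance of such an evaluation is to compare the unlearned model against the gold standard of retraining from scratch without the forgotten data, checking both how closely the parameters match and how well the model still performs on the retained data. The harness below assumes the same exactly unlearnable ridge model as in the sketch above; it illustrates the kind of criteria being discussed, not a protocol from [30] or the other cited works.

```python
import numpy as np

# Synthetic data and a deletion request over the first 20 rows.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 8)), rng.normal(size=200)
forget, keep = np.arange(20), np.arange(20, 200)
lam, d = 1.0, X.shape[1]

def ridge(Xt, yt):
    return np.linalg.solve(Xt.T @ Xt + lam * np.eye(d), Xt.T @ yt)

w_retrain = ridge(X[keep], y[keep])          # reference: full retraining

# Exact unlearning via sufficient statistics (same model family as above).
A = X.T @ X + lam * np.eye(d) - X[forget].T @ X[forget]
b = X.T @ y - X[forget].T @ y[forget]
w_unlearn = np.linalg.solve(A, b)

# 1) Forgetting: distance to the retrained-from-scratch reference (~0 here,
#    since the removal is exact up to numerical precision).
print("param gap:", np.linalg.norm(w_unlearn - w_retrain))
# 2) Utility: predictive error on the retained data should stay comparable.
print("MSE on kept data:", np.mean((X[keep] @ w_unlearn - y[keep]) ** 2))
```

The third aspect, efficiency, would be measured by timing the unlearning update against the full retrain.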
“…When it comes to evaluation, the existing approaches can be divided into three categories. First, there are those approaches that provably guarantee perfect unlearning [1,2,3,4,8,18,27] and therefore do not need any evaluation. However, they often come with strong assumptions, making them applicable only in some specific scenarios.…”
Section: Introduction and Related Work (mentioning)
confidence: 99%
“…There are basically two kinds of motivations for inducing such a controlled "amnesia" in recommender systems: privacy and utility [4]. First, recent studies have found that it is possible for a recommendation model to leak users' sensitive information employed in its training [23].…”
Section: Introduction (mentioning)
confidence: 99%
“…In fact, the only existing work dedicated to the unlearning of recommendation models (recommendation unlearning for short) that we are aware of is RecEraser, which appeared a few months ago [4]. RecEraser belongs to the category of exact unlearning methods that hold perfect privacy guarantees but will struggle to cope with forgetting requests in batches, as we will explain in detail later (see §2.…”
Section: Introduction (mentioning)
confidence: 99%
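The batch-forgetting limitation attributed to RecEraser can be seen in a generic shard-and-retrain sketch of exact unlearning: partition the training data, fit one submodel per shard, and retrain only the shard containing a forgotten sample. RecEraser's actual design uses balanced partitions and attention-based aggregation; ShardedRidge, forget, and the plain averaging below are simplifying assumptions, not its API.

```python
import numpy as np

class ShardedRidge:
    """Shard-and-retrain exact unlearning in the spirit of RecEraser/SISA:
    one ridge submodel per data shard, predictions averaged across shards."""

    def __init__(self, X, y, n_shards=4, lam=1.0):
        self.lam = lam
        parts = np.array_split(np.arange(len(y)), n_shards)
        self.shards = [(X[p].copy(), y[p].copy()) for p in parts]
        self.ws = [self._fit(Xs, ys) for Xs, ys in self.shards]

    def _fit(self, Xs, ys):
        d = Xs.shape[1]
        return np.linalg.solve(Xs.T @ Xs + self.lam * np.eye(d), Xs.T @ ys)

    def forget(self, shard_id, rows):
        # Exact unlearning: delete the rows and retrain only the affected
        # shard; all other submodels are left untouched.
        Xs, ys = self.shards[shard_id]
        Xs, ys = np.delete(Xs, rows, axis=0), np.delete(ys, rows)
        self.shards[shard_id] = (Xs, ys)
        self.ws[shard_id] = self._fit(Xs, ys)

    def predict(self, x):
        # Plain averaging; RecEraser instead learns an attentive aggregator.
        return float(np.mean([x @ w for w in self.ws]))

# Usage: forgetting one sample retrains a single shard.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(120, 6)), rng.normal(size=120)
model = ShardedRidge(X, y, n_shards=4)
model.forget(shard_id=2, rows=[0])
print(model.predict(rng.normal(size=6)))
```

The quoted difficulty falls out of this structure: a batch of forgetting requests that happens to touch every shard forces every submodel to be retrained, approaching the cost of full retraining.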