2021
DOI: 10.1007/s11257-020-09285-1
A flexible framework for evaluating user and item fairness in recommender systems

Abstract: One common characteristic of research works focused on fairness evaluation (in machine learning) is that they call for some form of parity (equality), either in treatment (ignoring information about users' memberships in protected classes during training) or in impact (enforcing proportionally beneficial outcomes for users in different protected classes). In the recommender systems community, fairness has been studied with respect to both users' and items' memberships in protected classes defined …

Cited by 57 publications (63 citation statements) | References 76 publications
“…As a consequence, there is no such notion of groups. MADR (Mean Average Difference-Ranking) and GCE (General Cross-Entropy) [9] are also measures applicable to situations where a sensitive attribute is defined, and recommendation results are evaluated according to it. These are also not applicable in our context.…”
Section: Fairness in Recommendations (mentioning, confidence: 99%)
“…Studies also show empirically that popular recommendation algorithms work better for males since many datasets are male-user-dominated [13]. One way to measure gender and age fairness of different recommendation models is based on generalized cross entropy (GCE) [10,11]; specifically, this work shows that a simple popularity-based algorithm provides better recommendations to male users and younger users, while on the opposite side, uniform random recommendations and collaborative filtering algorithms provide better recommendations to female users and older users [11]. Lin et al [29] study how different recommendation algorithms change the preferences for specific item categories (e.g., Action vs.…”
Section: Fairness/Bias in Recommendation Systems (mentioning, confidence: 99%)
“…According to their definition of pairwise fairness, assuming that two items have received the same user engagement, then both protected groups should have the same likelihood of a clicked item being ranked above another relevant unclicked item. Deldjoo et al [26] proposed a generalized cross entropy measure of fairness that was based on a fair distribution of a model's performance over items or users. Yao et al [27] propose fairness metrics so that the error is fairly distributed across users.…”
Section: Fair Recommender Systems (mentioning, confidence: 99%)
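The generalized cross-entropy (GCE) measure referenced in the excerpts above compares the distribution of a recommender's performance across user or item groups against a target "fair" distribution, yielding zero when the two match. A minimal sketch of such a measure, with the formula reconstructed from the descriptions cited here (the function name, normalization choices, and default β = 2 are illustrative assumptions, not the authors' reference implementation):

```python
import numpy as np

def generalized_cross_entropy(p_model, p_fair, beta=2.0):
    """Illustrative GCE-style fairness score.

    p_model : per-group share of the model's performance (e.g., recall per group)
    p_fair  : target "fair" distribution over the same groups
    beta    : shape parameter, must not be 0 or 1 (default is an assumption)

    Returns 0 when the performance distribution equals the fair target;
    nonzero values indicate a deviation from the fair distribution.
    """
    p_model = np.asarray(p_model, dtype=float)
    p_fair = np.asarray(p_fair, dtype=float)
    # Normalize both inputs so they are proper probability distributions.
    p_model = p_model / p_model.sum()
    p_fair = p_fair / p_fair.sum()
    # GCE = 1/(beta*(1-beta)) * (sum_j p_fair_j^beta * p_model_j^(1-beta) - 1)
    return (np.sum(p_fair**beta * p_model**(1.0 - beta)) - 1.0) / (beta * (1.0 - beta))
```

With β = 2 this reduces to a scaled chi-squared divergence between the fair target and the observed performance distribution, so a popularity-biased model whose accuracy concentrates on one group (e.g., performance shares [0.8, 0.2] against a uniform target [0.5, 0.5]) produces a nonzero score, while a perfectly balanced model scores exactly zero.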