2021
DOI: 10.1016/j.ipm.2021.102519

Bias-Aware Hierarchical Clustering for detecting the discriminated groups of users in recommendation systems

Abstract: One challenge for the modern recommendation systems is the Tyranny of Majority: the generated recommendations are often optimized for the mainstream trends, so that the minority preference groups remain discriminated against. Moreover, most modern recommendation techniques are characterized as black-box systems. Given a lack of understanding of the dataset characteristics and insufficient diversity of represented individuals, such approaches inevitably lead to amplifying hidden data biases and existing disparities. In …

Cited by 19 publications (10 citation statements) | References 28 publications

“…• Recommendation quality measures for groups: In a number of works on user-related fairness, the goal is to ensure that no group of users is discriminated against by receiving recommendations of lower quality than another (privileged or majority) group. To evaluate this aspect, common accuracy measures can be applied and compared across groups, for example, NDCG [117,35] for user groups, or relevance [68] for item groups.
• Recommendation quality measures for individuals: In works that aim at individual fairness (these are mostly works on group recommendation), a common goal is to make recommendations that are acceptable for all group members, i.e., where the preferences of none of the group members are ignored.…”
Section: Methods (mentioning, confidence: 99%)
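
The first bullet in the excerpt above describes comparing an accuracy measure such as NDCG across user groups. The following is a minimal Python sketch of that evaluation step; the function names and input dictionaries are hypothetical illustrations, not taken from the cited works.

```python
import math
from collections import defaultdict

def ndcg_at_k(relevances, k=10):
    """NDCG@k for one user's ranked list of graded relevance scores."""
    dcg = sum(r / math.log2(i + 2) for i, r in enumerate(relevances[:k]))
    ideal = sorted(relevances, reverse=True)[:k]
    idcg = sum(r / math.log2(i + 2) for i, r in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

def ndcg_per_group(relevance_by_user, group_of_user, k=10):
    """Average NDCG@k per user group; a persistent gap between groups
    suggests one group receives lower-quality recommendations."""
    scores = defaultdict(list)
    for user, rels in relevance_by_user.items():
        scores[group_of_user[user]].append(ndcg_at_k(rels, k))
    return {g: sum(v) / len(v) for g, v in scores.items()}

# Toy usage: relevance lists for three users in two (assumed) groups.
rels = {"u1": [3, 2, 0], "u2": [0, 0, 1], "u3": [2, 2, 2]}
groups = {"u1": "majority", "u2": "minority", "u3": "majority"}
print(ndcg_per_group(rels, groups, k=3))
```
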
“…In terms of fairness towards users, the technical proposal in [75], for example, aims to serve users with recommendations that reflect their past diversity preferences with respect to movie genres. An approach towards fairness to groups is proposed in [117]. Here, groups are not identified by their protected attribute, but by the recommendation accuracy that is achieved (using any metric) for the members of the group.…”
Section: Application Domains and Datasets (mentioning, confidence: 99%)
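
As a rough illustration of forming user groups by achieved accuracy rather than by a protected attribute, the sketch below buckets users into quantiles of a per-user metric. This is an assumed simplification for illustration, not the actual method of [117].

```python
import numpy as np

def accuracy_quantile_groups(metric_by_user, n_groups=4):
    """Bucket users into n_groups by a per-user accuracy metric
    (e.g., NDCG); the lowest bucket flags a candidate disadvantaged group."""
    users = list(metric_by_user)
    scores = np.array([metric_by_user[u] for u in users])
    # Interior quantile edges, e.g. quartiles when n_groups=4.
    edges = np.quantile(scores, np.linspace(0, 1, n_groups + 1)[1:-1])
    return {u: int(b) for u, b in zip(users, np.digitize(scores, edges))}
```
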
“…To identify similarities by period, theme, or researcher, they cluster the works in the analysed collection using Euclidean distance similarity, Partitioning Around Medoids, and K-means. Misztal-Radecka and Indurkhya [48] describe a Bias-Aware Hierarchical Clustering algorithm that improves recommendation systems by identifying clusters of users with unsuitable recommendations. It is a variation of K-means where splitting depends on high biases instead of minimum variance.…”
Section: Related Work (mentioning, confidence: 99%)
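
The excerpt characterizes the algorithm as a K-means variant that splits on high bias rather than minimum variance. The sketch below is a minimal bisecting-style illustration of that idea, not the authors' exact procedure; in particular, taking a cluster's mean recommendation error as its "bias" is an assumption made here for concreteness.

```python
import numpy as np
from sklearn.cluster import KMeans

def bias_aware_split(X, errors, threshold=0.5, min_size=10):
    """Recursively bisect clusters whose bias (assumed here: the mean
    per-user recommendation error) exceeds `threshold`.
    X: (n_users, n_features) feature matrix; errors: (n_users,) array.
    Returns a list of index arrays, one per leaf cluster."""
    def recurse(idx):
        if errors[idx].mean() <= threshold or len(idx) < 2 * min_size:
            return [idx]  # low-bias or too-small cluster: stop splitting
        labels = KMeans(n_clusters=2, n_init=10).fit_predict(X[idx])
        left, right = idx[labels == 0], idx[labels == 1]
        if len(left) < min_size or len(right) < min_size:
            return [idx]  # degenerate split: keep the cluster whole
        return recurse(left) + recurse(right)
    return recurse(np.arange(len(X)))
```

High-bias leaves that can no longer be split isolate user groups whose recommendations are systematically poor, which matches the stated goal of identifying clusters of users with unsuitable recommendations.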