Proceedings of the Web Conference 2021
DOI: 10.1145/3442381.3449901
Maximizing Marginal Fairness for Dynamic Learning to Rank

Abstract: Rankings, especially those in search and recommendation systems, often determine how people access information and how information is exposed to people. How to balance the relevance and fairness of information exposure is therefore considered one of the key problems for modern IR systems. Because conventional ranking frameworks that myopically sort documents by relevance inevitably introduce unfair result exposure, recent studies on ranking fairness mostly focus on dynamic ranking paradigms where …

Cited by 35 publications (10 citation statements)
References 34 publications
“…Secondly, in-processing methods aim at encoding fairness as part of the objective function, typically as a regularizer [1,4]. Finally, post-processing methods tend to modify the presentation of the results, e.g., re-ranking through linear programming [25,43,53] or multi-armed bandits [5]. However, there is no free lunch: imposing fairness constraints on the main learning task introduces a trade-off between these objectives, which has been asserted in several studies [22,23,32,55]; e.g., Dutta et al. [12] showed that, because of noise on the underrepresented groups, a trade-off between accuracy and equality of opportunity exists.…”
Section: Related Work 2.1 Fairness in Recommendation
confidence: 99%
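The in-processing idea this statement describes, fairness encoded as a regularizer on top of the relevance objective, can be sketched minimally as below. The loss functions, the softmax-exposure disparity term, and the weight `lam` are illustrative assumptions, not the formulation of the cited papers:

```python
import numpy as np

def listwise_loss(scores, relevance):
    # Softmax cross-entropy between predicted scores and graded relevance.
    p = np.exp(scores - scores.max())
    p /= p.sum()
    q = relevance / relevance.sum()
    return -np.sum(q * np.log(p + 1e-12))

def exposure_disparity(scores, groups):
    # Squared gap in mean softmax "exposure" between two groups (labels 0/1).
    p = np.exp(scores - scores.max())
    p /= p.sum()
    return (p[groups == 0].mean() - p[groups == 1].mean()) ** 2

def fair_objective(scores, relevance, groups, lam=1.0):
    # In-processing: relevance loss plus a fairness regularizer weighted by lam.
    return listwise_loss(scores, relevance) + lam * exposure_disparity(scores, groups)
```

With `lam = 0` this reduces to the plain relevance loss; increasing `lam` trades relevance fit against group-exposure parity, which is exactly the trade-off the quoted studies discuss.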
“…However, such scores are rarely available in real-world settings. For example, machine-learned models or other statistical techniques used to estimate relevance scores are often uncertain about the relevance of items for various reasons: biased or noisy feedback, the initial unavailability of data [119,164], and platform updates in dynamic settings [126]. While such estimation noise (or bias) matters for all algorithmic ranking or recommendation challenges, it is especially important to consider for fair ranking algorithms, as we illustrate below.…”
Section: Consequences of Uncertainty
confidence: 99%
“…Secondly, in-processing methods aim at encoding fairness as part of the objective function, typically as a regularizer [1,6,24,41]. Finally, post-processing methods modify the presentation of the results, e.g., by re-ranking through linear programming [40,55,68] or multi-armed bandits [8]. Based on the characteristics of the recommender system itself, there have also been a few works related to multi-sided fairness in multi-stakeholder systems [7,22].…”
Section: Fairness in Recommendation
confidence: 99%
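The post-processing route this statement mentions, re-ranking through linear programming, can be illustrated with a toy probabilistic-ranking LP. The doubly stochastic ranking matrix, the position-bias vector, and the equal-mean-exposure constraint below are illustrative assumptions in the spirit of such formulations, not the exact method of the cited works:

```python
import numpy as np
from scipy.optimize import linprog

def fair_rerank(relevance, groups, position_bias):
    """Solve an LP for a doubly stochastic matrix P, where P[i, j] is the
    probability of placing item i at rank j, maximizing expected utility
    while equalizing the mean exposure of two groups (labels 0/1)."""
    n = len(relevance)
    # Objective: maximize sum_ij relevance[i] * position_bias[j] * P[i, j]
    # (linprog minimizes, so negate).
    c = -np.outer(relevance, position_bias).ravel()

    A_eq, b_eq = [], []
    # Each item's placement probabilities sum to 1 (row sums).
    for i in range(n):
        row = np.zeros(n * n); row[i * n:(i + 1) * n] = 1.0
        A_eq.append(row); b_eq.append(1.0)
    # Each rank is filled with total probability 1 (column sums).
    for j in range(n):
        row = np.zeros(n * n); row[j::n] = 1.0
        A_eq.append(row); b_eq.append(1.0)
    # Fairness: mean exposure of group 0 equals mean exposure of group 1,
    # where exposure of item i is sum_j P[i, j] * position_bias[j].
    row = np.zeros(n * n)
    for i in range(n):
        sign = 1.0 if groups[i] == 0 else -1.0
        row[i * n:(i + 1) * n] = sign * position_bias / (groups == groups[i]).sum()
    A_eq.append(row); b_eq.append(0.0)

    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0.0, 1.0)] * (n * n), method="highs")
    return res.x.reshape(n, n)
```

The uniform matrix P = 1/n is always feasible here, so the solver returns the relevance-optimal ranking distribution among those with equalized group exposure, which is the essence of re-ranking under a fairness constraint.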