2024
DOI: 10.1145/3641276

Should Fairness be a Metric or a Model? A Model-based Framework for Assessing Bias in Machine Learning Pipelines

John P. Lalor,
Ahmed Abbasi,
Kezia Oketch
et al.

Abstract: Fairness measurement is crucial for assessing algorithmic bias in various types of machine learning (ML) models, including ones used for search relevance, recommendation, personalization, talent analytics, and natural language processing. However, the fairness measurement paradigm is currently dominated by fairness metrics that examine disparities in allocation and/or prediction error as univariate key performance indicators (KPIs) for a protected attribute or group. Although important and effective in assessi…
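For context on the metric-based paradigm the abstract contrasts with its model-based framework, here is a minimal illustrative sketch (not from the paper) of the kind of univariate fairness KPIs in question: a demographic parity gap for allocation and an error-rate gap for prediction error, each computed for a single binary protected attribute. Variable names and the toy data are assumptions for illustration.

```python
# Illustrative sketch only: two common univariate fairness KPIs,
# computed from model predictions and a binary protected-group indicator.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between groups (allocation disparity)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

def error_rate_difference(y_true, y_pred, group):
    """Absolute gap in misclassification rates between groups (prediction-error disparity)."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    err_a = (y_true[group == 0] != y_pred[group == 0]).mean()
    err_b = (y_true[group == 1] != y_pred[group == 1]).mean()
    return abs(err_a - err_b)

# Toy example: binary labels/predictions for two groups (0 and 1).
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 1]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, group))   # allocation KPI
print(error_rate_difference(y_true, y_pred, group))   # prediction-error KPI
```

Each function reduces group-level behavior to a single scalar KPI, which is the univariate style of measurement the paper questions in favor of a model-based assessment of bias across the ML pipeline.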

Cited by 9 publications (2 citation statements)
References: 79 publications
“…To address concerns about algorithmic fairness and bias in personalized recommendation systems, recent research has focused on developing fairness-aware models and evaluation metrics (Gao et al., 2022; Lalor et al., 2024; Zhang et al., 2023). These studies aim to mitigate bias and ensure equitable treatment across diverse user groups, promoting diversity and inclusivity in personalized recommendations.…”
Section: Personalized Information Access and User Profile Modeling: R… (mentioning)
confidence: 99%
“…Contrary to the implicit assumptions underlying various formal models of information seeking, it is posited that users exhibit bounded rationality and typically do not base their search decisions on precise estimations of search gains and costs [2, 55]. Despite the increasing attention paid to data biases and algorithm biases in the field of computing [13, 44, 47], only a few studies have focused on properly recognizing and effectively mitigating the negative effects of biases on human decision-making processes. Within the realm of cognitive biases influencing decision-making, the decoy effect reflects how users alter their in-situ preferences and judgments on presented options.…”
Section: Discussion (mentioning)
confidence: 99%