Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems
DOI: 10.1145/3544548.3581227
Fairness Evaluation in Text Classification: Machine Learning Practitioner Perspectives of Individual and Group Fairness

Abstract: Mitigating algorithmic bias is a critical task in the development and deployment of machine learning models. While several toolkits exist to aid machine learning practitioners in addressing fairness issues, little is known about the strategies practitioners employ to evaluate model fairness and what factors influence their assessment, particularly in the context of text classification. Two common approaches to evaluating the fairness of a model are group fairness and individual fairness. We run a study with Ma…

Cited by 9 publications (1 citation statement)
References 57 publications (78 reference statements)
“…A number of studies addressed the question of what practitioners need from responsible AI by asking them directly. ML practitioners call for context-specific tools [28] or ways of making contextual information evident in existing tools [3]. They ask for tools that fit into their resource constraints, as ML practices outside of big tech may not have the bandwidth to carry out some of the responsible AI investigations of larger companies [29].…”
Section: Related Work 2.1 Current Responsible AI and Its Shortcomings
confidence: 99%