Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI 2019)
DOI: 10.1145/3290605.3300830

Improving Fairness in Machine Learning Systems

Abstract: The potential for machine learning (ML) systems to amplify social inequities and unfairness is receiving increasing popular and academic attention. A surge of recent work has focused on the development of algorithmic tools to assess and mitigate such unfairness. If these tools are to have a positive impact on industry practice, however, it is crucial that their design be informed by an understanding of real-world needs. Through 35 semi-structured interviews and an anonymous survey of 267 ML practitioners, we co…

Cited by 571 publications (496 citation statements)
References 57 publications

“…Finally, it is crucial to develop ways to systematically evaluate collaborative interfaces and to investigate the implications of designing algorithmic interactions with humans [82,83]. While an interpretability-first approach could assist in highlighting fairness and bias issues in data or models [34,38], it could also introduce unwanted biases by guiding the user towards what the model has learned [4]. It is thus insufficient to limit the evaluation of a system to measures of efficiency and accuracy.…”
Section: Discussion (mentioning)
confidence: 99%
“…The first action of the fairness pipeline is to clearly understand the machine learning process and its consequences to fairness in decision making. The challenge is how to facilitate such an understanding as many practitioners do not fully recognize how every step in the process could potentially lead to biased decision making [21]. To address this, we propose that a fair decision-making tool should take proactive action to help decision makers understand the possible unfairness at each machine learning stage, by providing an overview with a step-by-step workflow to guide users to examine different notions of fairness.…”
Section: Understand (mentioning)
confidence: 99%
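The "different notions of fairness" referenced in the statement above can be made concrete with simple group metrics. The following is a minimal sketch, not taken from the cited paper, assuming NumPy arrays of binary labels, predictions, and a binary group indicator (all hypothetical names); it computes two common group-fairness notions that can disagree on the same predictions, which is one reason a step-by-step workflow asks users to examine each notion separately.

# Minimal illustrative sketch (not from the cited paper): two common
# group-fairness notions, assuming arrays y_true, y_pred and a binary
# group indicator `group` (hypothetical names).
import numpy as np

def demographic_parity_difference(y_pred, group):
    # Gap in positive-prediction rates between the two groups.
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_difference(y_true, y_pred, group):
    # Gap in true-positive rates (recall) between the two groups.
    tpr0 = y_pred[(group == 0) & (y_true == 1)].mean()
    tpr1 = y_pred[(group == 1) & (y_true == 1)].mean()
    return abs(tpr0 - tpr1)

# Toy data on which the two notions disagree.
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))         # 0.25
print(equal_opportunity_difference(y_true, y_pred, group))  # 0.0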
“…There has been development of semi-automated tools to help practitioners detect subgroup biases [2,3,4,7,9,20,21,22]. However, these tools are often designed in isolation from users [18]. With an insufficient understanding of users, they may be divorced from user needs and expectations [19].…”
Section: Background and Related Work (mentioning)
confidence: 99%
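As a rough illustration of what such a semi-automated subgroup check does (a sketch under assumed column names, not any specific tool's API), the snippet below disaggregates error rates by subgroup and flags groups that deviate from the overall rate by more than a tolerance.

# Illustrative sketch of a disaggregated subgroup check; the DataFrame
# columns "y_true", "y_pred", and "subgroup" are hypothetical names.
import pandas as pd

def flag_subgroup_bias(df, tolerance=0.05):
    # Per-subgroup error rates, flagged when they deviate from the
    # overall error rate by more than `tolerance`.
    df = df.assign(error=(df["y_true"] != df["y_pred"]).astype(int))
    overall = df["error"].mean()
    report = df.groupby("subgroup")["error"].agg(["mean", "count"])
    report["flagged"] = (report["mean"] - overall).abs() > tolerance
    return overall, report

df = pd.DataFrame({
    "y_true":   [1, 0, 1, 1, 0, 0, 1, 0],
    "y_pred":   [1, 0, 0, 1, 1, 1, 1, 0],
    "subgroup": ["A", "A", "A", "A", "B", "B", "B", "B"],
})
overall, report = flag_subgroup_bias(df)
print(f"overall error rate: {overall:.3f}")
print(report)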
“…Using demographic information for model auditing, however, often raises privacy issues. Coarse-grained demographic information is often less sensitive and could be employed for bias analysis [18].…”
Section: Design Considerations (mentioning)
confidence: 99%
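A minimal sketch of that idea (assumed column names, not drawn from the paper): a fine-grained sensitive attribute such as exact age can be replaced by coarse bands before running the same disaggregated error analysis, so the audit data carries less re-identifying detail.

# Illustrative sketch: coarse age bands instead of exact ages for a
# per-group error comparison; all column names are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "age":    [19, 23, 34, 41, 47, 58, 63, 72],
    "y_true": [1,  0,  1,  1,  0,  0,  1,  0],
    "y_pred": [1,  1,  1,  0,  0,  1,  1,  0],
})

# Bucket exact ages into broad bands (less sensitive, still auditable).
df["age_band"] = pd.cut(df["age"], bins=[0, 30, 50, 120],
                        labels=["<30", "30-49", "50+"])
df["error"] = (df["y_true"] != df["y_pred"]).astype(int)
print(df.groupby("age_band", observed=True)["error"].mean())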