2021
DOI: 10.48550/arxiv.2103.06172
Preprint

Fairness On The Ground: Applying Algorithmic Fairness Approaches to Production Systems

Abstract: Many technical approaches have been proposed for ensuring that decisions made by machine learning systems are fair, but few of these proposals have been stress-tested in real-world systems. This paper presents an example of one team's approach to the challenge of applying algorithmic fairness approaches to complex production systems within the context of a large technology company. We discuss how we disentangle normative questions of product and policy design (like, "how should the system trade off between dif…

Cited by 8 publications (16 citation statements) | References 48 publications
“…A key consequence of the analysis presented thus far is that, subject to the assumptions detailed in section 2.3, the optimal threshold rule applied to a predictive model that outputs a continuous-valued risk score is based directly on the calibration characteristics of the model and the assumed expected costs or utilities of classification errors that encapsulate the effectiveness of the intervention and the preferences for downstream benefits and harms. As has been argued in related work [15,18,31,50], it follows that if the model is calibrated for each subgroup, the decision threshold that maximizes expected utility and net benefit for each subgroup is the same when the expected utilities associated with each classification error do not change across subgroups. We verify this claim in simulation in supplementary section A1 (Supplementary Figure A1).…”
Section: Implications For Algorithmic Fairness (citation type: mentioning)
confidence: 88%
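The threshold claim in this excerpt can be checked directly. The sketch below is a hedged illustration, not code from the cited paper or the citing one: it builds two subgroups with different score distributions but perfectly calibrated scores, assumes fixed utilities for the four classification outcomes, and confirms that the utility-maximizing threshold is the same for both groups and matches the closed-form value. All distribution parameters and utility values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative utilities for the four classification outcomes
# (assumptions for this sketch, not values from either paper).
u_tp, u_fp, u_fn, u_tn = 1.0, -0.5, -1.0, 0.0

# For a calibrated score s, predict positive when
# s*u_tp + (1-s)*u_fp >= s*u_fn + (1-s)*u_tn, which solves to:
t_star = (u_tn - u_fp) / ((u_tp - u_fn) + (u_tn - u_fp))

def calibrated_group(alpha, beta, n=200_000):
    # Scores drawn from Beta(alpha, beta); labels drawn as Bernoulli(score),
    # so the scores are perfectly calibrated by construction.
    scores = rng.beta(alpha, beta, n)
    labels = rng.random(n) < scores
    return scores, labels

def expected_utility(scores, labels, t):
    # Mean realized utility when classifying positive at threshold t.
    pred = scores >= t
    return np.mean(np.where(pred,
                            np.where(labels, u_tp, u_fp),
                            np.where(labels, u_fn, u_tn)))

thresholds = np.linspace(0.01, 0.99, 99)
# Two subgroups with very different score (and hence base-rate) distributions.
for name, (a, b) in {"group A": (2.0, 5.0), "group B": (5.0, 2.0)}.items():
    s, y = calibrated_group(a, b)
    best = thresholds[np.argmax([expected_utility(s, y, t) for t in thresholds])]
    print(f"{name}: empirical best threshold {best:.2f}, analytic t* {t_star:.2f}")
```

Because labels are Bernoulli(score), both groups are calibrated, and both recover the same analytic threshold t* = 0.20 despite their different score distributions, which is the excerpt's point.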
“…To define the full MMD, each of the three expectations in equation (30) is replaced with a weighted variant analogous to equation (31). Now, we consider the operationalization of equation (9) to penalize differences in model performance metrics.…”
Section: Supplementary Materials A: Supplementary Methods (citation type: mentioning)
confidence: 99%
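Equations (30), (31), and (9) refer to the citing paper's supplement and are not reproduced here. For orientation only: the (biased) squared MMD between two samples is a sum of three kernel expectations, and the weighted variant the excerpt describes replaces each expectation with a weighted mean. A minimal sketch under assumed choices (an RBF kernel and per-example weights normalized to sum to one; all names are illustrative):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian RBF kernel matrix between the rows of X and the rows of Y.
    sq_dists = (np.sum(X**2, axis=1)[:, None]
                + np.sum(Y**2, axis=1)[None, :]
                - 2.0 * X @ Y.T)
    return np.exp(-gamma * sq_dists)

def weighted_mmd2(X, Y, wx, wy, gamma=1.0):
    # Squared MMD with each of the three kernel expectations replaced by a
    # weighted mean:  E_w[k(x,x')] - 2 E_w[k(x,y)] + E_w[k(y,y')].
    wx = wx / wx.sum()
    wy = wy / wy.sum()
    return (wx @ rbf_kernel(X, X, gamma) @ wx
            - 2.0 * wx @ rbf_kernel(X, Y, gamma) @ wy
            + wy @ rbf_kernel(Y, Y, gamma) @ wy)

# Uniform weights recover the ordinary biased MMD^2 estimator.
rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 3))
Y = rng.normal(0.5, 1.0, size=(500, 3))
print(weighted_mmd2(X, Y, np.ones(500), np.ones(500)))
```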
“…It is therefore difficult to, as some have claimed, "disentangle normative questions of product design... from empirical questions of system implementation" [5], given the ways in which practitioners' work practices are shaped by the organizational contexts in which they are embedded [e.g., 44,47,63].…”
Section: Fairness Work In Organizational Contexts (citation type: mentioning)
confidence: 99%
“…In the interest of reflexivity [16,43], we acknowledge that our perspectives and approaches to research are shaped by our own experiences and positionality. Specifically, we are researchers living and working in the U.S., primarily working in industry, with years of experience working closely with AI practitioners on projects related to the fairness of AI systems. In addition, we come from a mix of disciplinary backgrounds, including AI and HCI, which we have drawn on to conduct prior research into sociotechnical approaches to identifying, assessing, and mitigating fairness-related harms caused by AI systems.…”
Section: Positionality (citation type: mentioning)
confidence: 99%