2021
DOI: 10.1017/can.2021.17

Proceed with Caution

Abstract: It is becoming more common that the decision-makers in private and public institutions are predictive algorithmic systems, not humans. This article argues that relying on algorithmic systems is procedurally unjust in contexts involving background conditions of structural injustice. Under such nonideal conditions, algorithmic systems, if left to their own devices, cannot meet a necessary condition of procedural justice, because they fail to provide a sufficiently nuanced model of which cases count as relevantly…

Cited by 22 publications (10 citation statements). References 36 publications.
“…At least since the publication of ProPublica's assessment of the COMPAS algorithm (Angwin et al., 2016), used by many US states to inform pre-trial decisions, and the study on skin-type bias by Buolamwini and Gebru (2018), there have been growing concerns that the deployment of ML algorithms used to make or inform consequential decisions reinforces structural inequalities. In turn, philosophers, computer scientists, and researchers from cognate fields are paying increasing attention to the mechanisms through which ML algorithms disadvantage certain social groups (Barocas et al., 2019; Fazelpour & Danks, 2021; Zimmermann & Lee-Stronach, 2021).…”
Section: Case Study: Mitigating Bias in Pain Diagnosis
confidence: 99%
“…While a thorough engagement with the substantial literature on the nuances of distributive justice is beyond the scope of this paper, it suffices to say that most fairness constraints and metrics are deeply rooted in the paradigm of distributive justice, broadly construed. Except for a few proposals for taking procedural fairness seriously [24,29,56], algorithmic fairness is firmly embedded in the distributive paradigm.…”
Section: Algorithmic Fairness: Distributive Approach
confidence: 99%
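To make concrete what the quoted passage means by fairness metrics "rooted in the paradigm of distributive justice," here is a minimal sketch of one such metric, the demographic parity difference; the function name and the toy data are illustrative assumptions, not drawn from the cited papers.

import numpy as np

def demographic_parity_difference(y_pred, group):
    # A distributive metric: it compares how a binary "good" (a positive
    # prediction, e.g., a loan approval) is allocated across two groups,
    # saying nothing about the procedure that produced the predictions.
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # positive-prediction rate, group 0
    rate_1 = y_pred[group == 1].mean()  # positive-prediction rate, group 1
    return abs(rate_0 - rate_1)

# Toy example: eight applicants split across two groups.
y_pred = [1, 1, 0, 1, 0, 0, 0, 1]
group = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, group))  # 0.5, a large gap

The contrast the citing authors draw is that metrics like this score only the distribution of outcomes; procedural approaches, such as the one in Zimmermann and Lee-Stronach [56], ask instead whether the decision process itself is just.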
“…Hoffmann [29] draws heavily on legal studies in discussing the limits of the goods included in the prevailing paradigm of distributive justice in the literature. Zimmermann and Lee-Stronach [56] draw attention to the relevance of procedural justice to algorithmic systems and argue that reliance on algorithmic systems is procedurally unjust in contexts involving background conditions of structural injustice. The arguments presented in this paper are novel in that Grgić-Hlača et al. [24] do not engage with the literature on structural injustice.…”
Section: Algorithmic Fairness and Structural Injustice
confidence: 99%
“…The use of predictive machine learning algorithms (henceforth ML algorithms) to make decisions, or to inform decision-making processes, in both public and private settings is already widespread and promises to become increasingly common. They are used to decide who should be promoted or fired, who should get a loan or an insurance premium (and at what cost), and what publications appear on your social media feed [47, 49], or even to map crime hot spots and to predict the risk of recidivism of past offenders [66]. When compared to human decision-makers, ML algorithms could, at least theoretically, present certain advantages, especially when it comes to issues of discrimination.…”
Section: Introduction
confidence: 99%