Algorithmic bias: Senses, sources, solutions (2021)
DOI: 10.1111/phc3.12760

Abstract: Data‐driven algorithms are widely used to make or assist decisions in sensitive domains, including healthcare, social services, education, hiring, and criminal justice. In various cases, such algorithms have preserved or even exacerbated biases against vulnerable communities, sparking a vibrant field of research focused on so‐called algorithmic biases. This research includes work on identification, diagnosis, and response to biases in algorithm‐based decision‐making. This paper aims to facilitate the applicati…


Cited by 92 publications (50 citation statements).
References 66 publications (76 reference statements).
“…Here we ask what fairness-related harms relate to foundation models, what sources are responsible for these harms, and how we can intervene to address them. The issues we discuss here are related to broader questions of algorithmic fairness and AI ethics [Corbett-Davies and Goel 2018; Chouldechova and Roth 2020; Hellman 2020; Fazelpour and Danks 2021], race and technology [Benjamin 2019; Gebru 2021; Field et al 2021], and the coexistence of society and technology [Abebe et al 2020].…”
Section: Introduction (mentioning; confidence: 99%)
“…Even so, it is usual for most researchers to recognize how biases can play a role in AI because they are trained upon human data, that there ought to be debiasing strategies and initiatives for more responsible AI, and that there might be conflicting notions of fairness (see Ras et al., 2018; Zhang and Bareinboim, 2018; Fernández and Fernández, 2019, p. 22; Kirchner and Larrus, 2019, p. 5; The Royal Society, 2019, p. 10; Kantarci, 2021; Mehrabi et al., 2021). Fazelpour and Danks (2021) also substantively explain how the use of predictive algorithms can preserve or even compound existing injustices, and “fairness through unawareness” almost never succeeds, so that “algorithmic bias is not a purely mathematical problem” and requires engagement with “the messy complexities of the real world”.…”
Section: Economics and AI Reconsidered (mentioning; confidence: 99%)
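
To make the quoted point about “fairness through unawareness” concrete, here is a minimal, self-contained Python sketch. It is entirely our own illustration: the synthetic variables A, Z, X, Y and the data-generating process are assumptions, not anything drawn from the cited papers. The sketch shows that dropping the protected attribute A from a model's inputs does not remove a group disparity when a correlated proxy feature Z still encodes the same information.

```python
# Hypothetical sketch: why "fairness through unawareness" can fail.
# We simulate a protected attribute A, a correlated proxy feature Z
# (e.g., neighborhood), and an outcome Y shaped by historical bias.
# A model trained WITHOUT A still reproduces the group disparity,
# because Z carries information about A.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

A = rng.integers(0, 2, n)              # protected attribute (0/1), assumed
Z = A + rng.normal(0, 0.5, n)          # proxy feature strongly tied to A
X = rng.normal(0, 1, n)                # legitimate feature
# Historically biased labels: group A=1 was systematically disadvantaged.
Y = ((X - 1.0 * A + rng.normal(0, 0.5, n)) > 0).astype(int)

# "Unaware" model: the protected attribute A is dropped from the inputs.
features = np.column_stack([X, Z])
model = LogisticRegression().fit(features, Y)
pred = model.predict(features)

# Positive-prediction rates still diverge across groups,
# even though the model never saw A directly.
for g in (0, 1):
    rate = pred[A == g].mean()
    print(f"group A={g}: positive rate = {rate:.2f}")
```

Running the script prints clearly different positive-prediction rates for the two groups, illustrating why, as the quoted passage puts it, removing the protected attribute alone “almost never succeeds”.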
“…The study of injustices that may result from the deployment of machine learning systems has largely focused on the risks of algorithmic bias [10], paying special attention to classification tasks that may inform the allocations of benefits and burdens [28, 58]. In such contexts, researchers have emphasized the risks of compounding inequalities [25] and historical injustices [40] in domains such as criminal justice [3], human resources [18], and healthcare [62].…”
Section: Machine Learning and Algorithmic Injustices (mentioning; confidence: 99%)