2021
DOI: 10.1007/s10676-021-09608-9

Non-empirical problems in fair machine learning

Abstract: The problem of fair machine learning has drawn much attention over the last few years and the bulk of offered solutions are, in principle, empirical. However, algorithmic fairness also raises important conceptual issues that would fail to be addressed if one relies entirely on empirical considerations. Herein, I will argue that the current debate has developed an empirical framework that has brought important contributions to the development of algorithmic decision-making, such as new techniques to discover an…

Cited by 4 publications (1 citation statement)
References 17 publications
“…Likewise, Mittelstadt et al (2023) pointed out how "the majority of measures and methods to mitigate bias and improve fairness in algorithmic systems have been built in isolation from policy and civil societal contexts and lack serious engagement with philosophical, political, legal, and economic theories of equality and distributive justice", and proposed to orient future discussion more towards substantive equality of opportunity and away from strict egalitarianism by default. The issue of engineering fairness is, without doubt, challenging (Scantamburlo, 2021), and likely to require domain-specific approaches (Chen et al, 2023b) and the ability to distinguish whether and when to use AI (Lin et al, 2020), or how to enhance and extend human capabilities with AI (human-centered AI) (Xu, 2019; Garibay et al, 2023). A paradigmatic case is presented in Silberzahn and Uhlmann (2015), where 29 teams of researchers approached the same research question (about football players' skin colour and red cards) on the same dataset with a wide array of analytical techniques, and obtained highly varied results.…”
Section: Introducing Fair-AI (citation type: mentioning)
confidence: 99%