2019
DOI: 10.1080/1369118x.2019.1573912

Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse

Cited by 338 publications (202 citation statements)
References 34 publications
“…Aside from these problems, the very framing of Fair-ML can also be criticised on the grounds that it centres the decision-maker and assumes the legitimacy of their power to make decisions based on algorithms, including choosing which contestable assumptions to incorporate into them [27,40]. In many cases such legitimacy is rightly challenged.…”
Section: On the Legitimacy of Decision-Makers' Normative Assumptions
confidence: 99%
“…Benchmark dataset curation frequently involves supplementing or highlighting data from a specific population that is underrepresented in previous datasets. Efforts to increase representation of this group can lead to tokenism and exploitation, compromise privacy, and perpetuate marginalization through population monitoring and targeted violence [22,25,35]. And the method through which companies pursue better representation can be ethically questionable.…”
Section: Tension 1: Privacy and Representation
confidence: 99%
“…how systems of power and oppression give rise to qualitatively different experiences for individuals holding multiply marginalized identities [25].…”
Section: Tension
confidence: 99%
“…This component illustrated a single dimension of algorithmic harm: erroneous results. Conversations with our partner organizations and participants emphasized that while such demonstrations are important, they risk promoting "single axis thinking" [32] about algorithmic systems, in which a focus on technical errors in a problematic technology diverts attention from the social systems that produce both the technology's inequitable effects and the narratives that justify its use. Surveillance systems that work as intended still produce undesirable effects and reproduce patterns of discrimination [35].…”
Section: Non-technical Measures Are Powerful Steps Toward Algorithmic…
confidence: 99%