2022
DOI: 10.1145/3492853

Studying Up Machine Learning Data

Abstract: Research in machine learning (ML) has argued that models trained on incomplete or biased datasets can lead to discriminatory outputs. In this commentary, we propose moving the research focus beyond bias-oriented framings by adopting a power-aware perspective to "study up" ML datasets. This means accounting for historical inequities, labor conditions, and epistemological standpoints inscribed in data. We draw on HCI and CSCW work to support our argument, critically analyze previous research, and point at two co…

Cited by 61 publications (17 citation statements). References 63 publications (50 reference statements).
“…The emerging field of algorithmic reparations aims to address such issues of algorithmic fairness and discrimination by developing new methods that consider the structural conditions of oppression and inequality. This is in line with work that advocates for moving away from the narrow idea of “bias” toward a more robust conceptual, computational, and historical modeling of “power” in algorithms and machine learning (D’Ignazio and Klein, 2020; Miceli et al., 2022). Davis et al. (2021) suggest that a reparative approach using algorithms can contribute to redressing past harms by utilizing the principles of intersectionality and reparations.…”
Section: Algorithmic Reparations (supporting)
confidence: 80%
“…Older adults' interests are thus far not well represented in the larger AI and data privacy policy discourse (Stypińska, 2021; WHO, 2022). It is critical that these interests be surfaced and represented, given the diversity of values at stake, the demand for data and its commercialization, and the range of harms that have been identified among other marginalized communities (Green, 2021; Greene et al., 2019; Hoffmann, 2019; Miceli et al., 2022). Optional comments offered by 38% of our participants provide some additional insight into concerns about artificial companion robots that are largely consistent with those expressed by gerontechnologists and geriatric care professionals (Berridge et al., 2021; Wangmo et al., 2019).…”
Section: Discussion (mentioning)
confidence: 99%
“…When machine learning systems use such data sets, biases that are very difficult to detect may occur. The end result may be that certain groups suffer discrimination (Miceli et al., 2022). It should be the goal of AI and Law researchers to minimize the extent to which machine learning uses “compromised data” to make discriminatory automated decisions.…”
Section: Problems Related To the Use Of Machine Learning In Making Le... (mentioning)
confidence: 99%