2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT 2022)
DOI: 10.1145/3531146.3533132

Towards Intersectional Feminist and Participatory ML: A Case Study in Supporting Feminicide Counterdata Collection

Abstract: Data ethics and fairness have emerged as important areas of research in recent years. However, much work in this area focuses on retroactively auditing and "mitigating bias" in existing, potentially flawed systems, without interrogating the deeper structural inequalities underlying them. There are not yet examples of how to apply feminist and participatory methodologies from the start, to conceptualize and design machine learning-based tools that center and aim to challenge power inequalities. Our work targets…

Cited by 24 publications (12 citation statements) · References 38 publications (28 reference statements)

Citation statements:
“…For instance, test inputs may not be sufficiently representative of real-world settings [53,72], and performance metrics may not align with users' preferences and perceptions of ideal model performance [47,53,66]. To address this gap, a growing body of work in HCI aims to design performance evaluations grounded in downstream deployment contexts and the needs and goals of downstream stakeholders (e.g., [18,57,80,81]). This typically involves exploring users' domain-specific information needs [19,46], directly working with downstream stakeholders to collaboratively design evaluation datasets and metrics [80], and designing tools that allow users to specify their own test datasets and performance metrics [18,27,28,55,81].…”
Section: Designing Performance Evaluations (mentioning, confidence: 99%)
“…To address this gap, a growing body of work in HCI aims to design performance evaluations grounded in downstream deployment contexts and the needs and goals of downstream stakeholders (e.g., [18,57,80,81]). This typically involves exploring users' domain-specific information needs [19,46], directly working with downstream stakeholders to collaboratively design evaluation datasets and metrics [80], and designing tools that allow users to specify their own test datasets and performance metrics [18,27,28,55,81]. This "participatory turn" [26] in the design of performance evaluations highlights the importance and strength of centering the experiential and domain expertise of stakeholders in downstream deployment contexts.…”
Section: Designing Performance Evaluations (mentioning, confidence: 99%)
“…Further into the development process, brokers were able to open ML systems to feedback from participants during some stages more than others, e.g., during data collection and labelling. While data collection and labelling may be an opportunity for participants to learn about ML and have an impact on the shape of systems [60,72], the power to decide what categories exist in the world of ML and the correct categories for labelling rested with brokers [13]. Additionally, as noted by several brokers, the bounds of engagement with participants during data collection and labelling were constrained by the instrumental needs of the broader ML development project.…”
Section: " [P8]mentioning
confidence: 99%
“…This nascent field builds on a long history of participatory approaches to computing research and development and has emerged in response to examples of sub-par performance of ML systems for marginalised groups. Participatory approaches have been enacted across each stage of ML design and development, from problem formulation to model evaluation, and include collaborative approaches to construct datasets [62,73], design and validate ML algorithms [57,72], and guide advocacy for algorithmic accountability [47,63]. At the same time, several authors have raised concerns about "participation-washing" [70], cooptation of participatory work [7], and the limited evidence across Participatory ML projects of equitable partnerships with participants [24,26,33].…”
Section: Introduction (mentioning, confidence: 99%)
“…We decided against a field experiment given the intrusiveness into work it might require, and recognized that only a more open format could elicit the set of stakeholders, resources and tools workers might use to construct their imagined data institutions. Other methods of participatory design for resistance against algorithmic systems have been put forward [36,53]. Rather than adopt methods which would aim to audit or debias algorithms, a bottom-up approach addresses the collection, governance and support of data flows with the aim of challenging social relations of work.…”
Section: Developing Participatory Design Exercises (mentioning, confidence: 99%)