CHI Conference on Human Factors in Computing Systems 2022
DOI: 10.1145/3491102.3517439
Improving Human-AI Partnerships in Child Welfare: Understanding Worker Practices, Challenges, and Desires for Algorithmic Decision Support

Abstract: AI-based decision support tools (ADS) are increasingly used to augment human decision-making in high-stakes, social contexts. As public sector agencies begin to adopt ADS, it is critical that we understand workers' experiences with these systems in practice. In this paper, we present findings from a series of interviews and contextual inquiries at a child welfare agency, to understand how they currently make AI-assisted child maltreatment screening decisions. Overall, we observe how workers' reliance upon the …

Cited by 74 publications (77 citation statements)
References 50 publications (104 reference statements)
“…Against predictive algorithms in CPS. Our participants gave more novel suggestions and critical feedback than in prior participatory work with impacted communities and workers in CPS [20,26,27,67,68,115]. For example, Brown et al. [20] suggest that their participants' "general distrust in the existing system" (which they somewhat vaguely describe as "system-level concerns") led to "low comfort in algorithmic decision-making," and suggested these problems could be improved through "greater transparency and improved communication strategies."…”
Section: Discussion
confidence: 99%
“…While more common across HCI and CSCW, less work in participatory ML empowers stakeholders to decide on the "scope and purpose for AI, including whether it should be built or not" [36]. Specifically around the design of algorithms in child welfare, prior participatory work has either collaborated with government agencies or solely engaged with government workers in their studies [20,26,67,68,115]. Most similar to our work, Brown et al. [20] partnered with a CPS agency to aid the development of a PRM by conducting participatory design workshops where they asked workers and community stakeholders about scenarios related to specific design choices.…”
Section: Participatory Algorithm Design
confidence: 99%