2021
DOI: 10.52214/cjrl.v11i4.8741

Calculating the Souls of Black Folk

Abstract: In 1995, there were nearly 50,000 children removed from their families into the New York City Administration for Children’s Services’ (ACS) foster care system. The NYC ACS’ forcible transfer of children from a protected group into another group may amount to genocide under Article 2(e) of the Genocide Convention if formal review can demonstrate an “intent to destroy” the group “as such” or at least “in part.” Rather than pursuing a citizen’s tribunal or truth and reconciliation committee to assess the histor…

Cited by 17 publications (5 citation statements)
References 5 publications
“…The domains where agencies are attempting to apply AI are often highly socially complex and high-stakes, including tasks like screening child maltreatment reports [58], allocating housing to unhoused people [35], predicting criminal activity [32], or prioritizing medical care for patients [45]. In these domains, where some public sector agencies have a fraught history of interactions with marginalized communities [4,54], it has proven to be particularly challenging to design AI systems that avoid further perpetuating social biases [10], obfuscating how decisions are made [31], or relying on inappropriate quantitative notions of what it means to make accurate decisions [13]. Public sector agencies are increasingly under fire for implementing AI tools that fail to bring value to the communities they serve, contributing to a common trend: AI tools are implemented, then discarded after failing in practice [21,56,67,68].…”
Section: Background, 2.1 Public Sector AI and Overcoming AI Failures (mentioning)
Confidence: 99%
“…The deliberation questions focus on promoting conversations that bridge reflection and understanding of the goals of the proposed AI tool, as well as how these goals will be operationalized into measurable outcomes. The 52 questions within the Goals and Intended Use section are divided into nine subsections: (1) Who the tool impacts and serves, (2) Intended use, (3) How agency-external stakeholders should be involved in determining goals, (4) Differences in goals between the agency and impacted community members, (5) Envisioned harms and benefits, (6) Impacts of outcome choice, (7) Measuring improvement based on outcomes, (8) Centering community needs, and (9) Worker perceptions. For the purpose of this paper, we sample one question from each topical subsection.…”
Section: Goals and Intended Use (mentioning)
Confidence: 99%
“…For example, while public health projects are often represented as governmental care for the population, sharing COVID-19 contact tracing data with US law enforcement simultaneously conscripts police into public health infrastructure and threatens community trust in public health institutions (Molldrem, Hussain, and McClelland 2021). This demonstrates how the innate possibility for violence contained in care (see Abdurahman 2021; Murphy 2015; Razack 2013) can be actualized through the substitution or expansion of caring actors even as data collection and use policies remain otherwise the same. Examples proliferate: in Israel, phone-based contact tracing technology has been used to accuse Palestinians of participating in violent acts based on their location; and human-rights activists have reported COVID-19 technologies and policies provide cover for controlling the movements of activists in India and China (Burke et al. 2022).…”
Section: Conclusion: Caring With, Through, and About Health Surveillance (mentioning)
Confidence: 99%
“…Care scholarship also directed our attention to the way big data practices of health surveillance promise care, and how those concerned with technology ethics, design justice, and liberatory projects ought to orient toward those processes (Taylor 2020; Müller and Kenney 2014; Puig de la Bellacasa 2011). Finally, we recognize that care is often capable of reproducing violence, and an attention to care’s complexity demands that we resist attempts to understand these cruel tendencies as something “other than” or outside of care (Abdurahman 2021; Murphy 2015; Razack 2013). This account tracks care from the standpoint of marginalized campus informants, identifying tensions between their practices of care and those enforced and assumed by institutional surveillance.…”
Section: Introduction (mentioning)
Confidence: 99%
“…Additionally, practitioners in foster care contend with heavy caseloads and limited resources due to austerity measures and the financialization of social services (Abramovitz & Zelnick, 2015). Given these challenges, there is increasing pressure to leverage technological solutions and evidence-based practices (Abdurahman, 2021). However, early attempts to apply machine learning risk assessment tools to child welfare screening have encountered substantial roadblocks (Chouldechova et al., 2018).…”
Section: Introduction (mentioning)
Confidence: 99%