2019
DOI: 10.1145/3358955.3363293
The Effects of Mixing Machine Learning and Human Judgment

Abstract: Based on the theoretical findings from the existing literature, some policymakers and software engineers contend that algorithmic risk assessments such as the COMPAS software can alleviate the incarceration epidemic and the occurrence of violent crimes by informing and improving decisions about policing, treatment, and sentencing. Considered in tandem, these findings indicate that collaboration between humans and machines does not necessarily lead to better outcomes, and human supervision does not sufficiently…

Cited by 4 publications (2 citation statements); references 16 publications.
“…The current interest in XAI is ostensibly and programmatically motivated by the need to make AI systems more transparent, understandable, and thus usable. However, in light of some empirically grounded findings, presented, among others, in the literature on naturalistic decision making [63,64], this interest appears to be more instrumental to the rising prevalence and diffusion of automated decision-making (ADM) systems, especially when their use is anticipated in contexts for which the main legislative frameworks (e.g., the EU GDPR) require these systems to also provide reasons for their output whenever the latter can have legal effects (see, e.g., the debate around ADM and human-in-the-loop decision making in the risk assessment domain [19,65,66]). This addresses a requirement for justification, rather than explanation, although these two concepts are often conflated (for a line of reasoning on the difference between explanation and justification, see [18]).…”
Section: Discussion (mentioning)
confidence: 99%
“…A rising number of studies are further attempting to understand how risk assessment algorithms affect different criminal justice outcomes such as pre-trial release, recidivism rates, etc. [50-52, 131]. Algorithmic crime mapping is the application of contemporary information processing technologies that merge geographic information system (GIS) data, digital maps, and crime data to gain deeper insights into the propagation of criminal activity [57].…”
Section: AI in Policing and Algorithmic Crime Mapping (mentioning)
confidence: 99%