2016
DOI: 10.1111/jels.12098
Forecasting Domestic Violence: A Machine Learning Approach to Help Inform Arraignment Decisions

Abstract: Arguably the most important decision at an arraignment is whether to release an offender until the date of his or her next scheduled court appearance. Under the Bail Reform Act of 1984, threats to public safety can be a key factor in that decision. Implicitly, a forecast of “future dangerousness” is required. In this article, we consider in particular whether usefully accurate forecasts of domestic violence can be obtained. We apply machine learning to data on over 28,000 arraignment cases from a major metropo…

Cited by 107 publications (75 citation statements)
References 25 publications
“…Arrests for violent crimes are rare and difficult to predict. The results from the first column of Table are roughly consistent with past machine‐learning efforts to forecast crimes that are very troublesome but relatively uncommon (Berk; Berk et al.). In this application, considerable success forecasting the absence of an arrest for a violent crime is to be expected because the base rate for violent crime is low (i.e., 0.10).…”
Section: Accuracy and Fairness in the Empirical Results (supporting)
confidence: 80%
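The base-rate point in the statement above can be made concrete with a small sketch. This is illustrative only, not the paper's model: with a base rate of 0.10, a "null" forecaster that always predicts no violent-crime arrest is already correct about 90% of the time, which is why accuracy on the negative class alone is expected rather than impressive. The sample size of 28,000 mirrors the abstract; the simulation itself is an assumption for illustration.

```python
import random

random.seed(0)
BASE_RATE = 0.10  # share of cases with a violent-crime arrest (from the excerpt)

# Simulate 28,000 arraignment outcomes (1 = violent-crime arrest, 0 = none)
outcomes = [1 if random.random() < BASE_RATE else 0 for _ in range(28_000)]

# A trivial forecaster that always predicts "no arrest"
predictions = [0] * len(outcomes)

# Overall accuracy lands near 1 - BASE_RATE despite the model using no information
accuracy = sum(p == y for p, y in zip(predictions, outcomes)) / len(outcomes)
print(f"Null-forecast accuracy: {accuracy:.3f}")
```

This is why evaluations of such forecasts typically report class-specific error rates (or costs of false negatives versus false positives) rather than overall accuracy.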
“…This example is in contrast with the examples used by those proposing a principle of explicability for AI. ML used for medical diagnosis (de Bruijne 2016; Dhar and Ranganathan 2015; Erickson et al. 2017), judicial sentencing (Berk et al. 2016; Barry-Jester et al. 2015), and predictive policing (Ahmed 2018; Ensign et al. 2017; Joh 2017) are just a few of many real-world examples. Using the decisions of ML algorithms in these contexts without explanation is wrong, so the argument goes, unless that ML algorithm is explicable.…”
Section: Calls for a Principle of Explicability for AI (mentioning)
confidence: 99%
“…Not only are risk practices expanding, they are continuing to evolve. Some recent risk assessment instruments are beginning to incorporate machine learning (Berk, 2008; Berk, Sorenson, & Barnes, 2016), and there are discussions of incorporating dynamic data sets, where the instruments train on new incoming data in real time (Rothschild-Elyassi et al., 2019). Related to this, criminal justice institutions are increasingly working with computer scientists and software engineers trained in big data analytics to develop new ways of thinking about and assessing risk (Hannah-Moffat, 2018; see also Danaher, Hogan, & Noone, 2017).…”
Section: Discussion (mentioning)
confidence: 99%