2020
DOI: 10.48550/arxiv.2005.04176
Preprint

In Pursuit of Interpretable, Fair and Accurate Machine Learning for Criminal Recidivism Prediction

Abstract: Our goal is to study the predictive performance, interpretability, and fairness of machine learning models for pretrial recidivism prediction. Machine learning methods are known for their ability to automatically generate high-performance models (that sometimes even surpass human performance) from data alone. However, many of the most common machine learning approaches produce "black-box" models: models that perform well but are too complicated for humans to understand. "Interpretable" machine learning techniques…
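To make the black-box versus interpretable contrast concrete, here is a minimal sketch (not from the paper) that fits both kinds of model on synthetic, recidivism-like tabular data. The feature names, the data-generating process, and the choice of a random forest as the black box and a depth-3 decision tree as the interpretable model are all illustrative assumptions; the paper's actual models and datasets differ.

```python
# Illustrative sketch only: synthetic data and hypothetical features,
# not the paper's models or datasets.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
# Hypothetical features: age, number of prior arrests, age at first arrest.
X = np.column_stack([
    rng.integers(18, 70, n),   # age
    rng.poisson(2.0, n),       # prior arrests
    rng.integers(14, 40, n),   # age at first arrest
])
# Synthetic label: risk rises with priors and falls with age (illustrative only).
logit = 0.6 * X[:, 1] - 0.05 * X[:, 0] + rng.normal(0, 1, n)
y = (logit > np.median(logit)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# "Black-box": an ensemble of 300 trees, accurate but hard to inspect.
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
# Interpretable: a depth-3 tree a human can read end to end.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

print("black-box AUC:    ", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))
print("interpretable AUC:", roc_auc_score(y_te, tree.predict_proba(X_te)[:, 1]))
# The entire interpretable model prints as a handful of if/else rules.
print(export_text(tree, feature_names=["age", "priors", "age_first_arrest"]))
```

The depth-3 tree can be printed in full and audited rule by rule, whereas the 300-tree forest, even when its accuracy is comparable, offers no equally direct reading; this is the tension between performance and interpretability that the abstract describes.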

Cited by 3 publications (3 citation statements)
References 36 publications

“…Over the past decade, the field of XAI has witnessed numerous studies being conducted to measure the efficacy of various explanation methods for increasing the transparency of ML systems across multiple application domains, such as healthcare [8,16,18,60], finance [9,13,24], and law enforcement [72,79,87]. Along with making black-box ML models more transparent, XAI methods have also aimed to make these systems more understandable and trustworthy [8,49,54].…”
Section: XAI Methods for ML Systems
Citation type: mentioning (confidence: 99%)
“…Post-processing solutions to standard explanation model training could also prove effective, similar to recent work in the space of improving worst-case generalization [76]. However, such solutions need to be appraised carefully to ensure that the resulting models are both fair and remain interpretable to users [110].…”
Section: Implications of Fidelity Gaps
Citation type: mentioning (confidence: 98%)
“…The utilisation of Artificial Intelligence (AI) systems has grown significantly in the past few years across diverse domains such as medical [13,14,37], finance [10,12,15], legal [44,51,54] and others [2,6,36]. Despite the success of AI systems across various applications, the "black-box" nature of AI models has raised several concerns related to lack of transparency [6,29,34] and appropriate trust [27,47], particularly when predicted outcomes are biased, unfair, incorrect or misguiding [11,22,33].…”
Section: Research Background and Motivation
Citation type: mentioning (confidence: 99%)