2023
DOI: 10.1017/dap.2023.2
Explainable machine learning for public policy: Use cases, gaps, and research directions

Abstract: Explainability is highly desired in machine learning (ML) systems supporting high-stakes policy decisions in areas such as health, criminal justice, education, and employment. While the field of explainable ML has expanded in recent years, much of this work has not taken real-world needs into account. A majority of proposed methods are designed with generic explainability goals without well-defined use cases or intended end users and evaluated on simplified tasks, benchmark problems/datasets, or with proxy use…


Cited by 25 publications (6 citation statements)
References 75 publications
“…Future work should also verify the resulting ML algorithm and its explainability methods with actual physicians and clinicians as a key component of the research. Although a rigorous validation method was proposed by Amarasinghe et al [78], few studies currently make full use of this method [78].…”
Section: Discussion
confidence: 99%
“…Amarasinghe et al [78] proposed a framework to quantify the effectiveness of explainability methods for clinicians. This method involves a series of surveys on how clinicians' opinions change with and without explainability.…”
Section: Discussion
confidence: 99%
“…Even the very concept of what constitutes a good explanation is still under debate. Recent works on explainable ML methods for policy development (Amarasinghe et al, 2023) underline the importance of contextualizing ML explanations and highlight the limitations of existing XAI techniques. Furthermore, they highlight the importance of stakeholder engagement and the need to prioritize policymakers' requirements rather than relying on technology experts to produce explanations for ML-based policies (Bell et al, 2023).…”
Section: The Challenges Of ML-Based Policy Development
confidence: 99%
“…The existing literature and taxonomies on ADS transparency have identified a number of important stakeholders, including technologists, policymakers, auditors, regulators, humans-in-the-loop, and the individuals affected by the output of the ADS (Meyers et al, 2007; Amarasinghe et al, 2020; Meske et al, 2020). While there is some overlap in how these stakeholders may think about transparency, in general there is no single approach to designing transparent systems for these disparate stakeholder groups, and each of them has its own goals and purposes for wanting to understand an ADS (Sokol and Flach, 2020).…”
Section: Stakeholders
confidence: 99%