2021 | Preprint
DOI: 10.31234/osf.io/9pqez

The Stakeholder Playbook for Explaining AI Systems

Abstract: The purpose of the Stakeholder Playbook is to enable system developers to take into account the different ways in which stakeholders need to "look inside" AI/XAI systems. Recent work on Explainable AI has mapped stakeholder categories onto explanation requirements. While most of these mappings seem reasonable, they have been largely speculative. We investigated these matters empirically. We conducted interviews with senior and mid-career professionals possessing post-graduate degrees who had experience…

Cited by 7 publications (5 citation statements)
References 12 publications
“…For example, stakeholders sometimes express a need to be able to trust vendors; they express a need to get explanations from trusted systems engineers. Users need to understand whether they can trust the data that were used to train an AI system (see Hoffman et al, 2022). The measurement scale presented in this article is focused specifically on the end-user's trust in machine-generated explanations.…”
Section: Measuring Trust in the XAI Context
Confidence: 99%
“…In an interview with stakeholders, one of them said that if he could not achieve an understanding of how an AI system works within 10 trials or attempts, then he simply would not use it (Hoffman et al. 2021). Another said that unless a new tool enabled successful performance on 85% of the key tasks on first use, then the tool would not be desired.…”
Section: Analyzing the Results
Confidence: 99%
“…Engaging these stakeholders ensures that ethical guidelines are consistent with existing regulations and helps in shaping future policies on AI ethics (Delgado et al., 2021; Hoffman et al., 2021; Miller, 2022).…”
Section: Collaboration and Stakeholder Involvement
Confidence: 99%