2020
DOI: 10.1007/978-3-030-60117-1_33

Human-Centered Explainable AI: Towards a Reflective Sociotechnical Approach

Cited by 124 publications (51 citation statements)
References 24 publications
“…Hence there are rich opportunities for HCI researchers and design practitioners to contribute insights, solutions, and methods to make AI more explainable. A research community of human-centered XAI [33,35,99] has emerged, which brings in cognitive, sociotechnical, design perspectives, and more. We hope this chapter serves as a call to engagement in this interdisciplinary endeavor by presenting a selected overview of recent AI and HCI work on the topic of XAI.…”
Section: Introduction
confidence: 99%
“…XAI systems are required to fully understand the user, which means adapting to whoever receives the explanation [21,336]. This is crucial for determining the explanation requirements of a given problem and understanding the ‘why’ behind user actions [337]. Furthermore, such understanding is required to adapt to the sociotechnical environment, since the AI user will interact with other humans beyond the one-to-one human-computer interaction, and trust should therefore be transitive to them [334].…”
Section: XAI in ATM Synthesis
confidence: 99%
“…In healthcare, the use of standardized clinical guidelines (e.g., [31,38] for MDD treatment selection) means that aspects of these mental models are established, encodable, and may therefore be used in both the development of the machine learning models and the design of DST interfaces. Ehsan and Riedl have also suggested that allowing users to voice skepticism in AI models may afford new interactions that encourage users to consider the limitations of the technology [16]. In the context of healthcare decisions, allowing clinicians and patients to voice skepticism and highlight surprising DST outputs may allow the underlying models to adapt as medical guidelines continue to evolve.…”
Section: Adapting Decision Support for Contrasting Information
confidence: 99%