2022
DOI: 10.1016/j.ijhcs.2022.102839

Who needs explanation and when? Juggling explainable AI and user epistemic uncertainty

Cited by 37 publications (9 citation statements)
References 77 publications
“…The results of the interviews indicated that the need for explanations was mainly in cases where AI outcomes were new or anomalous to respondents. This finding is also in line with previous studies' findings outside AR [67,102].…”
Section: Results (supporting)
Confidence: 94%
“…(3) Reliability. Ensuring the reliability of AI outcomes is essential for non-trivial decision-making processes so that users can rely on a trustworthy system [102,124,178], e.g., daily activity recommendations for personal health management or automatic emergency service contacting in safety-threatening incidents. (4) Informativeness.…”
Section: Platform-Agnostic Factors (mentioning)
Confidence: 99%
“…To delineate the association between uncertainty and trust, it is important to discern two distinct types of uncertainty: epistemic uncertainty and ontic uncertainty. Epistemic uncertainty is characterized by an observer’s lack of knowledge about the state of a system at a particular time, which can be resolved through acquiring additional information or knowledge (Busemeyer et al., 2020; Jiang et al., 2022). Ontic uncertainty describes a person’s internal uncertainty regarding a specific response, such as a decision among different options, and may only be resolved through some sort of interaction(s) (Busemeyer et al., 2020; Busemeyer & Bruza, 2014; Kvam et al., 2021).…”
Section: Trust in Human-AI Decision-Making (mentioning)
Confidence: 99%