2022
DOI: 10.1080/10447318.2022.2095093

What Are the Users’ Needs? Design of a User-Centered Explainable Artificial Intelligence Diagnostic System

Cited by 18 publications (29 citation statements)
References 74 publications
“…Publication types included 23 (72%) journal papers (Tables 2 and 3) [10, 12, 15, 17, 34, 37, 39, 41, 43, 45-49, 52-62] and 9 (28%) conference papers (Table 4) [13, 35, 36, 38, 40, 42, 44, 50, 51]. Study designs included quantitative research (22/32, 69%) [12, 13, 15, 34, 37, 39, 40, 42-44, 47-52, 54, 55, 57-59, 61], qualitative research (2/32, 6%) [35, 60], and mixed methods studies (4/32, 12%) [38, 41, 45, 46], in addition to systematic reviews (4/32, 12%) [17, 36, 53, 56]. Most studies chose general practice (8/32, 25%) [34, 37, 40, 41, 46, 49, 54, 55] as the target medical field.…”
Section: Results (mentioning)
confidence: 99%
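
The proportions quoted in this excerpt can be cross-checked with a few lines of code. Below is a minimal sketch in Python, assuming the review's reported denominator of 32 included studies; the counts are taken from the excerpt itself, and the variable names are illustrative.

# Cross-check the percentages reported in the quoted excerpt (32 included studies).
total = 32
counts = {
    "journal papers": 23,
    "conference papers": 9,
    "quantitative": 22,
    "qualitative": 2,
    "mixed methods": 4,
    "systematic reviews": 4,
    "general practice": 8,
}
for label, n in counts.items():
    print(f"{label}: {n}/{total} = {100 * n / total:.0f}%")
# Prints 23/32 -> 72%, 22/32 -> 69%, 2/32 -> 6%, 4/32 -> 12%, matching the excerpt.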
“…Because underlying methods are generally not transparent, users of AI may overtrust or undertrust the system, leading to errors (Okamura & Yamada, 2020). Explainability methods can be complex for end users and have a similar result of overtrust (Ghassemi et al, 2021; He et al, 2022). Learning systems can be dynamic, such that performance may not be consistent.…”
Section: Unique Features of AI Technology That May Impact Well-Being (mentioning)
confidence: 99%
“…In addition to the nine categories of Liao et al [20], the question category transparency was added on a theoretical basis [17]. He et al [12] developed a user needs library based on the XAIQB, which they adapted to a medical context based on their literature review. They then designed an XAI prototype for the medical domain and analyzed the needs and preferences of consumer users with respect to explanations [12].…”
Section: Explainable AI Question Bank (mentioning)
confidence: 99%
“…He et al [12] developed a user needs library based on the XAIQB, which they adapted to a medical context based on their literature review. They then designed an XAI prototype for the medical domain and analyzed the needs and preferences of consumer users with respect to explanations [12]. However, to the best of our knowledge, there is no study that empirically evaluated the XAIQB and its applicability for end-users directly.…”
Section: Explainable AI Question Bank (mentioning)
confidence: 99%
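
For context on the "user needs library based on the XAIQB" mentioned in these excerpts: such a library is, in essence, a mapping from XAI Question Bank question categories (e.g., Why, What if, Performance, Data) to concrete user questions. The sketch below is a hypothetical Python illustration; the category names are standard XAIQB categories, but the example questions and the structure are assumptions, not He et al.'s [12] actual library.

# Hypothetical sketch of a user needs library keyed by XAIQB question categories.
# Example questions are illustrative only and do not reproduce He et al.'s library.
user_needs_library = {
    "Why": ["Which findings drove this diagnosis?"],
    "What if": ["Would the diagnosis change without this symptom?"],
    "Performance": ["How accurate is the system for cases like mine?"],
    "Data": ["What patient data was the model trained on?"],
}

def needs_for(category):
    """Return the recorded user questions for a given XAIQB category."""
    return user_needs_library.get(category, [])

print(needs_for("Why"))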