2022
DOI: 10.1109/tetc.2022.3171314

The Role of Explainability in Assuring Safety of Machine Learning in Healthcare

Abstract: Established approaches to assuring safety-critical systems and software are difficult to apply to systems employing ML where there is no clear, pre-defined specification against which to assess validity. This problem is exacerbated by the "opaque" nature of ML where the learnt model is not amenable to human scrutiny. Explainable AI (XAI) methods have been proposed to tackle this issue by producing human-interpretable representations of ML models which can help users to gain confidence and build trust in the ML…
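The abstract describes XAI methods as producing human-interpretable representations of a learnt model. As a minimal sketch of one such post-hoc, model-agnostic approach (permutation feature importance), the following example is illustrative only; the classifier, data, and feature names are hypothetical and not taken from the paper.

```python
# Minimal sketch of a post-hoc, model-agnostic explanation in the spirit of the
# XAI methods the abstract refers to. The model, data, and feature names are
# hypothetical placeholders, not taken from the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                      # synthetic "patient" features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)      # synthetic outcome label
feature_names = ["age", "blood_pressure", "hba1c", "bmi"]  # illustrative only

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance gives a human-interpretable ranking of which inputs
# the learnt model actually relies on, without inspecting its internals.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda p: -p[1]):
    print(f"{name}: {score:.3f}")
```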

Cited by 37 publications (17 citation statements)
References: 42 publications
“…Although AI models have achieved human-like performance, their use is still limited, partly because they are seen as a black box [23,24]. As presented by Jia et al. [25], explainability is an emerging issue, particularly in ML-based healthcare systems. The problem with the use of AI-based tools in medicine continues to be the lack of confidence of medical professionals in such solutions and the perception that they lack the 'intuition' that experienced professionals possess [26,27].…”
Section: Discussion
confidence: 99%
“…Another operational mitigation is to show the clinician similar patients from the TD to the one which the predictor has been applied to (i.e., prototypical examples as described in [10]). This would allow the clinician to review similar cases and their progression, and it provides context for a particular prediction.…”
Section: Discussion
confidence: 99%
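The statement above describes surfacing prototypical examples, i.e. the most similar training cases, alongside a prediction. A minimal sketch of that retrieval step using nearest-neighbour search follows; the data and variable names are hypothetical, not from the cited paper.

```python
# Sketch of the "show similar training cases" mitigation described above:
# retrieve the nearest patients in the training data to the case being
# predicted, so a clinician can review them alongside the model's output.
# The data and variable names are illustrative, not from the cited paper.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
X_train = rng.normal(size=(1000, 6))   # training-data feature matrix (hypothetical)
x_query = rng.normal(size=(1, 6))      # the patient the predictor was applied to

nn = NearestNeighbors(n_neighbors=5).fit(X_train)
distances, indices = nn.kneighbors(x_query)

# indices[0] points at the five most similar training cases; in practice these
# would be shown with their recorded outcomes and progression for context.
print("Nearest training cases:", indices[0], "distances:", np.round(distances[0], 2))
```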
“…This can lower both the trust in and the safety of these systems [19][20][21]. Another crucial issue to consider is the data. As AI-based systems are trained on existing data, the model can only be as good as the data used for distilling the information into the system.…”
Section: Network Examples Of DL Are Convolution Neural
confidence: 99%