2023
DOI: 10.1109/tts.2023.3234203
Challenges of Deep Learning in Medical Image Analysis—Improving Explainability and Trust

Abstract: Deep learning has revolutionized the detection of diseases and is helping the healthcare sector break barriers in terms of accuracy and robustness to achieve efficient and robust computer-aided diagnostic systems. The application of deep learning techniques empowers automated AI-based utilities requiring minimal human supervision to perform any task related to medical diagnosis of fractures, tumors, and internal hemorrhage; preoperative planning; intra-operative guidance, etc. But deep learning faces some majo…

Cited by 108 publications (23 citation statements)
References 40 publications
“…The selected deep learning schema for detecting key features of the flow waveform is well suited to the aim of the study, since the convolutional kernels are specifically designed to identify the types of patterns that separate the two classes. The shallow architecture and the use of a weighted loss mitigate, respectively, the relatively small and the unbalanced dataset, both of which are common issues when applying deep learning algorithms to medical data [27]. Additionally, the computation method used to calculate the PTP, on which the model was trained, was compared to experts’ manual analysis and found to be accurate.…”
Section: Discussion
confidence: 99%
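The weighted-loss idea in the snippet above — giving the rarer class a larger weight so an imbalanced dataset does not dominate training — can be sketched in plain Python. The function names and toy data here are illustrative assumptions, not taken from the cited work:

```python
import math
from collections import Counter

def class_weights(labels):
    """Inverse-frequency weights: rarer classes receive larger weights."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * counts[c]) for c in counts}

def weighted_cross_entropy(probs, labels, weights):
    """Mean class-weighted negative log-likelihood over the dataset."""
    total = sum(weights[y] * -math.log(p[y]) for p, y in zip(probs, labels))
    return total / len(labels)

# Toy imbalanced binary dataset: 8 negatives, 2 positives.
labels = [0] * 8 + [1] * 2
w = class_weights(labels)  # {0: 0.625, 1: 2.5}

# Predicted class probabilities for each example (hypothetical model output).
probs = [[0.9, 0.1]] * 8 + [[0.2, 0.8]] * 2
loss = weighted_cross_entropy(probs, labels, w)
```

With these weights, a mistake on the minority class costs four times as much as one on the majority class, which is the mitigation the snippet describes.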
“…Although the sophisticated mathematics underlying deep learning training algorithms is conceptually understandable, the trained architectures remain largely a "black box." In the case of the breast density CycleGAN, the learned mapping must therefore be understood via post hoc explainability [42]. This post hoc explanation will be performed in future work.…”
Section: Discussion
confidence: 99%
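One common model-agnostic form of the post hoc explainability mentioned above is occlusion sensitivity: mask each region of the input and record how much the model's score drops. A minimal sketch, assuming a hypothetical stand-in scoring function rather than the CycleGAN from the citing paper:

```python
def occlusion_map(score_fn, image, patch=2, baseline=0.0):
    """Post hoc explainability by occlusion: the drop in score when a
    patch is masked out indicates that region's importance."""
    h, w = len(image), len(image[0])
    base = score_fn(image)
    heat = [[0.0] * w for _ in range(h)]
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            # Copy the image and zero out one patch.
            masked = [row[:] for row in image]
            for di in range(i, min(i + patch, h)):
                for dj in range(j, min(j + patch, w)):
                    masked[di][dj] = baseline
            drop = base - score_fn(masked)
            for di in range(i, min(i + patch, h)):
                for dj in range(j, min(j + patch, w)):
                    heat[di][dj] = drop
    return heat

# Stand-in "model": responds only to the top-left 2x2 quadrant.
score = lambda img: sum(img[i][j] for i in range(2) for j in range(2))
img = [[1.0] * 4 for _ in range(4)]
heat = occlusion_map(score, img)  # large drop only in the top-left patch
```

The resulting heat map highlights exactly the quadrant the stand-in model depends on; applied to a real network, the same procedure gives a coarse picture of which image regions drive a prediction.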
“…This interpretability enables doctors to make changes or enhancements to the model, ensuring its accuracy. Interpretable models also help address the ethical issues of AI use in healthcare [145]. In the context of prenatal abnormality detection, it is critical to verify that AI system decisions are fair, unbiased, and in accordance with accepted medical norms.…”
Section: Interpretable Models for Clinical Decision Support
confidence: 99%