Artificial Intelligence in Medicine 2022
DOI: 10.1007/978-981-19-1223-8_11
Interpretable AI in Healthcare: Enhancing Fairness, Safety, and Trust

Cited by 12 publications (7 citation statements)
References 16 publications
“…Integrating an AI system into a health care system is a multistep process that requires careful planning and execution. This process includes setting up the necessary infrastructure and technology to support the AI system, connecting it to other existing health technologies, training health care professionals on how to use and maintain the AI system, regularly monitoring its performance and making adjustments as needed, and ensuring compliance with all relevant regulations and guidelines for the use of AI in health care [58][59][60][61][62]. However, the implementation of these processes is not always straightforward.…”
Section: Model Deployment, Operationalization, Monitoring and Maintenance
confidence: 99%
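The monitoring-and-adjustment step mentioned in the statement above can be illustrated with a small performance check that compares a deployed model's live metrics against its validation baseline. The Python sketch below is an illustrative assumption only; the function name, threshold, and toy data are hypothetical and are not taken from the cited works.

# Illustrative sketch (assumption): post-deployment performance monitoring
# for a clinical AI model. Names, thresholds, and data are hypothetical.
from sklearn.metrics import roc_auc_score

def check_model_performance(y_true, y_score, baseline_auc, tolerance=0.05):
    """Compare the current AUC on recent cases against the validation
    baseline and flag the model for review when it degrades by more
    than `tolerance` (the "adjustments as needed" step)."""
    current_auc = roc_auc_score(y_true, y_score)
    degraded = current_auc < baseline_auc - tolerance
    return current_auc, degraded

# Example with toy (hypothetical) labels and prediction scores:
current_auc, degraded = check_model_performance(
    y_true=[0, 1, 1, 0, 1, 0, 1, 1],
    y_score=[0.2, 0.8, 0.7, 0.4, 0.9, 0.3, 0.6, 0.55],
    baseline_auc=0.85,
)
print("ALERT: review model" if degraded else f"OK: AUC {current_auc:.2f}")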
“…As medical professionals increasingly rely on AI-powered systems to aid in diagnosis and treatment planning [18], the need for interpretability and transparency in AI models becomes paramount [19]. Deep learning models, including ViTs, often exhibit highly complex and intricate internal representations, making it challenging for experts to comprehend their decision-making process.…”
Section: Introduction
confidence: 99%
“…In general, studies using DL show excellent predictive performance, providing hope for successful translation into clinical practice 13 , 14 . However, prediction accuracy in DL comes with potential pitfalls which need to be overcome before wider adoption can be eventuated 15 .…”
Section: Introduction
confidence: 99%