The development of medical imaging AI systems for evaluating COVID-19 patients has demonstrated potential for improving clinical decision-making and assessing patient outcomes during the COVID-19 pandemic. These systems have been applied to many medical imaging tasks, including disease diagnosis and patient prognosis, and have augmented other clinical measurements to better inform treatment decisions. Because these systems can contribute to life-or-death decisions, clinical implementation depends on user trust in the AI output. This has led many developers to adopt explainability techniques that help a user understand when an AI algorithm is likely to succeed and which cases may be problematic for automatic assessment, thus increasing the potential for rapid clinical translation. However, the application of AI to COVID-19 has recently been marred by controversy. This review discusses several aspects of explainable and interpretable AI as they pertain to the evaluation of COVID-19 disease, and how such methods can restore trust in AI applications to this disease. This includes the identification of common tasks relevant to explainable medical imaging AI, an overview of several modern approaches for producing explainable output as appropriate for a given imaging scenario, a discussion of how to evaluate explainable AI, and recommendations for best practices in explainable/interpretable AI implementation. This review will allow developers of AI systems for COVID-19 to quickly grasp the basics of several explainable AI techniques and assist them in selecting an approach that is both appropriate and effective for a given scenario.
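The saliency-style explanation methods such reviews survey can be made concrete with a small example. The following is a minimal Grad-CAM sketch in PyTorch, one widely used technique of this kind; the ResNet-18 backbone, the chosen layer, and the random input tensor are illustrative stand-ins, not the article's own implementation.

```python
# Minimal Grad-CAM sketch (illustrative assumptions: model, layer, and
# input are placeholders, not the reviewed article's implementation).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)  # stand-in for a COVID-19 CXR classifier
model.eval()

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

# Hook the last convolutional block to capture its activations and gradients.
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)   # placeholder chest X-ray tensor
score = model(x)[0].max()         # score of the top predicted class
model.zero_grad()
score.backward()

# Grad-CAM: weight each activation channel by its mean gradient, ReLU,
# then upsample the coarse map to the input resolution and normalize.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```

The resulting map can be overlaid on the input image so a reader can judge whether the classifier attends to lung regions rather than, say, embedded annotations or markers, which is exactly the kind of trust check the review emphasizes.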
The coronavirus disease 2019 (COVID-19) pandemic has wreaked havoc across the world. It also created a need for the urgent development of efficacious predictive diagnostics, specifically artificial intelligence (AI) methods applied to medical imaging. This led to a convergence of experts from multiple disciplines, including clinicians, medical physicists, imaging scientists, computer scientists, and informatics experts, bringing the best of these fields to bear on the challenges of the COVID-19 pandemic. However, such a convergence over a very brief period of time has had unintended consequences and created its own challenges. As part of the Medical Imaging and Data Resource Center (MIDRC) initiative, we discuss the lessons learned from career transitions across the three involved disciplines (radiology, medical imaging physics, and computer science) and draw recommendations from these experiences by analyzing the challenges associated with each of the three transition types: (1) AI of non-imaging data to AI of medical imaging data, (2) medical imaging clinician to AI of medical imaging, and (3) AI of medical imaging to AI of COVID-19 imaging. These career transitions, and the diffusion of knowledge among the disciplines, could be accomplished more effectively by recognizing their associated intricacies. The lessons learned in transitioning to AI in the medical imaging of COVID-19 can inform and enhance future AI applications, making the whole of the transitions more than the sum of each discipline, whether for confronting an emergency like the COVID-19 pandemic or for solving emerging problems in biomedicine.
Objective: Developing machine learning models for clinical applications from scratch can be a cumbersome task requiring varying levels of expertise. Even seasoned developers and researchers often face incompatible frameworks and data preparation issues. This is further complicated in the context of diagnostic radiology and oncology applications, given the heterogeneous nature of the input data and the specialized task requirements. Our goal is to provide clinicians, researchers, and early AI developers with a modular, flexible, and user-friendly software tool that can effectively meet their needs to explore, train, and test AI algorithms while allowing users to interpret their model results. This latter step involves the incorporation of interpretability and explainability methods for visualizing performance as well as interpreting predictions across the different neural network layers of a deep learning algorithm. Approach: To demonstrate our proposed tool, we developed the CRP10 AI Application Interface (CRP10AII) as part of the MIDRC consortium. CRP10AII is based on the Django web framework in Python. In combination with a data commons platform such as Gen3, CRP10AII provides a comprehensive yet easy-to-use machine/deep learning analytics tool. The tool allows users to test and visualize models and to interpret how and why a deep learning model performs as it does. The major highlight of CRP10AII is its capability for visualization and interpretation of otherwise black-box AI algorithms. Results: CRP10AII provides many convenient features for model building and evaluation, including: (1) querying and acquiring data for a specific application (e.g., classification, segmentation) from the data commons platform (Gen3 here); (2) training AI models from scratch or using pre-trained models (e.g., VGGNet, AlexNet, BERT) for transfer learning, then testing model predictions, assessing performance, and evaluating receiver operating characteristic curves; (3) interpreting AI model predictions using methods such as Shapley (SHAP) and LIME values; and (4) visualizing model learning through heatmaps and activation maps of individual layers of the neural network. Significance: Inexperienced users can swiftly pre-process data, build and train AI models on their own use cases, and further visualize and explore these models as part of this pipeline, all in an end-to-end manner. CRP10AII will be provided as an open-source tool, and we expect to continue developing it based on users' feedback.
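As a concrete illustration of the interpretation step in item (3) above, the snippet below is a minimal, hypothetical LIME sketch using the open-source lime package; the classifier function and image are placeholders for illustration, not CRP10AII internals.

```python
# Hypothetical LIME image-explanation sketch (predict_fn and the image
# are illustrative placeholders, not CRP10AII code).
import numpy as np
from lime import lime_image

def predict_fn(images: np.ndarray) -> np.ndarray:
    """Placeholder classifier: returns per-class probabilities for a
    batch of RGB images of shape (N, H, W, 3)."""
    rng = np.random.default_rng(0)
    p = rng.random((images.shape[0], 2))
    return p / p.sum(axis=1, keepdims=True)

image = np.random.rand(224, 224, 3)  # stand-in for a preprocessed scan slice

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, predict_fn, top_labels=1, hide_color=0, num_samples=500
)

# Overlay the superpixels most responsible for the top predicted class.
label = explanation.top_labels[0]
overlay, mask = explanation.get_image_and_mask(
    label, positive_only=True, num_features=5, hide_rest=False
)
```

A SHAP-based explanation would follow a similar pattern, substituting an explainer such as shap.GradientExplainer for the LIME explainer and rendering the attributions as a heatmap.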
Building machine learning models from scratch for clinical applications can be a challenging undertaking requiring varied levels of expertise. Given the heterogeneous nature of input data and specific task requirements, even seasoned developers and researchers may occasionally run into issues with incompatible frameworks. This is further complicated in the context of diagnostic radiology. Therefore, we developed the CRP10 AI Application Interface (CRP10AII) as a component of the Medical Imaging and Data Resource Center (MIDRC) to deliver a modular and user-friendly software solution that can efficiently address the demands of physicians and early AI developers to explore, train, and test AI algorithms [37]. The CRP10AII tool is a Python-based web framework connected to the data commons (Gen3) that offers the ability to develop AI models from scratch or employ pre-trained models, while allowing for visualization and interpretation of the AI model's predictions. Here, we evaluate the capabilities of CRP10AII and its related human-API interaction factors. This evaluation investigates various aspects of the API, including: (i) robustness and ease of use; (ii) how its visualizations help in decision-making tasks; and (iii) further improvements needed for early AI researchers with different levels of medical imaging and AI expertise. Users initially experienced trouble testing the API; however, those problems have since been fixed as a result of additional explanations. The findings of the user evaluation demonstrate that, although the different options in the API are generally easy to understand and use and are helpful in decision-making tasks for users with and without experience in medical imaging and AI, there are differences in how the various options are understood and used. We also collected additional input, such as requests to expand the information fields and to include more interactive components, to make the API more generalizable and customizable.
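For readers unfamiliar with how a tool of this kind might talk to a Gen3 data commons, here is a rough, hypothetical sketch using the Gen3 Python SDK; the endpoint, credentials file, and GraphQL field names are illustrative assumptions rather than CRP10AII's actual code, and the fields available in practice are defined by the commons' data dictionary.

```python
# Hypothetical Gen3 data-commons access sketch (endpoint, credentials
# file, and field names are assumptions, not CRP10AII internals).
from gen3.auth import Gen3Auth
from gen3.submission import Gen3Submission

ENDPOINT = "https://data.midrc.org"  # example commons endpoint (assumption)
auth = Gen3Auth(ENDPOINT, refresh_file="credentials.json")
sub = Gen3Submission(ENDPOINT, auth)

# GraphQL query for a handful of imaging-study records to feed a
# downstream training pipeline; node and field names are illustrative.
query_txt = """
{
  imaging_study(first: 25) {
    submitter_id
  }
}
"""
response = sub.query(query_txt)
for record in response["data"]["imaging_study"]:
    print(record["submitter_id"])
```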