The lack of interpretability in artificial intelligence models (e.g., deep learning, machine learning, and rule-based systems) is an obstacle to their widespread adoption in the healthcare domain. The absence of understandability and transparency frequently leads to (i) inadequate accountability and (ii) a consequent reduction in the quality of the models' predictive results. Conversely, interpretable predictions help clinicians understand and trust these complex models. Data protection regulations worldwide likewise emphasize the plausibility and verifiability of AI models' predictions. To address this challenge, we designed an interpretability-based model whose algorithms approximate human-like reasoning through statistical analysis of the datasets: the model calculates the relative weights of the feature variables extracted from medical images and patient symptoms. These relative weights quantify the importance of each variable in predictive decision-making. In addition, the relative weights are used to estimate the positive and negative probabilities of having the disease, yielding high-fidelity explanations. The primary goal of our model is thus to give insight into the prediction process and to explain how each prediction was reached, while preserving predictive accuracy. Two experiments on COVID-19 datasets demonstrate the effectiveness and interpretability of the new model.
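The abstract does not specify the exact statistic behind the relative weights, so the following Python sketch illustrates one plausible reading: per-feature weights derived from a simple correlation statistic, normalized to sum to one, then combined into complementary positive and negative disease probabilities. The function names (relative_weights, disease_probabilities), the correlation measure, and the threshold rule are all hypothetical illustrations, not the authors' method.

```python
import numpy as np

def relative_weights(X, y):
    """Relative weight of each feature: its absolute correlation with the
    binary label, normalized so the weights sum to 1.
    (Illustrative choice; the paper's exact statistic is not given here.)"""
    w = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    return w / w.sum()

def disease_probabilities(x, weights, thresholds):
    """Weighted vote over the features of one patient: each feature whose
    value meets its threshold contributes its weight to the positive
    probability; the rest goes to the negative probability, so the two
    probabilities sum to 1 by construction."""
    positive = sum(w for xi, w, t in zip(x, weights, thresholds) if xi >= t)
    return positive, 1.0 - positive

# Toy usage with synthetic data standing in for image features and symptoms.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))              # e.g., image-derived features + symptom scores
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

w = relative_weights(X, y)
thresholds = X[y == 1].mean(axis=0)        # illustrative per-feature cutoffs
p_pos, p_neg = disease_probabilities(X[0], w, thresholds)
print(f"weights={np.round(w, 3)}, P(disease)={p_pos:.2f}, P(no disease)={p_neg:.2f}")
```

Under this reading, the weights double as both the explanation (which variables mattered) and the mechanism that produces the positive/negative probabilities, which is what gives the explanations their claimed fidelity to the prediction.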
The COVID-19 pandemic has spread globally, affecting the daily lives of people as well as the economies of most countries. Early and accurate detection of the COVID-19 coronavirus is crucial to preventing and controlling its outbreak through timely medical treatment and quarantine. The massive daily increase in COVID-19 cases worldwide, combined with the limitations of available diagnostic techniques, has made it difficult to confirm the presence of the disease. This raises the need for alternatives that leverage artificial intelligence (AI) models, which have proven particularly successful owing to their spectacular innovations in image and video processing and their highly accurate predictive models. This survey contributes by studying the state of the art of AI models deployed against COVID-19, highlighting the significant limitations that present noteworthy barriers to fighting a pandemic, and recommending directions for future research on the pandemic.