“…Thus, there is a clear need for human readability and interpretability of deep networks, which requires the identified lesions to be interpreted and quantified. We therefore developed an explainable AI system in a cloud framework, labeled the “COVLIAS 2.0-cXAI” system, which was our primary novelty [47,48,49,50,51,52]. The COVLIAS 2.0-cXAI design consisted of three stages (Figure 1): (i) automated lung segmentation using a hybrid deep learning ResNet-UNet model with automatic adjustment of Hounsfield units [53], hyperparameter optimization [54], and a parallel, distributed training design; (ii) classification using three kinds of DenseNet (DN) models (DN-121, DN-169, DN-201) [55,56,57,58]; and (iii) scientific validation using four kinds of class activation mapping (CAM) visualization techniques: gradient-weighted class activation mapping (Grad-CAM) [59,60,61,62,63], Grad-CAM++ [64,65,66,67], score-weighted CAM (Score-CAM) [68,69,70], and FasterScore-CAM [71,72].…”
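As a concrete illustration of stage (iii), the sketch below computes a Grad-CAM heatmap for a DenseNet-121 slice classifier in PyTorch. This is a minimal sketch, not the COVLIAS 2.0-cXAI implementation: the two-class head, the hooked target layer, and the random placeholder CT slice are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Hypothetical two-class DenseNet-121 (e.g., COVID vs. control); untrained weights for illustration.
model = models.densenet121(weights=None, num_classes=2)
model.eval()

activations, gradients = {}, {}

def save_activation(module, inputs, output):
    activations["value"] = output.detach()

def save_gradient(module, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()

# Hook the final feature block of DenseNet-121 (assumed target layer for Grad-CAM).
target_layer = model.features[-1]
target_layer.register_forward_hook(save_activation)
target_layer.register_full_backward_hook(save_gradient)

# Placeholder for a preprocessed CT slice (batch, channels, height, width).
ct_slice = torch.randn(1, 3, 224, 224)
logits = model(ct_slice)
class_idx = int(logits.argmax(dim=1))
logits[0, class_idx].backward()  # gradient of the predicted class score

# Grad-CAM: weight each feature map by its spatially averaged gradient,
# sum over channels, apply ReLU, upsample, and normalize to [0, 1].
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)   # (1, C, 1, 1)
cam = F.relu((weights * activations["value"]).sum(dim=1))     # (1, h, w)
cam = F.interpolate(cam.unsqueeze(1), size=ct_slice.shape[-2:],
                    mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)      # heatmap over the input slice
```

The resulting heatmap can be overlaid on the CT slice to indicate which lung regions drove the classification; Grad-CAM++, Score-CAM, and FasterScore-CAM differ mainly in how the per-channel weights are computed.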