In image-based medical decision-making, multiple imaging modalities of a patient's organ are captured. Each modality renders the examined organ differently, yielding complementary observations of a given phenomenon (such as stroke). Accurate analysis of each of these modalities supports more appropriate medical decisions. Multimodal medical image fusion is a research field devoted to developing robust algorithms that combine image information acquired across different sets of modalities. In this paper, a novel multimodal medical image fusion algorithm is proposed for a wide range of medical diagnostic problems. It applies a boundary-measured pulse-coupled neural network fusion strategy and an energy-attribute fusion strategy in the non-subsampled shearlet transform domain. Our algorithm was validated on datasets covering several diseases, namely glioma, Alzheimer's disease, and metastatic bronchogenic carcinoma, comprising more than 100 image pairs. Qualitative and quantitative evaluations verify that the proposed algorithm outperforms most current algorithms, providing valuable support for medical diagnosis.
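As a rough illustration of the two fusion rules named above, the sketch below fuses a low-frequency band by a local-energy comparison and a high-frequency band by a max-absolute rule. This is a minimal stand-in only: the paper's actual method uses a non-subsampled shearlet decomposition and a boundary-measured pulse-coupled neural network, neither of which is reproduced here; `fuse_bands` and its window parameter are hypothetical names for this sketch.

```python
import numpy as np

def fuse_bands(low_a, low_b, high_a, high_b, win=3):
    """Toy two-band fusion sketch (NOT the paper's algorithm):
    - low-frequency band: pick the source with larger local energy
      (a simplified energy-attribute rule);
    - high-frequency band: pick the larger-magnitude coefficient
      (a crude stand-in for the boundary-measured PCNN rule)."""
    def local_energy(x):
        # Average of squared coefficients over a win x win neighborhood.
        pad = win // 2
        xp = np.pad(np.asarray(x, dtype=float) ** 2, pad, mode="edge")
        out = np.zeros_like(x, dtype=float)
        for i in range(x.shape[0]):
            for j in range(x.shape[1]):
                out[i, j] = xp[i:i + win, j:j + win].mean()
        return out

    ea, eb = local_energy(low_a), local_energy(low_b)
    low_f = np.where(ea >= eb, low_a, low_b)                       # energy rule
    high_f = np.where(np.abs(high_a) >= np.abs(high_b), high_a, high_b)
    return low_f, high_f
```

In a full pipeline, these rules would be applied per decomposition band before inverting the transform to obtain the fused image.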
The global COVID-19 pandemic, caused by the severe acute respiratory syndrome~(SARS) coronavirus, has spread worldwide at an unprecedented pace. Several countries suffer from shortages of test kits and a high false-negative rate in PCR testing, so improving detection from chest X-ray or CT images has become critical. Patient triage is of utmost importance, and machine learning can support the diagnosis of chest X-ray or CT images by identifying COVID-19 cases. To tackle this problem, we propose~COVIDPEN~-~a transfer learning approach on a Pruned EfficientNet-based model for the detection of COVID-19 cases. The proposed model is further interpreted through post-hoc analysis to explain its predictions. The effectiveness of our proposed model is demonstrated on two systematic datasets of chest radiographs and computed tomography scans. Experimental results with several baseline comparisons show that our method is on par with existing approaches and yields clinically explicable predictions intended for healthcare providers.
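The "pruned" component of the model above can be illustrated with a generic magnitude-based pruning step, which zeroes out the smallest-magnitude fraction of a weight tensor. This is a hedged sketch only: the abstract does not specify the pruning criterion COVIDPEN uses, and `magnitude_prune` is a hypothetical helper, not the paper's implementation (which prunes an EfficientNet backbone).

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out roughly the smallest-magnitude `sparsity` fraction of
    `weights` (illustrative magnitude pruning; ties at the threshold
    may prune slightly more than the requested fraction)."""
    w = np.asarray(weights, dtype=float)
    flat = np.abs(w).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return w.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    mask = np.abs(w) > threshold                   # keep strictly larger weights
    return w * mask
```

In transfer learning, such a step would typically be applied to a pretrained backbone before fine-tuning on the target chest X-ray or CT dataset.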