Multimodal medical image fusion combines complementary features from two or more input imaging modalities into a single fused image. One of the pivotal clinical applications of medical image fusion is the merging of anatomical and functional modalities for fast diagnosis of malignant tissues. In this paper, we present a novel end-to-end unsupervised learning-based convolutional neural network (CNN) that fuses the high- and low-frequency components of MRI-PET grayscale image pairs, publicly available from the ADNI database, using the Structural Similarity Index (SSIM) as the loss function during training. We then apply color coding to visualize the fused image, quantifying the contribution of each input image in terms of the partial derivatives of the fused image. We find that our fusion and visualization approach yields better visual perception of the fused image while also comparing favorably to previous methods on various quantitative assessment metrics.
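To make the training setup concrete, here is a minimal sketch of unsupervised fusion training with an SSIM-based loss, assuming PyTorch. The toy architecture, the uniform-window SSIM, and the `loader` of MRI-PET pairs are illustrative assumptions, not the exact configuration from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def ssim(x, y, C1=0.01 ** 2, C2=0.03 ** 2, win=11):
    # Mean SSIM over uniform local windows (the paper may use a Gaussian window).
    mu_x = F.avg_pool2d(x, win, 1, win // 2)
    mu_y = F.avg_pool2d(y, win, 1, win // 2)
    var_x = F.avg_pool2d(x * x, win, 1, win // 2) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, win, 1, win // 2) - mu_y ** 2
    cov = F.avg_pool2d(x * y, win, 1, win // 2) - mu_x * mu_y
    num = (2 * mu_x * mu_y + C1) * (2 * cov + C2)
    den = (mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2)
    return (num / den).mean()

class FusionNet(nn.Module):
    # Toy two-input fusion network; the paper's architecture differs.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, mri, pet):
        return self.net(torch.cat([mri, pet], dim=1))

model = FusionNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for mri, pet in loader:  # assumed DataLoader of grayscale MRI-PET pairs in [0, 1]
    fused = model(mri, pet)
    # No ground truth available: maximize structural similarity to both inputs.
    loss = 2.0 - ssim(fused, mri) - ssim(fused, pet)
    opt.zero_grad()
    loss.backward()
    opt.step()
```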
Image fusion merges two or more images into a single, more informative fused image. Recently, unsupervised learning-based convolutional neural networks (CNNs) have been used for different image-fusion tasks, such as medical image fusion, infrared-visible image fusion for autonomous driving, and multi-focus and multi-exposure image fusion for satellite imagery. However, it is challenging to analyze the reliability of these CNNs for image-fusion tasks since no ground truth is available. This has led to a wide variety of model architectures and optimization functions yielding quite different fusion results. Additionally, due to the highly opaque nature of such neural networks, it is difficult to explain the internal mechanics behind their fusion results. To overcome these challenges, we present a novel real-time visualization tool, named FuseVis, with which the end user can compute per-pixel saliency maps that examine the influence of the input image pixels on each pixel of the fused image. We trained several fusion CNNs on medical image pairs and then used FuseVis to perform case studies on a specific clinical application by interpreting the saliency maps of each fusion method. We specifically visualized the relative influence of each input image on the predictions of the fused image and showed that some of the evaluated fusion methods are better suited to the specific clinical application. To the best of our knowledge, there is currently no approach for the visual analysis of neural networks for image fusion. This work therefore opens a new research direction for improving the interpretability of deep fusion networks. The FuseVis tool can also be adapted to other deep neural network-based image processing applications to make them interpretable.
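As an illustration of the kind of per-pixel saliency map FuseVis describes, the sketch below differentiates a single fused-image pixel with respect to every input pixel using automatic differentiation. It reuses the hypothetical `FusionNet` from the previous sketch; the exact gradient formulation used by FuseVis may differ.

```python
import torch

def saliency_maps(model, mri, pet, row, col):
    # Gradient of one fused pixel with respect to every pixel of both inputs.
    mri = mri.clone().requires_grad_(True)
    pet = pet.clone().requires_grad_(True)
    fused = model(mri, pet)
    g_mri, g_pet = torch.autograd.grad(fused[0, 0, row, col], (mri, pet))
    return g_mri[0, 0].abs(), g_pet[0, 0].abs()

s_mri, s_pet = saliency_maps(model, mri, pet, row=64, col=64)
# Relative influence of the MRI input on this particular fused pixel:
rel_mri = s_mri.sum() / (s_mri.sum() + s_pet.sum() + 1e-12)
```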
We present a low-cost, simple distributed-sensor model that is particularly suitable for measuring the driver's eye blinks, detecting accidents, and monitoring hand position on the steering wheel. These sensors can be used in automotive active safety systems that aim to detect driver fatigue, a major cause of road accidents. The key point of this approach is the design of a prototype sensor unit that serves as a platform for integrating different kinds of sensors into the steering wheel. Because the sensors are attached to the steering wheel, they cannot be detached by the driver. The system also detects dangerous, reckless driving that may lead to fatal accidents. The major drawback is that the eyewear frame carrying the eye blink sensor can be removed by the driver, rendering that sensor non-operational. In operation, the vibrator attached to the eye blink sensor's frame vibrates if the driver shuts his or her eyes for approximately 3 seconds, and the LCD displays a corresponding warning message. Depending on the condition, the wheel is slowed or stopped. At the same time, the vehicle's owner is notified through the GSM module, so the owner can retrieve the driver's location, a photograph, and a list of nearby police stations through an Android mobile application. The driver can thus be alerted during drowsiness while the owner is notified simultaneously.
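For clarity, the alert logic described above can be summarized as a control loop; the sketch below uses hypothetical hardware stubs (`eye_sensor`, `vibrator`, `lcd`, `wheel`, `gsm`, `gps`), since the actual system runs on embedded firmware.

```python
import time

CLOSED_LIMIT = 3.0  # seconds of continuous eye closure before alerting

closed_since = None
while True:
    if eye_sensor.eyes_closed():           # hypothetical sensor stub
        closed_since = closed_since or time.monotonic()
        if time.monotonic() - closed_since >= CLOSED_LIMIT:
            vibrator.on()                  # vibrator on the eyewear frame
            lcd.show("DROWSINESS DETECTED - WAKE UP")
            wheel.slow_or_stop()           # slow or stop depending on the condition
            gsm.notify_owner(gps.read())   # owner's Android app fetches the details
    else:
        closed_since = None
        vibrator.off()
    time.sleep(0.1)
```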
Medical image fusion enhances significant and valuable information, such as the exact localisation of abnormalities, in multimodal medical images. In the clinical environment, medical imaging plays an important role in assisting doctors and radiologists, and the information available in the images is crucial during diagnosis. This information can be enriched by multimodal medical image fusion, which integrates information from several imaging modalities. Several methodologies have been proposed for fusing medical images, yet multimodal medical image fusion remains a challenging task owing to the degradation of medical images at the acquisition phase. To handle this problem, this paper develops an enhanced multi-objective medical image fusion model. Before the fusion process, both input images are split into high-frequency and low-frequency sub-bands by an improved Fast Discrete Curvelet Transform (FDCuT). The low-frequency sub-images are fused by the averaging method, and the high-frequency sub-images are fused by optimised Type-2 fuzzy entropy. Both the FDCuT and the Type-2 fuzzy entropy are tuned by a multi-objective meta-heuristic, Adaptive Electric Fish Optimisation (A-EFO). The multi-objective function targets the Peak Signal-to-Noise Ratio (PSNR), Structural SIMilarity (SSIM), and Feature SIMilarity (FSIM). Comparison of the developed methodology against traditional approaches shows enhanced performance with respect to visual quality measures.
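The overall pipeline can be sketched as follows, with `fdcut`/`ifdcut` standing in for the improved Fast Discrete Curvelet Transform and its inverse, and `type2_fuzzy_fuse` for the optimised Type-2 fuzzy-entropy rule; all of these, along with the metric functions and the A-EFO search loop, are placeholders rather than the paper's implementation.

```python
import numpy as np

def fuse(img_a, img_b, params):
    low_a, high_a = fdcut(img_a, params)  # split into low/high-frequency sub-bands
    low_b, high_b = fdcut(img_b, params)
    low_f = 0.5 * (low_a + low_b)         # low-frequency rule: averaging
    high_f = [type2_fuzzy_fuse(ha, hb, params)  # high-frequency rule: Type-2 fuzzy entropy
              for ha, hb in zip(high_a, high_b)]
    return ifdcut(low_f, high_f, params)  # inverse transform reconstructs the fused image

def objectives(fused, img_a, img_b):
    # Multi-objective score vector over PSNR, SSIM, and FSIM (implementations omitted).
    return np.array([psnr(fused, img_a, img_b),
                     ssim(fused, img_a, img_b),
                     fsim(fused, img_a, img_b)])

# A-EFO would search `params` so as to jointly maximize all three objectives.
```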