Multi-modal image fusion techniques aid medical experts in disease diagnosis by providing complementary information from multi-modal medical images, enhancing the effectiveness of disorder analysis and classification. This study proposes a novel deep-learning-based technique for the fusion of multi-modal medical images. A Modified 2D Adaptive Bilateral Filter (M-2D-ABF) algorithm is used during pre-processing to remove various types of noise. Contrast and brightness are improved with the proposed energy-based CLAHE algorithm, which preserves the high-energy regions of the multi-modal images. Images from two modalities are first registered using mutual information, and the registered images are then fused into a single image. In the proposed scheme, images are fused with a Siamese Neural Network and Entropy (SNNE)-based fusion algorithm: the medical images are fused using a Siamese convolutional neural network together with the entropy of the images, with the fusion decision based on the score of the softmax layer and the image entropy. The fused image is segmented using the Fast Fuzzy C-Means Clustering (FFCMC) algorithm and Otsu thresholding. Finally, various features are extracted from the segmented regions and used for classification with a logistic regression classifier. Evaluation is performed on a publicly available benchmark dataset. Experimental results on various pairs of multi-modal medical images show that the proposed fusion and classification techniques are competitive with the state-of-the-art techniques reported in the literature.
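As a rough illustration of the entropy component of the fusion step, the following Python sketch fuses two co-registered grayscale images with weights derived purely from their Shannon entropies. The Siamese-network softmax score used in the actual SNNE algorithm is not modelled here, so the entropy-only weighting rule is an assumption, not the paper's method.

```python
# Minimal sketch: entropy-weighted fusion of two registered 8-bit images.
# Hypothetical stand-in for SNNE: the Siamese softmax score is replaced
# by a plain per-image Shannon-entropy weight.
import numpy as np

def shannon_entropy(img: np.ndarray) -> float:
    """Shannon entropy (bits) of an 8-bit grayscale image."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before log
    return float(-(p * np.log2(p)).sum())

def entropy_weighted_fusion(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Fuse two co-registered grayscale images with entropy-derived weights."""
    ea, eb = shannon_entropy(a), shannon_entropy(b)
    wa, wb = ea / (ea + eb), eb / (ea + eb)   # normalised weights
    fused = wa * a.astype(np.float64) + wb * b.astype(np.float64)
    return np.clip(fused, 0, 255).astype(np.uint8)
```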
This research work implements an automated segmentation process to extract the endoplasmic reticulum (ER) network from fluorescence microscopy images (FMI) using pretrained convolutional neural networks (CNNs). The intensity range of raw FMI is complex, making extraction of the ER network a challenging task; hence, an image-conversion procedure is first applied to reduce this complexity. This work employs pretrained CNN schemes, namely VGG-UNet and VGG-SegNet, to extract the ER network from the chosen FMI test images. The proposed ER segmentation pipeline consists of the following phases: (i) clinical image collection, 16-bit to 8-bit conversion, and resizing; (ii) implementation of pretrained VGG-UNet and VGG-SegNet; (iii) extraction of the binary form of the ER network; (iv) comparison of the extracted ER with the ground truth; and (v) computation of image measures and validation. The FMI dataset consists of 223 test images, which are expanded through image augmentation. The results of this scheme are then compared against other CNN methods, such as U-Net, SegNet, and Res-UNet. The experimental outcome confirms a segmentation accuracy of >98% with VGG-UNet and VGG-SegNet, indicating that the proposed pipeline is suitable for examining clinical-grade FMI.
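A minimal sketch of phase (i) of the pipeline follows: 16-bit to 8-bit conversion and resizing. Min-max scaling for the conversion and a 224x224 network input size are assumptions for illustration; the abstract does not state either detail.

```python
# Minimal sketch of FMI pre-processing: 16-bit -> 8-bit conversion
# (min-max scaling, an assumed method) followed by resizing.
import numpy as np
import cv2

def fmi_to_8bit(img16: np.ndarray, size=(224, 224)) -> np.ndarray:
    """Convert a 16-bit fluorescence image to 8-bit and resize.

    `size` is an assumed network input shape, not taken from the paper.
    """
    lo, hi = img16.min(), img16.max()
    scale = 255.0 / max(int(hi) - int(lo), 1)     # guard flat images
    img8 = ((img16.astype(np.float64) - lo) * scale).astype(np.uint8)
    return cv2.resize(img8, size, interpolation=cv2.INTER_AREA)
```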
A vehicular ad hoc network (VANET) is a type of mobile ad hoc network (MANET) in which vehicles act as nodes. Routing is a fundamental requirement of VANET applications; it is therefore necessary to devise a routing protocol that copes well with rapid topology changes and disconnected network conditions. To address these specific needs of VANETs, we present VRPTA, a novel greedy routing protocol for vehicular networks that suits both city and highway environments. Using the Global Positioning System (GPS) for localisation, the proposed protocol relays data efficiently by considering scenarios such as road-traffic variation and differing environmental characteristics. The protocol supports both vehicle-to-vehicle and vehicle-to-infrastructure communication, whichever is applicable, thereby ensuring reliable transmission. In addition, we take into account vehicle speed, direction, and density for a city traffic configuration consisting of two-way, multi-lane roads, as well as a highway scenario. The work is implemented using the NS2 simulator.
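For illustration, here is a minimal Python sketch of the generic greedy geographic forwarding that protocols like VRPTA build on: each node picks the neighbour closest to the destination. The distance-only metric and the Vehicle fields are assumptions; the abstract does not specify how speed, direction, and density enter VRPTA's selection rule.

```python
# Minimal sketch of greedy geographic next-hop selection, assuming each
# beacon carries only a GPS position; the distance-only metric is a
# simplification of VRPTA's traffic-aware selection.
import math
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Vehicle:
    node_id: int
    x: float
    y: float

def dist(a: Tuple[float, float], b: Tuple[float, float]) -> float:
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_next_hop(current: Vehicle, neighbours: List[Vehicle],
                    dest: Tuple[float, float]) -> Optional[Vehicle]:
    """Pick the neighbour geographically closest to the destination.

    Returns None when no neighbour improves on the current node
    (a local maximum, where a recovery strategy would take over).
    """
    best, best_d = None, dist((current.x, current.y), dest)
    for n in neighbours:
        d = dist((n.x, n.y), dest)
        if d < best_d:
            best, best_d = n, d
    return best
```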