Traditional medical image fusion methods, such as the widely used multi-scale decomposition-based methods, usually suffer from poor sparse representation of salient features and from fusion rules with limited ability to transfer the captured feature information. To address this problem, a medical image fusion method based on the scale-invariant feature transform (SIFT) descriptor and a deep convolutional neural network (CNN) in the shift-invariant shearlet transform (SIST) domain is proposed. First, the images to be fused are decomposed into high-pass and low-pass coefficients. Then, the high-pass components are fused under a rule based on a pre-trained CNN model, which consists of four steps: feature detection, initial segmentation, consistency verification, and the final fusion; the low-pass subbands are fused according to the matching degree computed by the SIFT descriptor, which captures the features of the low-frequency components. Finally, the fusion result is obtained by the inverse SIST. Taking the standard deviation, QAB/F, entropy, and mutual information as objective measurements, the experimental results demonstrate that the proposed method preserves detailed information well without artifacts or distortions, and also achieves better quantitative performance.
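
As a rough illustration of the decompose-fuse-reconstruct structure described above, the following Python sketch uses a Gaussian low-pass/high-pass split as a stand-in for the SIST decomposition, a max-absolute rule as a placeholder for the CNN-based high-pass fusion, and plain averaging as a placeholder for the SIFT matching-degree rule; none of these stand-ins are the authors' actual components.

```python
# Minimal two-band fusion sketch (stand-ins only, not the paper's method).
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(img, sigma=2.0):
    """Split an image into low-pass and high-pass bands (SIST stand-in)."""
    low = gaussian_filter(img, sigma)
    return low, img - low

def fuse(img_a, img_b, sigma=2.0):
    low_a, high_a = decompose(img_a, sigma)
    low_b, high_b = decompose(img_b, sigma)
    # High-pass rule: keep the coefficient with larger magnitude
    # (placeholder for the pre-trained CNN decision map).
    high_f = np.where(np.abs(high_a) >= np.abs(high_b), high_a, high_b)
    # Low-pass rule: simple average (placeholder for the SIFT
    # matching-degree-based combination).
    low_f = 0.5 * (low_a + low_b)
    # Inverse of the two-band split is a plain sum.
    return low_f + high_f

if __name__ == "__main__":
    a = np.random.rand(256, 256)
    b = np.random.rand(256, 256)
    print(fuse(a, b).shape)
```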
Traditional methods for multi-focus image fusion, such as the typical multi-scale geometric analysis-based methods, are usually restricted by their sparse representation ability and by how efficiently the fusion rules transfer the captured features. Aiming to integrate partially focused images into a fully focused image of high quality, a complex-shearlet-feature-motivated generative adversarial network is constructed for multi-focus image fusion in this paper. Different from the popularly used wavelet, contourlet, and shearlet, the complex shearlet provides more flexible multiple scales, anisotropy, and directional subbands with approximate shift invariance, so features in the complex shearlet domain are more effective. With the help of the generative adversarial network, the whole procedure of multi-focus fusion is modeled as a process of adversarial learning. Finally, several experiments are implemented, and the results show that the proposed method outperforms popularly used fusion algorithms in terms of four typical objective metrics and visual comparison.
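
The following PyTorch sketch shows a minimal adversarial fusion setup of the kind the abstract describes: a generator that fuses two source images stacked along the channel axis, and a discriminator that judges the result. The complex shearlet feature extraction, loss design, and training schedule of the paper are omitted, and the layer sizes are illustrative assumptions.

```python
# Minimal GAN skeleton for two-image fusion (illustrative architecture).
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, src_a, src_b):
        # Fuse the two partially focused inputs into one image.
        return self.net(torch.cat([src_a, src_b], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 1),
        )

    def forward(self, img):
        return self.net(img)  # raw logits; pair with BCEWithLogitsLoss in training

if __name__ == "__main__":
    g, d = Generator(), Discriminator()
    a = torch.rand(4, 1, 128, 128)
    b = torch.rand(4, 1, 128, 128)
    fused = g(a, b)
    print(fused.shape, d(fused).shape)
```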
Image denoising plays an important role in CT imaging. Although filter-based, model-based, and deep learning-based denoising methods have achieved good results, they still face demanding constraints in real applications. To improve accuracy and efficiency, a residual learning network based on the broad learning system (ReBLS) is proposed. First, features are extracted from the noisy images to generate a feature node layer. Subsequently, the feature nodes are mapped through an orthogonally normalized random matrix into an enhancement node layer. Finally, the noise is removed by the residual network. Compared with commonly used deep learning schemes, broad learning replaces depth with expansion in breadth, so its training and processing time is much shorter. Meanwhile, learning the residual simplifies the learning process and enhances feature extraction. Experiments on an open dataset, with comparisons against five state-of-the-art denoising methods, confirm the accuracy and efficiency of the proposed method.
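
A minimal NumPy sketch of the broad-learning idea referenced above is given below: random feature nodes, an orthonormalized random mapping to enhancement nodes, ridge-regression output weights, and a residual target (the noise) that is subtracted from the noisy input. The node counts and toy patch setup are assumptions, not the configuration used in the paper.

```python
# Minimal broad learning system with residual (noise) targets.
import numpy as np

rng = np.random.default_rng(0)

def bls_fit(X, residual, n_feature=128, n_enhance=64, reg=1e-3):
    """Fit output weights mapping inputs X to the residual (noise) to subtract."""
    # Feature node layer: random linear map followed by tanh.
    Wf = rng.standard_normal((X.shape[1], n_feature))
    Z = np.tanh(X @ Wf)
    # Enhancement node layer: orthonormalized random matrix (via QR), then tanh.
    We, _ = np.linalg.qr(rng.standard_normal((n_feature, n_enhance)))
    H = np.tanh(Z @ We)
    A = np.hstack([Z, H])
    # Ridge-regression solution for the output weights.
    Wo = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ residual)
    return Wf, We, Wo

def bls_denoise(X_noisy, Wf, We, Wo):
    Z = np.tanh(X_noisy @ Wf)
    H = np.tanh(Z @ We)
    predicted_noise = np.hstack([Z, H]) @ Wo
    return X_noisy - predicted_noise  # residual learning: subtract predicted noise

if __name__ == "__main__":
    clean = rng.random((500, 49))                      # toy 7x7 patches
    noisy = clean + 0.1 * rng.standard_normal(clean.shape)
    params = bls_fit(noisy, noisy - clean)
    print(bls_denoise(noisy, *params).shape)
```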
Point cloud surface reconstruction for generating high-precision 3D models has been widely applied in various fields. To address the insufficient accuracy, pseudo-surfaces, and high time cost of traditional surface reconstruction algorithms for point cloud data, this paper proposes an improved Poisson surface reconstruction algorithm based on boundary constraints. For high-density point clouds obtained from 3D laser scanning, the proposed method first uses an octree instead of a KD-tree to search the local neighborhood; then it uses Open Multi-Processing (OpenMP) to accelerate normal estimation based on the moving least squares algorithm; meanwhile, a least-cost spanning tree is employed to make the normal directions consistent; and finally a screened Poisson algorithm with Neumann boundary constraints is proposed to reconstruct the point cloud. Experiments on three open datasets demonstrate that, compared with traditional methods, the proposed method effectively reduces the generation of pseudo-surfaces; its reconstruction time is about 16% shorter than that of the traditional Poisson reconstruction algorithm, and it produces better reconstruction results in terms of quantitative analysis and visual comparison.
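
For orientation, the following sketch runs a comparable (but not identical) pipeline with Open3D's built-in routines: k-NN normal estimation, spanning-tree-based normal orientation, screened Poisson reconstruction, and density-based trimming of weakly supported (pseudo-)surface patches. The octree neighborhood search, OpenMP-accelerated MLS normals, and the Neumann boundary constraint of the proposed method are not exposed by this wrapper; the input file name and parameter values are placeholders.

```python
# Sketch of a standard point-cloud-to-mesh pipeline with Open3D.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")  # hypothetical input file

# Estimate normals from local k-NN neighborhoods, then orient them
# consistently by propagating over a spanning tree of tangent planes.
pcd.estimate_normals(o3d.geometry.KDTreeSearchParamKNN(knn=30))
pcd.orient_normals_consistent_tangent_plane(k=30)

# Screened Poisson reconstruction; densities indicate vertex support.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)

# Trim low-density vertices to suppress pseudo-surfaces far from the data.
densities = np.asarray(densities)
mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.02))
o3d.io.write_triangle_mesh("scan_mesh.ply", mesh)
```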