2020
DOI: 10.1016/j.compbiomed.2020.103823
Multi-modal medical image fusion by Laplacian pyramid and adaptive sparse representation

Cited by 111 publications (41 citation statements)
References 22 publications
“…(1) The proposed method applies the image-detail retention capability of LatLRR while fully extracting the high-frequency information of an image by iteratively decomposing the original image. It compensates for the deficiency of LatLRR and enhances the display of the lesion by superimposing the saliency parts. (2) The feature map of the low-rank part of the original image is extracted using the VGG-19 network and then scaled up to match the size of the original image. The weight map generated in this way fits well with the low-rank part of the original image, whose pixel information is distributed blockwise…”
Section: Computational and Mathematical Methods in Medicine
confidence: 99%
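The VGG-19 weight-map step quoted above can be made concrete with a short sketch. The following is a minimal PyTorch/torchvision sketch, not the citing paper's implementation: the cut at relu2_2, the channel-wise L1 activity measure, bilinear upsampling, and the normalization rule are all assumptions, and `encoder`, `weight_maps`, and the input shapes are illustrative names.

```python
# Minimal sketch: turn VGG-19 features of the low-rank parts into
# fusion weight maps. Assumptions: relu2_2 cut, L1 channel activity,
# bilinear upsampling, soft normalization across the two inputs.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

# Truncate VGG-19 at relu2_2 (layers 0-8); the cut point is an assumption.
encoder = vgg19(weights=VGG19_Weights.DEFAULT).features[:9].eval()

def weight_maps(low_rank_a, low_rank_b):
    """low_rank_a/b: (1, 3, H, W) tensors holding the low-rank parts."""
    with torch.no_grad():
        act_a = encoder(low_rank_a).abs().sum(dim=1, keepdim=True)  # channel-wise L1
        act_b = encoder(low_rank_b).abs().sum(dim=1, keepdim=True)
    size = low_rank_a.shape[-2:]
    # Scale the activity maps back up to the original image size.
    act_a = F.interpolate(act_a, size=size, mode="bilinear", align_corners=False)
    act_b = F.interpolate(act_b, size=size, mode="bilinear", align_corners=False)
    w_a = act_a / (act_a + act_b + 1e-8)  # normalize across the two inputs
    return w_a, 1.0 - w_a
```

Under these assumptions, the fused low-rank part would then be `w_a * low_rank_a + w_b * low_rank_b`, recombined afterwards with the superimposed saliency parts the quote describes.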
“…Then, the processed coefficients can be converted into the fused image by the inverse MST. According to the decomposition method applied to the source images, MST can be divided into pyramid-based methods [2-4], wavelet-based methods [5-8], and multiscale geometric analysis (MGA)-based methods [9-18]. Due to the limitation of the preset functions in MST-based algorithms, some essential features of the source images, such as edge and texture information, may not be well expressed and extracted, which significantly reduces fusion performance.…”
Section: Introduction
confidence: 99%
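To make the decompose-fuse-invert pipeline concrete, here is a minimal sketch of pyramid-based MST fusion using a Laplacian pyramid with OpenCV. The fusion rules (max-absolute on detail layers, averaging on the base layer), the level count, and the file names are illustrative assumptions, not the rules of any particular cited method.

```python
# Minimal sketch of MST-style fusion: Laplacian-pyramid decomposition,
# per-level coefficient fusion, then inverse transform (collapse).
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    """Decompose an image into detail (band-pass) layers plus a base layer."""
    gauss = img.astype(np.float32)
    pyr = []
    for _ in range(levels):
        down = cv2.pyrDown(gauss)
        up = cv2.pyrUp(down, dstsize=(gauss.shape[1], gauss.shape[0]))
        pyr.append(gauss - up)  # band-pass detail layer
        gauss = down
    pyr.append(gauss)           # low-pass base layer
    return pyr

def fuse_pyramids(pyr_a, pyr_b):
    """Fuse coefficients level by level: max-absolute details, averaged base."""
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(pyr_a[:-1], pyr_b[:-1])]
    fused.append(0.5 * (pyr_a[-1] + pyr_b[-1]))
    return fused

def reconstruct(pyr):
    """Inverse MST: collapse the fused pyramid back into an image."""
    img = pyr[-1]
    for band in reversed(pyr[:-1]):
        img = cv2.pyrUp(img, dstsize=(band.shape[1], band.shape[0])) + band
    return np.clip(img, 0, 255).astype(np.uint8)

ct = cv2.imread("ct.png", cv2.IMREAD_GRAYSCALE)    # hypothetical file names
mri = cv2.imread("mri.png", cv2.IMREAD_GRAYSCALE)
fused = reconstruct(fuse_pyramids(laplacian_pyramid(ct), laplacian_pyramid(mri)))
cv2.imwrite("fused.png", fused)
```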
“…To overcome the over-fitting problem and accelerate network convergence, CONV2 includes a ReLU activation function [30] and batch normalization (BN) [31]. Through the convolution operations of the two convolutional layers, 64 feature maps of size 224×224 are obtained. In the fusion stage, pixel-level fusion rules are used to fuse the convolutional features of the multiple inputs, as expressed in equation (9). Generally speaking, three fusion rules are commonly used, namely the element-wise maximum, the element-wise sum, and the element-wise average.…”
Section: A Gabor Representation of Medical Images
confidence: 99%
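The three commonly used rules are one-liners on feature tensors. A NumPy sketch follows, with random arrays standing in for the 64 feature maps of size 224×224 (the shapes come from the quoted description; the data are placeholders).

```python
# Minimal sketch of the three pixel-level fusion rules on stacks of
# convolutional feature maps; random data stands in for real features.
import numpy as np

feat_a = np.random.rand(64, 224, 224).astype(np.float32)  # features of input A
feat_b = np.random.rand(64, 224, 224).astype(np.float32)  # features of input B

fused_max = np.maximum(feat_a, feat_b)  # element-wise maximum rule
fused_sum = feat_a + feat_b             # element-wise sum rule
fused_avg = 0.5 * (feat_a + feat_b)     # element-wise average rule
```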
“…In the transform domain, fusion methods can be classified as pyramid-based methods and wavelet-transform methods. Pyramid-based methods include the Laplacian pyramid, the Gaussian pyramid, the contrast pyramid, and the morphological pyramid [7-10]; these methods fail to incorporate spatial direction selectivity in the decomposition process, which produces block effects and introduces many artifacts along the edges of the fused image. Wavelet-transform fusion methods, such as the discrete wavelet transform, the redundant wavelet transform, and the multiwavelet transform [11-13], capture only limited directional information, obtain limited information in edge and texture areas, and cannot clearly characterize the edges of the images. In recent years, with the rise of deep learning, the convolutional neural network (CNN), an important branch of deep learning, has shown stronger feature-extraction capabilities than traditional methods and is better suited to image fusion [14-16].…”
Section: Introduction
confidence: 99%
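For the wavelet branch, single-level 2-D DWT fusion can be sketched with PyWavelets. The `db2` wavelet and the sub-band rules (average the approximation, max-absolute on the horizontal/vertical/diagonal details) are assumptions chosen for illustration, not the scheme of any cited method.

```python
# Minimal sketch of wavelet-domain fusion with a single-level 2-D DWT.
import numpy as np
import pywt

def dwt_fuse(img_a, img_b, wavelet="db2"):
    cA_a, details_a = pywt.dwt2(img_a.astype(np.float32), wavelet)
    cA_b, details_b = pywt.dwt2(img_b.astype(np.float32), wavelet)
    # Average the low-frequency approximation sub-band.
    cA = 0.5 * (cA_a + cA_b)
    # Max-absolute rule on each directional detail sub-band (H, V, D).
    details = tuple(np.where(np.abs(da) >= np.abs(db), da, db)
                    for da, db in zip(details_a, details_b))
    return pywt.idwt2((cA, details), wavelet)
```

The limitation the quote describes is visible here: one decomposition level yields only three directional detail sub-bands, so edge and texture content outside those orientations is poorly represented.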
“…Medical images can be divided into the following categories: computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), single-photon emission computed tomography (SPECT), and others. Usually, no single imaging method can reflect complete tissue information; medical image fusion technology can retain the diagnostic information of the input images to the maximum extent [3, 4]. Figure 1 shows an example of image fusion; the technique applies not only to medicine but also to multifocus and remote-sensing images.…”
Section: Introduction
confidence: 99%