2022
DOI: 10.1049/ipr2.12549
CEFusion: Multi‐Modal medical image fusion via cross encoder

Abstract: Most existing deep learning-based multi-modal medical image fusion (MMIF) methods utilize single-branch feature extraction strategies to achieve good fusion performance. However, for MMIF tasks, this structure cuts off the internal connections between source images, resulting in information redundancy and degraded fusion performance. To this end, this paper proposes a novel unsupervised network, termed CEFusion. Different from existing architectures, a cross-encoder is designed by explo…

Cited by 6 publications (4 citation statements) | References 42 publications
“…All methods are implemented in MATLAB 2022b. For comparison, we use seven algorithms, namely Laplacian pyramid and CNN reconstruction with a local gradient energy strategy (LPCNNR) (Fu et al., 2020), an unsupervised Enhanced Medical image fusion network (EMFusion) (Xu and Ma, 2021), MRPAN (Fu et al., 2021), a non-subsampled contourlet transform with CNN (NSCT-CNN) (Wang et al., 2021a), Cross Encoder Fusion (CEFusion) (Zhu et al., 2022), FDGNet (Zhang et al., 2023b), and a Joint Sparse Model with Coupled Dictionary (JSM-CD) (Zhang et al., 2023a), along with five metrics, namely Average Pixel Intensity (API) or mean (F̄), Entropy (H), Average Gradient (AG), overall fusion efficiency (Q^{AB/F}), and information loss during the fusion process (L^{AB/F}) (Goyal et al., 2023), to evaluate the techniques subjectively and objectively.…”
Section: Datasets and Experimental Details
confidence: 99%
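The first three metrics named in this statement have short closed-form definitions: API is the mean of the fused image, H is the Shannon entropy of its grey-level histogram, and AG averages the local gradient magnitude. Below is a minimal Python sketch of these three; the NumPy-based helper names are illustrative assumptions, not code from any cited paper, and Q^{AB/F} and L^{AB/F} (edge-transfer measures) are omitted because they require a fuller edge-strength and orientation model.

```python
# Minimal sketch of three fusion metrics (API, H, AG) for an 8-bit
# grayscale fused image. Function names are illustrative assumptions.
import numpy as np

def api(fused: np.ndarray) -> float:
    """Average Pixel Intensity: the mean of the fused image F."""
    return float(fused.mean())

def entropy(fused: np.ndarray, levels: int = 256) -> float:
    """Shannon entropy H = -sum(p * log2 p) over the grey-level histogram."""
    hist, _ = np.histogram(fused, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return float(-(p * np.log2(p)).sum())

def average_gradient(fused: np.ndarray) -> float:
    """Average Gradient: mean magnitude of horizontal/vertical differences."""
    f = fused.astype(np.float64)
    gx = np.diff(f, axis=1)[:-1, :]  # horizontal differences, cropped to (H-1, W-1)
    gy = np.diff(f, axis=0)[:, :-1]  # vertical differences, cropped to (H-1, W-1)
    return float(np.mean(np.sqrt((gx**2 + gy**2) / 2.0)))
```

Higher values of H and AG are conventionally read as more information content and sharper detail in the fused result, which is how such tables are typically compared across methods.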
“…The CNN was used and merged with SR by Shibu and Priyadharsini (2021). A cross-encoder (CE)-based technique for image synthesis was presented by Zhu et al. (2022), and Tawfik et al. (2022) introduced the use of SAE to fuse medical images in the NSCT domain. An attention network called the Multi-scale Residual Pyramid (MRPAN) was proposed by Fu et al. (2020).…”
Section: Introduction
confidence: 99%
“…One of the major reasons for the limited development of medical image processing algorithms relative to natural image processing algorithms is that medical images lack datasets as large and high-quality as those available for natural images [6, 7]. Before the development and application of CAD, doctors reconstructed images according to actual needs when acquiring medical images, leaving a large amount of image data with poor depth-dimensional resolution [8, 9].…”
Section: Introduction
confidence: 99%
“…Multi-modal feature integration [6] is an important task in multi-modal learning. Many existing fusion methods can be categorized into simple operation-based fusion [7-11], bilinear pooling fusion [12-17], and attention-based fusion [18-21].…”
Section: Introduction
confidence: 99%
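To make two of the fusion families in the last statement concrete, here is a minimal sketch, assuming NumPy feature maps of shape (channels, H, W). The function names and the L1-activity attention score are illustrative assumptions, not the method of any cited work; bilinear pooling fusion is omitted since it typically involves learned projection matrices.

```python
# Minimal sketch contrasting simple operation-based fusion (a fixed
# elementwise rule) with attention-based fusion (data-derived weights).
import numpy as np

def simple_fusion(feat_a: np.ndarray, feat_b: np.ndarray, op: str = "add") -> np.ndarray:
    """Operation-based fusion: combine features with a fixed elementwise rule."""
    if op == "add":
        return feat_a + feat_b
    if op == "max":
        return np.maximum(feat_a, feat_b)
    raise ValueError(f"unknown op: {op}")

def attention_fusion(feat_a: np.ndarray, feat_b: np.ndarray) -> np.ndarray:
    """Attention-based fusion: per-pixel softmax weights from feature activity."""
    # L1 activity summed over the channel axis as a crude attention score.
    score_a = np.abs(feat_a).sum(axis=0, keepdims=True)
    score_b = np.abs(feat_b).sum(axis=0, keepdims=True)
    m = np.maximum(score_a, score_b)       # subtract max for numerical stability
    exp_a = np.exp(score_a - m)
    exp_b = np.exp(score_b - m)
    w_a = exp_a / (exp_a + exp_b)          # softmax over the two modalities
    return w_a * feat_a + (1.0 - w_a) * feat_b
```

The operation-based rule is cheap but treats both modalities identically everywhere, while the attention-based rule lets the more active modality dominate at each spatial location, which is the basic trade-off the citing paper's taxonomy captures.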