2022
DOI: 10.3390/e24060843
Multi-Scale Mixed Attention Network for CT and MRI Image Fusion

Abstract: Recently, the rapid development of the Internet of Things has contributed to the emergence of telemedicine. However, online diagnoses by doctors require the analysis of multiple multi-modal medical images, which is inconvenient and inefficient. Multi-modal medical image fusion is proposed to solve this problem. Due to their outstanding feature extraction and representation capabilities, convolutional neural networks (CNNs) have been widely used in medical image fusion. However, most existing CNN-based medical …

Cited by 8 publications (5 citation statements); references 51 publications.
“…The decomposition module decomposes the low-light image into a lighting map and a reflectance map; the illumination enhancement module is responsible for enhancing the low-light image and denoising the reflectance map before outputting the reconstructed image. In recent years, deep neural networks have been widely used in the field of image enhancement, with good results, owing to their powerful nonlinear fitting ability [16]. Jiang Hai et al. [17] proposed a novel Real-low to Real-normal Network (R2RNet) based on Retinex theory.…”
Section: Related Work
confidence: 99%
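As a rough illustration of the Retinex-style decomposition described in this statement, the PyTorch sketch below predicts a reflectance map and an illumination map whose element-wise product reconstructs the input image. This is not R2RNet's actual architecture; all layer sizes and names are illustrative assumptions.

```python
# Minimal sketch of a Retinex-style decomposition module (assumed layout):
# a small CNN predicts a 3-channel reflectance map R and a 1-channel
# illumination map L such that the input image is approximately R * L.
import torch
import torch.nn as nn

class DecompNet(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        # 4 output channels: 3 for reflectance, 1 for illumination
        self.head = nn.Conv2d(channels, 4, 3, padding=1)

    def forward(self, x):
        out = torch.sigmoid(self.head(self.features(x)))
        reflectance, illumination = out[:, :3], out[:, 3:]
        return reflectance, illumination

low_light = torch.rand(1, 3, 128, 128)   # dummy low-light image
R, L = DecompNet()(low_light)
reconstruction = R * L                    # Retinex model: I = R ⊙ L
```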
“…In Section 3.3.1, the tandem structure of CAM and SAM suffers from distribution lag in feature extraction, which leads to an overall distortion of the color of the enhanced low-light image, whereas connecting the two in parallel attends to both the light information and the color information of the low-light image. In addition, it was shown in [16] that low-light images have extremely high local dependencies between neighboring pixels, while PAM can adaptively rescale the per-pixel weights of all input feature maps. For this reason, we placed PAM after the parallel structure of CAM and SAM to adjust all input features channel-by-channel and pixel-by-pixel.…”
Section: PLOS ONE
confidence: 99%
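The parallel CAM/SAM arrangement followed by PAM could look roughly like the PyTorch sketch below; the attention modules are simplified placeholders, not the cited paper's exact layers.

```python
# Sketch of the attention arrangement described above: channel attention (CAM)
# and spatial attention (SAM) applied in parallel, then a pixel attention
# module (PAM) that rescales every pixel of every channel.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):            # CAM: one weight per channel
    def __init__(self, c, r=8):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(c, c // r), nn.ReLU(), nn.Linear(c // r, c))
    def forward(self, x):
        w = torch.sigmoid(self.mlp(x.mean(dim=(2, 3))))
        return x * w[:, :, None, None]

class SpatialAttention(nn.Module):            # SAM: one weight per location
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, 7, padding=3)
    def forward(self, x):
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(s))

class PixelAttention(nn.Module):              # PAM: one weight per channel and pixel
    def __init__(self, c):
        super().__init__()
        self.conv = nn.Conv2d(c, c, 1)
    def forward(self, x):
        return x * torch.sigmoid(self.conv(x))

class ParallelAttention(nn.Module):
    def __init__(self, c):
        super().__init__()
        self.cam, self.sam, self.pam = ChannelAttention(c), SpatialAttention(), PixelAttention(c)
    def forward(self, x):
        # parallel CAM/SAM branches fused by addition, then refined by PAM
        return self.pam(self.cam(x) + self.sam(x))

y = ParallelAttention(32)(torch.rand(1, 32, 64, 64))
```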
“…Liu et al. propose a convolution-based CT and MRI image fusion model, MMAN (46). The model consists of three parts: two separate encoder blocks, one for each modality (CT and MRI), a fusion block, and a decoder block (Figure 3).…”
Section: Beyond Cardiovascular
confidence: 99%
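A minimal PyTorch sketch of the three-part layout described for MMAN (separate CT and MRI encoders, a fusion block, and a decoder) is given below. The layer choices are assumptions for illustration only; the actual model uses multi-scale mixed attention blocks, as the paper's title indicates.

```python
# Illustrative two-encoder / fusion / decoder layout for CT-MRI fusion.
# All layer sizes are placeholder assumptions, not MMAN's real architecture.
import torch
import torch.nn as nn

def encoder(c=32):
    # per-modality encoder operating on a single-channel (grayscale) image
    return nn.Sequential(
        nn.Conv2d(1, c, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c, c, 3, padding=1), nn.ReLU(inplace=True),
    )

class FusionModel(nn.Module):
    def __init__(self, c=32):
        super().__init__()
        self.enc_ct, self.enc_mri = encoder(c), encoder(c)
        self.fuse = nn.Conv2d(2 * c, c, 1)             # fusion block
        self.dec = nn.Sequential(                      # decoder block
            nn.Conv2d(c, c, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, ct, mri):
        feats = torch.cat([self.enc_ct(ct), self.enc_mri(mri)], dim=1)
        return self.dec(self.fuse(feats))

fused = FusionModel()(torch.rand(1, 1, 256, 256), torch.rand(1, 1, 256, 256))
```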
“…We included Liu et al.'s work since they introduce a novel architecture that should be easily adaptable to cardiovascular imaging and evaluate their model purely on image fusion itself (46). Soenksen et al. provide a framework for how large health-record datasets combined with imaging information can be used for a variety of predictive tasks through a simple joining of architectures (51).…”
Section: Beyond Cardiovascular
confidence: 99%
“…Small filters are convolved across the input image in a sequence of convolutional layers in order to gather regional patterns and characteristics. The boundaries, surfaces, and contours that make up the visual world are captured by these features [22]. The feature maps are then downsampled by pooling layers, which reduces the computational burden while preserving crucial information.…”
Section: Feature Extraction by Convolutional Neural Network (CNN)
confidence: 99%
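The convolution-plus-pooling pipeline described above can be sketched in a few lines of PyTorch; the filter counts and kernel sizes here are arbitrary examples, not values from the cited work.

```python
# Minimal feature-extraction stack: small convolutional filters capture local
# edges and textures, and pooling layers downsample the feature maps to cut
# computation while keeping the salient responses.
import torch
import torch.nn as nn

feature_extractor = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.MaxPool2d(2),                       # halves spatial resolution
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.MaxPool2d(2),
)

maps = feature_extractor(torch.rand(1, 3, 224, 224))
print(maps.shape)                          # torch.Size([1, 32, 56, 56])
```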