2020
DOI: 10.1049/iet-ipr.2019.0883

Multi‐focus image fusion with Siamese self‐attention network

Cited by 15 publications (11 citation statements)
References 38 publications
“…In this scenario, Liu et al. [26] first employ simple convolutional layers to construct the encoder. A similar approach is also used in MLFCNN [27], ECNN [29], SSAN [37], FuseGAN [41], SESFFuse [38], MLCNN [28] and IFCNN [33]. Differently, attention mechanisms are further borrowed to enhance the capacity of feature extraction in [37, 38], whereas [4] takes the gradient as an extra input in addition to the source images.…”
Section: Network Backbone for MFIF
confidence: 99%
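As a rough illustration of what "simple convolutional layers to construct the encoder" can look like in practice, here is a minimal PyTorch sketch; the layer count, width, and kernel size are illustrative assumptions, not the configuration of any of the cited methods.

```python
# Minimal sketch (assumed hyperparameters) of a plain convolutional
# encoder of the kind these MFIF methods build on.
import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    """Stack of plain 3x3 conv layers mapping a source image to features."""
    def __init__(self, in_channels: int = 1, width: int = 64, depth: int = 4):
        super().__init__()
        layers = []
        ch = in_channels
        for _ in range(depth):
            layers += [nn.Conv2d(ch, width, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True)]
            ch = width
        self.body = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

if __name__ == "__main__":
    enc = ConvEncoder()
    feats = enc(torch.randn(1, 1, 128, 128))  # one grayscale source image
    print(feats.shape)  # torch.Size([1, 64, 128, 128])
```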
“…The typical residual blocks [30] are employed in [31–33], whereas the work in [34] leverages dense blocks [35]. Also, various attention mechanisms have been adopted in [36–38]. In contrast to the encoders in these methods, the encoder is not always a fully convolutional network.…”
Section: Introduction
confidence: 99%
“…Li et al. [34] proposed a multi-focus image fusion method based on the deep regression pair learning (DRPL) model, which converts the entire image into a binary mask for fusion. Guo et al. [35] proposed a Siamese network based on a self-attention mechanism, aiming to alleviate the local receptive field limitation of convolution operators. However, these methods still require labelled images for supervised training of the network.…”
Section: Related Work
confidence: 99%
“…Recently, models based on convolutional neural networks (CNNs) and generative adversarial networks (GANs) have attracted wide attention for image enhancement tasks [25–27]. Owing to their excellent feature extraction ability, CNN-based models have achieved impressive progress in image fusion [28, 29], detection [30] and classification [31] tasks. Many CNN-based methods can effectively extract the gradient map and the estimated feature map to reconstruct the enhanced result.…”
Section: Motivation and Preliminary
confidence: 99%
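As a concrete illustration of the "gradient map" this excerpt mentions, here is a minimal sketch using fixed Sobel kernels; the cited CNN-based methods may instead compute such maps with learned filters.

```python
# Hedged sketch: per-pixel gradient magnitude via fixed Sobel kernels.
import torch
import torch.nn.functional as F

def gradient_map(img: torch.Tensor) -> torch.Tensor:
    """img: (b, 1, h, w) grayscale float tensor -> gradient magnitude map."""
    sobel_x = torch.tensor([[-1., 0., 1.],
                            [-2., 0., 2.],
                            [-1., 0., 1.]]).view(1, 1, 3, 3)
    sobel_y = sobel_x.transpose(2, 3)  # Sobel kernel for the vertical direction
    gx = F.conv2d(img, sobel_x, padding=1)
    gy = F.conv2d(img, sobel_y, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-12)  # eps keeps sqrt differentiable
```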