2019
DOI: 10.1109/access.2019.2947378
Multi-Focus Image Fusion Based on Residual Network in Non-Subsampled Shearlet Domain

Abstract: In order to obtain a panoramic image that is clearer and has more layers and texture features, we propose an innovative multi-focus image fusion algorithm combining the non-subsampled shearlet transform (NSST) and a residual network (ResNet). First, NSST decomposes a pair of input images to produce subband coefficients of different frequencies for subsequent feature processing. Then, ResNet is applied to fuse the low-frequency subband coefficients, and improved gradient sum of Laplace energy (IGSML) perfor…
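The pipeline summarized in the abstract can be sketched at a high level. The sketch below is illustrative only: it replaces NSST with a simple box-filter low/high-pass split, replaces the ResNet low-frequency fusion with plain averaging, and stands in for IGSML with a smoothed sum-of-Laplacian-energy focus measure. All function names are hypothetical, not from the paper.

```python
import numpy as np

def lowpass(img, k=5):
    # Box blur: a crude stand-in for the NSST low-frequency band
    # (the paper uses a non-subsampled shearlet decomposition).
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def laplacian_energy(img):
    # Smoothed squared Laplacian: a simplified focus measure,
    # standing in for the paper's IGSML rule.
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return lowpass(lap ** 2, k=5)

def fuse(a, b):
    # 1) decompose each image into a low band and a high (detail) band
    la, lb = lowpass(a), lowpass(b)
    ha, hb = a - la, b - lb
    # 2) low band: average (placeholder for the ResNet-based decision)
    low = 0.5 * (la + lb)
    # 3) high band: keep details from the image with higher focus energy
    mask = laplacian_energy(a) >= laplacian_energy(b)
    high = np.where(mask, ha, hb)
    # 4) reconstruct (NSST inversion in the real method)
    return low + high
```

Because the split is additive here, fusing an image with itself returns the image unchanged, which is a useful sanity check for any such pipeline.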

Cited by 26 publications (12 citation statements)
References 44 publications
“…However, it is very time-consuming, and it is also very hard to design the network structure and to train the data sets. Shuaiqi et al. in [10] designed an image fusion algorithm by combining NSST and a residual network (ResNet). It first decomposes the input images by NSST; then ResNet is applied to the low-frequency images, and an enhanced gradient sum of Laplacian energy is performed on the high-frequency images.…”
Section: The Related Work
confidence: 99%
“…However, one model cannot capture enough information, because the limited focal length of a camera makes it hard to bring all objects into focus. Therefore, combining images of the same scene captured at different focal lengths is known as multi-focus image fusion [8][9][10][11]. Similarly, CT and MRI images are used to diagnose many medical conditions.…”
Section: Introduction
confidence: 99%
“…Due to the limitation of the depth of field in the optical lens, objects at different distances in the same scene cannot be fully focused by cameras. The area within the depth of field is usually a sharp focus area, while the area outside the depth of field is usually a blurry defocus area [1]. Multi-focus image fusion technology is used to extract different focus areas from multiple images in the same scene to synthesize a clear image.…”
Section: Introduction
confidence: 99%
“…To a certain extent, the position of an image pixel can represent the spatial position of a target object. Methods that process pixels directly are called spatial-domain methods, while transform-domain methods process the transformed coefficients after mapping pixels into another feature domain through filters or mathematical transformations such as the low-pass filter, Fourier transform, and wavelet transform [7]-[10]. The most widely used spatial-domain methods include the non-local block method, the edge-preserving method, and so on.…”
Section: Introduction
confidence: 99%
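The spatial-domain idea quoted above — operate on pixels directly — can be illustrated with a minimal block-wise selection rule: measure sharpness per block and copy pixels from whichever source image is sharper there. The focus measure (variance of a discrete Laplacian), the block size, and the function names are illustrative assumptions, not taken from the cited papers.

```python
import numpy as np

def block_sharpness(img, y, x, s):
    # Variance of a simple Laplacian inside one s-by-s block:
    # a common spatial-domain focus measure.
    blk = img[y:y + s, x:x + s]
    lap = (np.roll(blk, 1, 0) + np.roll(blk, -1, 0)
           + np.roll(blk, 1, 1) + np.roll(blk, -1, 1) - 4 * blk)
    return lap.var()

def fuse_spatial(a, b, s=8):
    # For each block, keep the pixels of the sharper source image.
    out = np.empty_like(a)
    for y in range(0, a.shape[0], s):
        for x in range(0, a.shape[1], s):
            src = a if block_sharpness(a, y, x, s) >= block_sharpness(b, y, x, s) else b
            out[y:y + s, x:x + s] = src[y:y + s, x:x + s]
    return out
```

Given one image focused on the left half of a scene and one focused on the right, this rule assembles the fused image from the in-focus blocks of each source.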
“…Multi-scale transforms (MSTs) overcome the shortcoming of pyramid transforms, which always generate redundant data. As a result, MSTs such as the Curvelet, Contourlet, and Shearlet have been widely used in image fusion [7], [14], [15]. In this kind of image fusion method, the source images are first processed by different image transforms.…”
Section: Introduction
confidence: 99%
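The three-step MST recipe implied above — transform the sources, fuse the coefficients band by band, then invert the transform — can be sketched with a single-level 2-D Haar transform standing in for the Curvelet/Contourlet/Shearlet. The fusion rules used here (average the approximation band, take the max-absolute detail coefficient) are common defaults, not those of any cited method.

```python
import numpy as np

def haar2d(img):
    # One level of a 2-D Haar transform: an approximation band (LL)
    # plus three detail bands (LH, HL, HH). Image sides must be even.
    a = (img[0::2, :] + img[1::2, :]) / 2
    d = (img[0::2, :] - img[1::2, :]) / 2
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    # Exact inverse of haar2d.
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out

def fuse_mst(x, y):
    # Transform-domain recipe: decompose, fuse per band, invert.
    bx, by = haar2d(x), haar2d(y)
    fused = [0.5 * (bx[0] + by[0])]            # approximation: average
    for cx, cy in zip(bx[1:], by[1:]):         # details: max-absolute rule
        fused.append(np.where(np.abs(cx) >= np.abs(cy), cx, cy))
    return ihaar2d(*fused)
```

Swapping `haar2d`/`ihaar2d` for a richer transform changes only step 1 and step 3; the per-band fusion rules are what each method in the literature varies.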