2021
DOI: 10.48550/arxiv.2102.10526
Preprint
A Deep Decomposition Network for Image Processing: A Case Study for Visible and Infrared Image Fusion

Abstract: Image decomposition is a crucial subject in the field of image processing, as it can extract salient features from the source image. We propose a new image decomposition method based on a convolutional neural network. This method can be applied to many image processing tasks; in this paper, we apply the decomposition network to image fusion. We take an infrared image and a visible-light image as input and decompose each into three high-frequency feature images and one low-frequency feature image. The t…
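The abstract outlines a decompose-then-fuse pipeline: split each source image into high-frequency and low-frequency components, then combine the components across modalities. The paper learns this decomposition with a CNN; as a conceptual illustration only, the following NumPy sketch substitutes a fixed box blur for the learned low-pass step (all function names and the fusion rule here are assumptions, not the paper's method):

```python
import numpy as np

def box_blur(img, k=5):
    # Box filter used as a stand-in low-pass operation; the paper
    # learns its decomposition with a CNN, this is illustrative only.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def decompose(img):
    # Split an image into a low-frequency base and a high-frequency residual.
    low = box_blur(img)
    high = img - low
    return low, high

def fuse(visible, infrared):
    # Hypothetical fusion rule: average the low-frequency bases,
    # then add back both high-frequency residuals.
    low_v, high_v = decompose(visible)
    low_i, high_i = decompose(infrared)
    return 0.5 * (low_v + low_i) + high_v + high_i
```

By construction, `decompose` is lossless for each input (`low + high` reconstructs the image exactly), which is the property that lets the fused result retain detail from both modalities.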

Cited by 1 publication (2 citation statements); references 38 publications.
“…Furthermore, to demonstrate the fusion effect of the spatial discrepancy calibration module on misaligned datasets, extensive comparison experiments were carried out on the M3FD dataset, which exhibits severe misalignment. Several existing excellent works, such as DDFusion [38], DIDFuse [39], TarDAL [22], CFR [25], GAFF [7], CFT [19], MFPT [32], and ProbEn 3 [8], are compared with our method. To ensure fairness, the input size of the RGB and infrared images is set to 640 × 640, so none of the comparative experiments in this section used our size-adaptation process.…”
Section: Analysis of Results
confidence: 99%
“…A high-level approach fuses RGB and infrared images into a new picture. Fu et al. [38] decomposed RGB and infrared images into multiple sets of high-frequency and low-frequency features by training a neural network, then added the corresponding features of the two modalities to form a fused image. Zhao et al. [39] implement a fusion network for RGB and infrared images based on an auto-encoder (AE).…”
Section: Related Work
confidence: 99%
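The addition-based fusion the citing paper attributes to [38] — adding corresponding features of the two modalities — can be sketched with hypothetical feature maps. The counts follow the abstract (three high-frequency maps and one low-frequency map per modality); the shapes and the naive sum-reconstruction are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 64, 64

# Hypothetical decomposition outputs: three high-frequency maps and one
# low-frequency map per modality, as the abstract describes.
rgb_feats = [rng.standard_normal((H, W)) for _ in range(4)]
ir_feats = [rng.standard_normal((H, W)) for _ in range(4)]

# Fuse by adding the corresponding feature maps of the two modalities.
fused_feats = [a + b for a, b in zip(rgb_feats, ir_feats)]

# Naive reconstruction: sum all fused maps back into a single image.
# (The actual reconstruction network in the paper is learned.)
fused_image = np.sum(fused_feats, axis=0)
```

The per-pair addition keeps the fusion rule parameter-free, which matches the description above; everything downstream of the decomposition would be learned in the actual method.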