2023
DOI: 10.1109/tcsvt.2023.3239627
A Cross-Scale Iterative Attentional Adversarial Fusion Network for Infrared and Visible Images

Cited by 38 publications (4 citation statements) · References 42 publications
“…Due to their powerful nonlinear fitting capabilities, neural networks have been widely applied to infrared and visible image fusion, achieving performance far superior to traditional methods. Current deep-learning-based infrared and visible image fusion methods can generally be divided into four types: CNN-based [33, 34, 35, 36], GAN-based [37, 38, 39, 40, 41], AE-based [1, 42, 43, 44, 45], and transformer-based [32, 46, 47, 48, 49] methods. CNN-based methods tend to focus on the design of loss functions, forcing the model to generate images that retain as much information from the source images as possible.…”
Section: Related Work
confidence: 99%
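
The excerpt above notes that CNN-based fusion methods lean on loss-function design to preserve source information. The sketch below illustrates one common pattern from this literature (an intensity term plus a gradient term, each matching the element-wise maximum of the two sources); it is a generic, hypothetical example, not the loss of the cited paper, and it assumes single-channel inputs of shape (N, 1, H, W).

```python
import torch
import torch.nn.functional as F

def fusion_loss(fused, ir, vis, alpha=1.0):
    """Illustrative CNN-based fusion loss: intensity + gradient fidelity.

    Assumes fused, ir, vis are (N, 1, H, W) tensors in [0, 1].
    """
    # Intensity term: pull the fused image toward the per-pixel maximum
    # of the two source images (keeps the more salient pixel).
    l_int = F.l1_loss(fused, torch.max(ir, vis))

    # Sobel kernels for horizontal/vertical gradients.
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=fused.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)

    def grad(img):
        gx = F.conv2d(img, kx, padding=1)
        gy = F.conv2d(img, ky, padding=1)
        return torch.abs(gx) + torch.abs(gy)

    # Texture term: match the element-wise maximum of the source gradients,
    # so the fused image inherits the stronger edges from either source.
    l_grad = F.l1_loss(grad(fused), torch.max(grad(ir), grad(vis)))
    return l_int + alpha * l_grad
```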
“…where ∇ denotes the Laplacian operator and max(·) refers to element-wise maximum selection. 4) Total loss function: our method jointly trains the visible image enhancement module and the image fusion module, so the total loss is the weighted sum of all the sub-losses mentioned above, expressed as:

L_{total} = \lambda_{illu} L_{illu} + \lambda_{col} L_{col} + \lambda_{tv} L_{tv} + \lambda_{str} L_{str} + \lambda_{grad} L_{grad} \quad (18)

where λ_t denotes the coefficient of the corresponding sub-loss. Empirically, we set λ_illu = 10, λ_col = 6, λ_tv = 200, λ_str = 1, λ_grad = 1.…”
Section: Loss Functions
confidence: 99%
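
The quoted passage describes a Laplacian-based gradient term and a weighted total loss (Eq. 18) with the stated coefficients. The sketch below shows how those two pieces could fit together; the gradient term follows the quoted description (fused Laplacian response matching the element-wise maximum of the source responses), while the sub-loss functions themselves are placeholders, since their definitions are not given in this excerpt.

```python
import torch
import torch.nn.functional as F

# 3x3 Laplacian kernel, matching the quoted use of the Laplacian operator.
LAPLACIAN = torch.tensor([[0., 1., 0.],
                          [1., -4., 1.],
                          [0., 1., 0.]]).view(1, 1, 3, 3)

def grad_loss(fused, ir, vis):
    """Gradient term as quoted: the fused image's Laplacian response should
    match the element-wise maximum of the two sources' responses."""
    lap = lambda x: F.conv2d(x, LAPLACIAN.to(x.device), padding=1)
    return F.l1_loss(lap(fused), torch.max(lap(ir), lap(vis)))

def total_loss(l_illu, l_col, l_tv, l_str, l_grad):
    """Eq. (18): weighted sum of sub-losses with the coefficients reported
    in the citing paper (lambda_illu=10, lambda_col=6, lambda_tv=200,
    lambda_str=1, lambda_grad=1). Each argument is a scalar loss tensor."""
    return 10 * l_illu + 6 * l_col + 200 * l_tv + 1 * l_str + 1 * l_grad
```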
“…To comprehensively evaluate the performance of our fusion model, we compare it with nine methods, including traditional fusion methods (CBF [15], LP [16]), CNN-based methods (IFCNN [1], SeAFusion [31], SEDRFuse [54]), GAN-based methods (FusionGAN [34], DDcGAN [35], CrossFuse [55]), and a transformer-based method (SwinFusion [2]). We first conduct comparison experiments on the TNO dataset, in which both the infrared and visible images are grayscale.…”
Section: A. Experimental Configurations
confidence: 99%