2023
DOI: 10.3390/s23167205

DPACFuse: Dual-Branch Progressive Learning for Infrared and Visible Image Fusion with Complementary Self-Attention and Convolution

Abstract: Infrared and visible image fusion aims to generate a single fused image that not only contains rich texture details and salient objects, but also facilitates downstream tasks. However, existing works mainly focus on learning different modality-specific or shared features, and ignore the importance of modeling cross-modality features. To address these challenges, we propose Dual-branch Progressive learning for infrared and visible image fusion with a complementary self-Attention and Convolution (DPACFuse) network…
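To make the dual-branch idea in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of one block that pairs a convolutional path (local texture and edges) with a multi-head self-attention path (global, long-range context) and fuses the two with a 1×1 convolution. The class name `DualBranchBlock`, the layer choices, and the hyperparameters are illustrative assumptions only and do not reproduce the authors' DPACFuse architecture.

```python
# Hypothetical sketch (not the authors' released code): a single dual-branch
# block combining a convolution branch and a self-attention branch.
import torch
import torch.nn as nn


class DualBranchBlock(nn.Module):
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # Convolution branch: captures local texture and edge information.
        self.conv_branch = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Self-attention branch: models long-range dependencies across the image.
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)
        # 1x1 convolution fuses the concatenated branch outputs.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        local_feat = self.conv_branch(x)
        # Flatten spatial dimensions into a token sequence for self-attention.
        tokens = x.flatten(2).transpose(1, 2)            # (B, H*W, C)
        attn_out, _ = self.attn(tokens, tokens, tokens)
        global_feat = self.norm(attn_out).transpose(1, 2).reshape(b, c, h, w)
        # Fuse local and global features, with a residual connection.
        return self.fuse(torch.cat([local_feat, global_feat], dim=1)) + x


if __name__ == "__main__":
    feat = torch.randn(1, 32, 64, 64)   # e.g., features from an infrared image
    out = DualBranchBlock(32)(feat)
    print(out.shape)                     # torch.Size([1, 32, 64, 64])
```

In a fusion network of this kind, infrared and visible feature maps would each pass through such blocks before a cross-modality fusion stage; the sketch shows only the single-branch building block.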

Cited by 3 publications (2 citation statements)
References: 64 publications
“…These methods can be mainly categorized into deep learning-based approaches and conventional methods. Deep learning methods are becoming increasingly popular in the fusion task due to their ability to extract high-level semantic features [5,7,10,12], but there is still a need for improvement in preserving complex and irregular edges within images. Infrared and visible images, coming from the same scene, inherently share statistical co-occurrent information, such as background and large-scale features.…”
Section: Introduction
confidence: 99%
“…With the substantial value of infrared and visible light image fusion across various applications, there has been significant scholarly interest. The rise of deep learning has spurred numerous fusion methods, including those based on Convolutional Neural Networks [9,10,11,12], Autoencoders [13,14,15,16], and Generative Adversarial Networks [17,18,19,20]. While these methods have achieved commendable results, several challenges remain.…”
Section: Introduction
confidence: 99%