2023
DOI: 10.1016/j.inffus.2023.02.011

Feature dynamic alignment and refinement for infrared–visible image fusion: Translation robust fusion

Cited by 25 publications (9 citation statements)
References 62 publications
“…In the field of computer vision, Vision Transformers (ViT) have demonstrated significant capabilities across various tasks such as image fusion [49,50] and pedestrian detection [51], particularly excelling in multispectral detection tasks [52][53][54][55], where they can focus on important features scattered across different spectral bands. Their self-attention mechanism's ability to model long-range dependencies and capture global context is especially valuable.…”

Section: Vision Transformer
confidence: 99%
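The excerpt above credits the self-attention mechanism with modeling long-range dependencies and global context. As a minimal NumPy sketch of scaled dot-product self-attention (a generic illustration, not the cited papers' implementations; the matrices `Wq`, `Wk`, `Wv` and the toy sizes are assumptions for the example):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a token sequence X of shape (n, d)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # (n, n): every token attends to every other token
    A = softmax(scores, axis=-1)      # attention weights; each row sums to 1
    return A @ V, A

rng = np.random.default_rng(0)
n, d = 4, 8                           # 4 toy tokens of dimension 8
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out, A = self_attention(X, Wq, Wk, Wv)
print(out.shape)                      # (4, 8)
```

Because the `(n, n)` score matrix connects every token pair directly, dependencies between distant image patches (or spectral bands) are captured in a single step, which is the "global context" property the excerpt highlights.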
“…Currently, attention mechanisms have been extensively employed in many computer vision tasks, including semantic segmentation (Kirillov et al 2023), image fusion (Li et al 2023), and so on. In the field of correspondence pruning, introducing attention mechanisms helps focus on inlier information and suppress redundant information, but still requires appropriate modifications.…”

Section: Attention Mechanism in Correspondence Pruning
confidence: 99%
“…With the development of sensor technology, different types of sensors can provide a variety of modal information for semantic segmentation tasks. Information interaction between RGB and other modalities can improve the performance of multimodal tasks [21,[41][42][43][44][45][46][47][48]. Specifically, Li et al [21,41,42] and Xiao et al [44] improved the quality of infrared and visible image fusion through cross-modal interaction between RGB and infrared images.…”

Section: Cross-modal Interaction
confidence: 99%