2024
DOI: 10.1016/j.knosys.2024.111658

GTMFuse: Group-attention transformer-driven multiscale dense feature-enhanced network for infrared and visible image fusion

Liye Mei,
Xinglong Hu,
Zhaoyi Ye
et al.

Cited by 12 publications (1 citation statement)
References 37 publications
“…The structure of the SPFM is illustrated in Figure 4. Inspired by [29][30][31], the SPFM first utilized the salient semantic map as prior knowledge to weight the input information, and then employed self-attention to achieve context awareness, guiding the model to perceive the spatial distribution of salient objects. Through this process, SPFM was able to allocate weights for different regions of the image, injecting additional semantic information into the region of salient targets on the input feature map, and preserving this crucial information as much as possible.…”
Section: Semantic Perception Fusion Module (SPFM)
Citation type: mentioning
Confidence: 99%
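The quoted statement describes a two-step mechanism: weight input features by a salient semantic map used as prior knowledge, then apply self-attention for context awareness. A minimal NumPy sketch of that pattern is below; the function name `semantic_prior_attention`, the flattened feature layout, and the single-head projections are illustrative assumptions, not the paper's actual SPFM implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def semantic_prior_attention(features, salient_map, Wq, Wk, Wv):
    """Sketch of prior-weighted self-attention (illustrative, not the paper's code).

    features:    (N, d) array, N flattened spatial positions x d channels
    salient_map: (N,) saliency prior in [0, 1], one weight per position
    Wq, Wk, Wv:  (d, d) query/key/value projection matrices
    """
    # Step 1: prior weighting — emphasize salient regions of the feature map
    weighted = features * salient_map[:, None]
    # Step 2: scaled dot-product self-attention for spatial context awareness
    q, k, v = weighted @ Wq, weighted @ Wk, weighted @ Wv
    attn = softmax(q @ k.T / np.sqrt(features.shape[1]), axis=-1)
    return attn @ v
```

With this structure, positions with near-zero saliency contribute little to queries, keys, and values, so attention naturally concentrates context aggregation on salient targets.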