2023
DOI: 10.1016/j.inffus.2022.12.005
Boosting target-level infrared and visible image fusion with regional information coordination

Cited by 22 publications (6 citation statements) · References 38 publications
“…AT‐GAN2 proposes a generative adversarial network (GAN) with intensity attention modules and semantic transition modules to explore key information in the infrared and visible modalities. Han et al.1 propose to achieve target-level image fusion with a scene texture attention module. Moreover, some research has been done to facilitate high-level vision tasks such as object detection and semantic segmentation.…”
Section: Related Work (mentioning)
confidence: 99%
“…Detecting objects in fused visible and infrared images is a significant task for many applications such as traffic surveillance and military reconnaissance. However, most research in this area focuses on proposing better image fusion methods1,2 or designing better infrared object detectors.3,4 In other words, the significance of object detection on fused images is underrated.…”
Section: Introduction (mentioning)
confidence: 99%
“…Conversely, models trained on generic datasets do not perform well on content from a particular dataset. For instance, the proposed evaluation methods, based on human visual perceptual properties, have not been effective in evaluating color fusion image databases,6–8 which is largely due to the uniqueness of dual-band color fusion images themselves.9 Furthermore, many deep learning evaluation algorithms10–14 have proven to be quite effective in the field of no-reference image quality.…”
Section: Introduction (mentioning)
confidence: 99%
“…For instance, the proposed evaluation methods, based on human visual perceptual properties, have not been effective in evaluating color fusion image databases,6–8 which is largely due to the uniqueness of dual-band color fusion images themselves.9 Furthermore, many deep learning evaluation algorithms10–14 have proven to be quite effective in the field of no-reference image quality. However, it is difficult to achieve desirable results for color harmony assessment of dual-band color fused images.…”
Section: Introduction (mentioning)
confidence: 99%
“…With the development of deep learning, weakly supervised deep learning networks have been used more frequently in image fusion methods. These methods can be roughly summarized as convolutional-neural-network-based fusion methods [11–15] and generative-adversarial-network-based fusion methods [1,2,16–19].…”
Section: Introduction (mentioning)
confidence: 99%
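For context on the fusion methods surveyed in the statements above: before learning-based approaches, the simplest baseline was a pixel-level weighted average of the two modalities. The sketch below is purely illustrative (it is not the method of the cited paper, nor any of the CNN- or GAN-based methods referenced); the function name and the toy images are my own assumptions.

```python
import numpy as np

def weighted_fusion(ir, vis, alpha=0.5):
    """Pixel-wise weighted average of an infrared and a visible image.

    ir, vis : 2-D float arrays in [0, 1] with identical shapes.
    alpha   : weight given to the infrared image (illustrative parameter).
    """
    if ir.shape != vis.shape:
        raise ValueError("inputs must share the same shape")
    return alpha * ir + (1.0 - alpha) * vis

# Toy example: a bright infrared "target" on a textured visible background.
ir = np.zeros((4, 4))
ir[1:3, 1:3] = 1.0                              # hot target region
vis = np.linspace(0.0, 1.0, 16).reshape(4, 4)   # visible texture gradient
fused = weighted_fusion(ir, vis, alpha=0.6)
```

Target-level methods like the cited paper go beyond this global weighting by coordinating regional information, so that target regions keep infrared salience while the background keeps visible texture.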