2023
DOI: 10.1109/tmm.2021.3129609
Semantic-Supervised Infrared and Visible Image Fusion Via a Dual-Discriminator Generative Adversarial Network

Cited by 73 publications (18 citation statements)
References 53 publications
“…Experimental Settings 1) Datasets: We utilize the color and infrared image pairs from the MSRS [19], RoadScene [2], and M3FD [80] datasets to evaluate the proposed framework. We also compare our method with six state-of-the-art algorithms: FusionGAN [9], SDDGAN [29], GANMcC [20], SDNet [27], U2Fusion [2], and TarDAL [80]. SDNet and U2Fusion are fusion approaches based on CNN architectures, while FusionGAN, SDDGAN, GANMcC, and TarDAL are based on generative models and their variants.…”
Section: Methods (mentioning, confidence: 99%)
“…The existing methods usually convert the visible images stored in three channels (i.e., RGB channels) from RGB space to YCbCr space, and use the Y channel for fusion [27], [2]. After the single-channel fused image is generated, it needs to be converted to a three-channel image through post-processing [28], [29]. Since not all channels are presented in the input data, it is hard to construct the multi-channel distribution and extract multi-channel complementary information, resulting in color distortion.…”
mentioning (confidence: 99%)
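The Y-channel workflow described in the excerpt above can be sketched as follows. This is an illustrative example only, not code from any of the cited methods: it uses standard ITU-R BT.601 conversion coefficients, and the fusion rule is a placeholder average standing in for a learned network.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an RGB image (H, W, 3), floats in [0, 1], to YCbCr (BT.601)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 0.5
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 0.5
    return np.stack([y, cb, cr], axis=-1)

def ycbcr_to_rgb(ycbcr):
    """Inverse BT.601 conversion back to RGB."""
    y, cb, cr = ycbcr[..., 0], ycbcr[..., 1] - 0.5, ycbcr[..., 2] - 0.5
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)

# Typical single-channel pipeline as described in the text:
# 1. take the Y channel from the visible image,
# 2. fuse it with the single-channel infrared image
#    (placeholder average here; real methods use a learned network),
# 3. recombine with the original Cb/Cr and convert back to three channels.
visible = np.random.rand(8, 8, 3)
infrared = np.random.rand(8, 8)              # single-channel IR image
ycbcr = rgb_to_ycbcr(visible)
fused = ycbcr.copy()
fused[..., 0] = 0.5 * (ycbcr[..., 0] + infrared)  # placeholder fusion rule
fused_rgb = ycbcr_to_rgb(fused)              # three-channel fused result
```

Because Cb/Cr are copied unchanged from the visible input rather than fused, chrominance information from only one modality survives, which is exactly the source of the color distortion the excerpt points out.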
“…To alleviate this problem, they specifically designed two discriminators to realize fusion balance, and exploited DDcGAN [32] to implement multiresolution fusion tasks. In addition, Zhou et al [33] developed SDDGAN where an information quantity discrimination block was designed to supervise semantic information of source images under the framework of dual-discriminator generative adversarial network. Ma et al [34] translated image fusion into multi-classification constraints, namely GANMcC, which proposed two multi-classification discriminators to generate a more balanced result.…”
Section: B. GAN-Based Fusion Methods (mentioning, confidence: 99%)
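The dual-discriminator balance described above can be illustrated numerically. This is a toy numpy-only sketch under loud assumptions: the two "discriminators" are hypothetical linear scorers, not the convolutional networks used by DDcGAN or SDDGAN; the point is only how the generator loss sums two adversarial terms, one per source modality.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Stand-ins for D_ir and D_vis: tiny linear scorers over flattened 8x8 patches.
# Hypothetical weights; real methods learn convolutional discriminators.
w_ir, w_vis = rng.normal(size=64), rng.normal(size=64)

def d_ir(patch):   # probability the patch looks like a real infrared patch
    return sigmoid(patch.ravel() @ w_ir)

def d_vis(patch):  # probability the patch looks like a real visible patch
    return sigmoid(patch.ravel() @ w_vis)

fused = rng.random((8, 8))  # a fused patch produced by the generator

# Generator objective: fool BOTH discriminators at once, so the fused image
# must retain infrared characteristics and visible characteristics together.
g_loss = -np.log(d_ir(fused)) - np.log(d_vis(fused))

# Each discriminator: score its own real modality high and the fused patch low.
real_ir, real_vis = rng.random((8, 8)), rng.random((8, 8))
d_ir_loss  = -np.log(d_ir(real_ir))   - np.log(1.0 - d_ir(fused))
d_vis_loss = -np.log(d_vis(real_vis)) - np.log(1.0 - d_vis(fused))
```

Because neither adversarial term can be driven to zero without raising the other, the generator is pushed toward the balanced fusion the cited papers aim for, rather than collapsing onto a single source modality.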
“…Therefore, Li et al introduced a multiscale attention mechanism in the GAN-based fusion framework [29] to encourage the generators and discriminators to focus more on the most distinguishing regions. Moreover, Zhou et al [30] developed a dual-discriminator generative adversarial network (SDDGAN) where an information quantity discrimination (IQD) block was designed to guide the image fusion progress and supervise semantic information of source images in the fused image. Reference [31] designed a unified gradient and intensity-discriminator generative adversarial network for gradient and intensity retention in different image-fusion tasks.…”
Section: Related Work (mentioning, confidence: 99%)