2022
DOI: 10.1016/j.inffus.2022.07.013

UIFGAN: An unsupervised continual-learning generative adversarial network for unified image fusion

Cited by 35 publications (5 citation statements)
References 42 publications
“…In this network, a full-scale skip-connected generator is applied to extract shallow features, and the discriminator uses two Markov discriminators to fully retain the valid information in the infrared and visible images by playing adversarial games with the generator. In addition, a novel intensity-masking generative adversarial network (IM GAN) [26] and an unsupervised continual-learning generative adversarial network (UIFGAN) [27] were designed to complement multimodal image information; however, they fail to integrate the extracted features efficiently. Xu et al. [12] introduced attention mechanisms to the fusion network for feature extraction, while Liu [28] proposed an attention-guided and wavelet-constrained generative adversarial network for infrared and visible image fusion (AWFGAN) based on generative adversarial nets (GANs), which better preserves important information from the raw images.…”
Section: Relevant Work
confidence: 99%
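The "Markov discriminator" mentioned in the statement above is the PatchGAN idea: instead of a single real/fake verdict for the whole image, the discriminator emits one score per local patch, so the adversarial game constrains local texture rather than only global statistics. A minimal sketch of that output structure, where `score_fn` stands in for the learned convolutional scorer (purely illustrative, not the cited network):

```python
import numpy as np

def patch_decision_map(image, score_fn, patch=16, stride=16):
    """Markov/PatchGAN-style discriminator output: one score per patch.

    `score_fn` maps a (patch, patch) array to a scalar; in a real
    discriminator it would be a small convolutional network.
    """
    h, w = image.shape
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    out = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            y, x = r * stride, c * stride
            out[r, c] = score_fn(image[y:y + patch, x:x + patch])
    return out  # an N x M grid of local real/fake decisions
```

Averaging (or summing) this grid gives the adversarial loss, so every patch of the fused image is pushed toward the statistics of the infrared and visible sources.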
“…In terms of the Neck network, the target detection network adds an FPN structure between the Backbone and the final Head output layer. The Head output layer uses the same anchor-box mechanism as YOLOv4 [15]; the main improvements concern the loss function used during training (GIOU_Loss) and the DIOU_nms used for prediction-box filtering.…”
Section: YOLOv5 Detection
confidence: 99%
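The GIOU_Loss named above scores bounding-box regression with the generalized IoU, which, unlike plain IoU, still produces a useful signal when the predicted and target boxes do not overlap (it penalizes the empty area of the smallest enclosing box). A minimal re-derivation with boxes as `(x1, y1, x2, y2)` corner tuples (illustrative sketch, not the YOLOv5 source):

```python
def giou(box_a, box_b):
    """Generalized IoU of two axis-aligned boxes (x1, y1, x2, y2)."""
    xa1, ya1, xa2, ya2 = box_a
    xb1, yb1, xb2, yb2 = box_b
    # Intersection area (zero if the boxes are disjoint).
    iw = max(0.0, min(xa2, xb2) - max(xa1, xb1))
    ih = max(0.0, min(ya2, yb2) - max(ya1, yb1))
    inter = iw * ih
    union = ((xa2 - xa1) * (ya2 - ya1)
             + (xb2 - xb1) * (yb2 - yb1) - inter)
    iou = inter / union
    # Smallest enclosing box; its "wasted" area is the GIoU penalty.
    c_area = ((max(xa2, xb2) - min(xa1, xb1))
              * (max(ya2, yb2) - min(ya1, yb1)))
    return iou - (c_area - union) / c_area

def giou_loss(box_a, box_b):
    # Training loss: 0 for identical boxes, up to 2 in the worst case.
    return 1.0 - giou(box_a, box_b)
```

For identical boxes the loss is 0; for far-apart boxes GIoU goes negative, so the loss stays informative where plain `1 - IoU` would saturate at 1.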
“…References [18-20] captured the multilevel features of the source images via residual learning. Moreover, modern GAN-based approaches [21-30] exploit multi-granularity convolution kernels at the same feature level, yielding different receptive fields and in turn improving fusion performance. For example, each network layer of the feature extractors in Refs.…”
Section: Introduction
confidence: 99%
“…For example, each network layer of the feature extractors in Refs. [21-24] utilized convolution kernels of different sizes to extract useful information from the source images. Li [25,26] introduced a multi-grained attention network to enable the fusion model to perceive the target region or detail information of the source image at multiple scales.…”
Section: Introduction
confidence: 99%
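The multi-granularity kernels described in these statements can be pictured as parallel branches applying different kernel sizes to the same input, each giving a different receptive field, with the branch outputs stacked as feature maps. A toy sketch using fixed mean filters in place of learned kernels (illustrative only; real extractors learn the weights):

```python
import numpy as np

def conv2d_same(img, kernel):
    """Naive 'same'-size 2D correlation for a single-channel image."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def multi_scale_features(img, sizes=(3, 5, 7)):
    """One branch per kernel size -> stacked feature maps.

    Each branch sees a different receptive field, mimicking the
    multi-granularity extractors cited above (mean filters stand in
    for learned convolution kernels).
    """
    maps = []
    for k in sizes:
        kernel = np.ones((k, k)) / (k * k)
        maps.append(conv2d_same(img, kernel))
    return np.stack(maps, axis=0)  # shape: (len(sizes), H, W)
```

A fusion network would then concatenate these maps along the channel axis so later layers can weigh fine detail (small kernels) against coarse structure (large kernels).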