2020
DOI: 10.1016/j.ijleo.2020.165120

Infrared and visible image fusion with supervised convolutional neural network

Cited by 24 publications (10 citation statements)
References 24 publications

“…In [32], Li et al. used ResNet and zero-phase component analysis, which achieves good fusion performance. More generally, many convolutional neural network (CNN)-based methods have been proposed for infrared and visible image fusion [33][34][35][36]. Unlike CNN and ResNet models, Ma et al. [37] adopted DDcGAN (dual-discriminator conditional generative adversarial network) to obtain fusion outputs with enhanced targets, which facilitates human understanding of the scene.…”
Section: Related Work (mentioning, confidence: 99%)
“…The feature dimensionality is not proportional to the descriptive power of the features: increasing the dimension may lengthen classification time, and uninformative dimensions may reduce accuracy because of interference [15][16].…”
Section: Neuron Modeling (mentioning, confidence: 99%)
“…Li et al. [14] decomposed the source images into a base part and a detail part, fused the base parts directly with a weighted-average rule, used a deep learning framework to extract features for fusing the detail parts, and finally reconstructed the fused image. Going beyond feature extraction alone, Wen-Bo An et al. [15] constructed a supervised convolutional network to fully extract the complementary information of infrared and visible images, and the resulting fused image better retains the details of the source images. End-to-end image fusion methods are also developing continuously.…”
Section: Introduction (mentioning, confidence: 99%)
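The base/detail pipeline attributed to Li et al. [14] in this excerpt can be sketched compactly. This is a minimal illustration, not the cited implementation: the Gaussian decomposition, the equal base weights, and the max-absolute rule standing in for the deep-feature fusion of the detail parts are assumptions made for the example.

import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(img, sigma=5.0):
    # Low-pass filtering gives the base part; the residual is the detail part.
    base = gaussian_filter(img.astype(np.float64), sigma=sigma)
    detail = img.astype(np.float64) - base
    return base, detail

def fuse(ir, vis):
    # ir and vis are registered grayscale arrays of the same shape.
    ir_base, ir_detail = decompose(ir)
    vis_base, vis_detail = decompose(vis)
    # Base parts: plain weighted average, as described in the excerpt.
    fused_base = 0.5 * ir_base + 0.5 * vis_base
    # Detail parts: the cited work fuses deep-learning features; a max-absolute
    # selection stands in for that step here.
    fused_detail = np.where(np.abs(ir_detail) >= np.abs(vis_detail), ir_detail, vis_detail)
    return np.clip(fused_base + fused_detail, 0, 255).astype(np.uint8)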