2021
DOI: 10.1117/1.oe.60.12.123102
Unified framework based on multiscale transform and feature learning for infrared and visible image fusion

Cited by 5 publications (5 citation statements)
References 0 publications
“…This mechanism helps humans suppress useless information and quickly focus on key areas. The attention mechanism in computer vision is analogous to human attention, concentrating on key regions in images [27]. Deep-learning-based visual attention is implemented through a mask mechanism: weights mark the important features of the image, and the attention map is formed through neural network learning.…”
Section: Artistic Image Style Conversion Based On Introduced Attentio... (mentioning)
confidence: 99%
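The mask mechanism described above can be sketched minimally: per-pixel weights (learned by a network in practice) re-weight a feature map so that salient regions dominate. This is a hypothetical illustration using a softmax over feature magnitudes, not the cited paper's exact model:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the whole array.
    e = np.exp(x - x.max())
    return e / e.sum()

def spatial_attention(feature_map):
    """Mask-style spatial attention sketch: weights sum to 1 over all
    pixels and emphasize high-magnitude features. In a real network the
    mask would be produced by learned layers, not a fixed softmax."""
    mask = softmax(feature_map)
    return feature_map * mask, mask

fmap = np.array([[0.1, 2.0],
                 [0.3, 0.2]])
attended, mask = spatial_attention(fmap)
```

The mask assigns its largest weight to the strongest activation (here at position (0, 1)), which is the "focus on key regions" behavior the excerpt describes.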
“…Early deep learning methods [16][17][18][19][20] showcased the impressive feature extraction capabilities of neural networks but struggled to effectively extract multi-scale features. To remove the need for human intervention in designing fusion strategies, end-to-end methods [21][22][23] emerged, treating the task as a whole and directly mapping input to output. In image fusion, Ma et al. 24 first introduced a generative adversarial network (GAN), which adjusts the visible information in a fused image using a discriminator.…”
Section: Introduction (mentioning)
confidence: 99%
“…$\nabla F_y(i,j) = F(i,j) - F(i,j+1)$ (21)…”
unclassified
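Equation (21) is a forward finite difference along the $j$ index. Assuming $j$ indexes the second (column) axis, a minimal sketch with zero padding at the boundary:

```python
import numpy as np

def vertical_gradient(F):
    """Forward difference per Eq. (21): grad(i, j) = F(i, j) - F(i, j+1).
    Assumes j indexes axis 1; the last column, which has no j+1 neighbor,
    is padded with zeros."""
    grad = np.zeros_like(F, dtype=float)
    grad[:, :-1] = F[:, :-1] - F[:, 1:]
    return grad

img = np.array([[1.0, 3.0, 2.0],
                [4.0, 0.0, 5.0]])
print(vertical_gradient(img))
# row 0 -> [-2, 1, 0], row 1 -> [4, -5, 0]
```

Gradient maps like this are commonly used in fusion metrics and losses to measure how much edge detail from the source images survives in the fused result.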
“…Visible (low-light-level) images contain rich scene detail and texture information, while infrared images can reflect the temperature information of an object and are less affected by weather and lighting conditions. 1 Visible (low-light-level) and infrared color night vision fusion technology combines the advantages of the two bands to produce false color fusion images with obvious hot targets and rich details. 2 The fusion images are suitable for human observation and have wide applications in military target detection, remote sensing, situation awareness, and many other fields. 3,4 …”
Section: Introduction (mentioning)
confidence: 99%
“…1 Visible (low-light-level) and infrared color night vision fusion technology combines the advantages of the two bands to produce false color fusion images with obvious hot targets and rich details. 2 The fusion images are suitable for human observation and have wide applications in military target detection, remote sensing, situation awareness, and many other fields. 3,4 However, using different color fusion methods to fuse the same set of source images may obtain quite different fusion effects, which directly affects the observer's understanding of the image.…”
Section: Introduction (mentioning)
confidence: 99%