Infrared and visible image fusion using multiscale directional nonlocal means filter
2015. DOI: 10.1364/ao.54.004299

Cited by 35 publications (5 citation statements). References 26 publications.
“…Cross bilateral filtering (CBF) [15], guided filtering, mean-median filtering [23], Gaussian and bilateral filtering [30], isotropic diffusion, and anisotropic diffusion [14] Choose-max [14], average gradient, local energy, and window based [34] Introduces reversal artifacts and halo effects…”
Section: Edge Preserving-based Filtering Methods
Confidence: 99%
“…For decades, numerous signal processing algorithms have been contemplated in image fusion to extract features such as multi-scale transform-based methods [10,11], sparse-representation-based [12,13], edge-preserving filtering [14,15], neural network [16], subspace-based [17] and hybrid-based methods [3]. During the last five years, deep learning (DL)-based [18] methods have received painstaking attention in image fusion due to their outstanding feature extraction from source images.…”
Section: Introduction
Confidence: 99%
“…The more widely used methods in the field of infrared and visible image fusion can be roughly classified into MST-based methods [4], sparse representation-based methods [5], spatial domain-based methods [6], and deep learning-based methods [7]. At present, the most researched and applied methods are MST-based methods, including wavelet transform [8], Laplacian pyramid transform [9], nonsubsampled shear wave transform [10], and nonsubsampled contourlet transform [11].…”
Section: Introduction
Confidence: 99%
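The MST-based pipeline described in the citation above (multiscale decomposition, a per-layer fusion rule such as choose-max, then reconstruction) can be sketched as follows. This is a minimal illustration, not the paper's method: it uses an undecimated Gaussian decomposition in plain NumPy in place of the directional nonlocal means filter, with the choose-max rule on detail layers and averaging on the base layer. All function names here are illustrative.

```python
import numpy as np

def smooth(img, sigma):
    """Separable Gaussian smoothing with edge-replicated borders."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, radius, mode='edge')
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode='valid'), 0, pad)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode='valid'), 1, out)
    return out

def decompose(img, sigmas=(1.0, 2.0, 4.0)):
    """Undecimated multiscale split: detail layers plus a base layer.

    By construction, img == sum(details) + base (up to float precision).
    """
    details, cur = [], img.astype(float)
    for s in sigmas:
        low = smooth(cur, s)
        details.append(cur - low)  # band-pass detail at this scale
        cur = low
    return details, cur

def fuse(ir, vis, sigmas=(1.0, 2.0, 4.0)):
    """Fuse two registered grayscale images of the same shape."""
    d_ir, b_ir = decompose(ir, sigmas)
    d_vis, b_vis = decompose(vis, sigmas)
    fused = (b_ir + b_vis) / 2.0          # average rule on the base layer
    for a, b in zip(d_ir, d_vis):
        # choose-max rule: keep the coefficient with larger magnitude
        fused += np.where(np.abs(a) >= np.abs(b), a, b)
    return fused
```

Because the decomposition is exactly invertible, fusing an image with itself reconstructs that image; swapping in a different multiscale transform or fusion rule only changes `decompose` and the per-layer combination step.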
“…However, if the color and brightness of an object concerned are different from the background only slightly, it can be difficult to recognize the object visually, whereas infrared radiation (IR) images captured over the object may help to provide a precise representation of the target. Hence, the fusion of visible and IR images can present more information for both human inspection and computer-based image analysis with one integrated image [3]. Indeed, image fusion has recently been utilized as effective tools in object recognition [4], remote sensing [5], target tracking [6], surveillance [7], and defense applications that require the use of multiple images of a scene.…”
Section: Introduction
Confidence: 99%