2019
DOI: 10.1109/access.2019.2897320

Poisson Reconstruction-Based Fusion of Infrared and Visible Images via Saliency Detection

Abstract: Saliency-based methods have been widely used in the fusion of infrared (IR) and visible (VIS) images, as they can highlight the salient object region and preserve detailed background information simultaneously. However, most existing methods ignore the salient information in the VIS image, or they fail to highlight the boundaries of objects, which leaves the final saliency map incomplete and the edges of the object blurred. To address the above-mentioned issues, we propose a novel IR and VIS image fusion alg…

Cited by 17 publications (6 citation statements)
References 64 publications
“…If the cruel wolf gets a whiff of prey, i.e., Y_i > Y_L, set Y_L = Y_i, and the cruel wolf replaces the alpha wolf to perform the summoning behavior. If Y_i ≤ Y_L, the cruel wolf continues attacking until d_is ≤ d_near; (3.4) Update the positions of the intelligent wolves that participate in the beleaguering behavior according to formula (22), and then execute the beleaguering behavior…”
Section: Fusion Scheme
confidence: 99%
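The quoted passage outlines the attacking and beleaguering steps of a Wolf Pack Algorithm used in the citing paper's fusion scheme. Formula (22) is not reproduced above, so the minimal sketch below assumes the commonly published beleaguering update x_i ← x_i + λ · U(−1, 1) · |x_L − x_i| with greedy acceptance; the names beleaguer, f, and x_lead are illustrative and not from the source.

```python
import numpy as np

def beleaguer(wolves, x_lead, lam, f, rng):
    """One beleaguering round of a Wolf Pack Algorithm (hypothetical
    form of formula (22): x <- x + lam * U(-1, 1) * |x_lead - x|)."""
    for i, x in enumerate(wolves):
        step = lam * rng.uniform(-1.0, 1.0, size=x.shape) * np.abs(x_lead - x)
        candidate = x + step
        # Greedy acceptance: keep the move only if the prey scent improves.
        if f(candidate) > f(x):
            wolves[i] = candidate
    return wolves

# Usage sketch: maximize a toy scent function over 2-D positions.
rng = np.random.default_rng(0)
wolves = [rng.uniform(-5, 5, size=2) for _ in range(10)]
f = lambda x: -np.sum(x ** 2)          # prey scent (higher is better)
x_lead = max(wolves, key=f)            # current lead (alpha) wolf
wolves = beleaguer(wolves, x_lead, lam=0.5, f=f, rng=rng)
```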
“…The main related methods include the Wavelet Transform (WT) [5], Curvelet Transform (CT) [6], Non-Subsampled Contourlet Transform (NSCT) [7], Non-Subsampled Shearlet Transform (NSST) [8], [9], Directionlet Transform [10], empirical mode decomposition [11], internal generative mechanism [12], multiresolution singular value decomposition [13], Tetrolet Transform (TT) [14], Top-hat transform [15], Sparse Representation (SR) [16], and Total Variation (TV) decomposition [17]-[19]. Spatial-domain methods directly extract useful information for fusion in the spatial domain, without a decomposition or reconstruction step, and include saliency-based and subspace-based fusion methods [8], [20]-[22]…”
Section: Introduction
confidence: 99%
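As a concrete instance of the transform-domain family listed above, here is a minimal sketch of wavelet-based fusion (WT [5]): average the approximation band and keep the larger-magnitude coefficient in each detail band. It assumes two same-sized grayscale arrays and the PyWavelets package; the fusion rules are a generic illustration, not the surveyed papers' exact schemes.

```python
import numpy as np
import pywt

def wavelet_fusion(ir, vis, wavelet="db2", level=2):
    """Fuse two grayscale images with a max-abs rule on wavelet coefficients."""
    c_ir = pywt.wavedec2(ir.astype(np.float64), wavelet, level=level)
    c_vis = pywt.wavedec2(vis.astype(np.float64), wavelet, level=level)
    fused = [(c_ir[0] + c_vis[0]) / 2.0]   # average the approximation band
    for d_ir, d_vis in zip(c_ir[1:], c_vis[1:]):
        # In each detail band (H, V, D), keep the coefficient with
        # larger magnitude, which tends to preserve edges and texture.
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(d_ir, d_vis)))
    return pywt.waverec2(fused, wavelet)
```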
“…The contour information is crucial for characterizing a defect, so the fusion algorithm must preserve not only as much information as possible from each IR reconstruction image but also the edge contour information of the defects in each reconstruction. The image fusion algorithm based on guided filtering [31] is therefore used to fuse the infrared reconstruction images. In this way, the above method realizes fusion of IR reconstruction images for homologous heterogeneous coupled defects, for multiple heterologous heterogeneous reconstructions, and for multiple heterologous homogeneous reconstructions…”
Section: Introduction
confidence: 99%
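A condensed sketch of guided-filtering-based fusion in the spirit of [31]: split each image into base and detail layers, build Laplacian-saliency weight maps, and refine them with an edge-aware guided filter before blending. It assumes OpenCV with the ximgproc contrib module; the mean-filter decomposition and parameter values are assumptions, not the cited algorithm verbatim.

```python
import cv2
import numpy as np

def guided_filter_fusion(img1, img2, r_base=45, eps_base=0.3,
                         r_det=7, eps_det=1e-6):
    """Two-scale fusion with guided-filter-refined weight maps."""
    a = img1.astype(np.float32) / 255.0
    b = img2.astype(np.float32) / 255.0
    # Two-scale decomposition: base = mean filter, detail = residual.
    base_a, base_b = cv2.blur(a, (31, 31)), cv2.blur(b, (31, 31))
    det_a, det_b = a - base_a, b - base_b
    # Saliency: absolute Laplacian smoothed by a Gaussian.
    sal_a = cv2.GaussianBlur(np.abs(cv2.Laplacian(a, cv2.CV_32F)), (11, 11), 5)
    sal_b = cv2.GaussianBlur(np.abs(cv2.Laplacian(b, cv2.CV_32F)), (11, 11), 5)
    p = (sal_a >= sal_b).astype(np.float32)   # binary weight map
    # Edge-aware refinement: large radius/eps for base, small for detail.
    w_base = np.clip(cv2.ximgproc.guidedFilter(a, p, r_base, eps_base), 0, 1)
    w_det = np.clip(cv2.ximgproc.guidedFilter(a, p, r_det, eps_det), 0, 1)
    fused = (w_base * base_a + (1 - w_base) * base_b
             + w_det * det_a + (1 - w_det) * det_b)
    return np.clip(fused * 255, 0, 255).astype(np.uint8)
```

Refining the binary map with a large-radius guided filter smooths the base-layer weights across flat regions, while the small-radius pass keeps the detail-layer weights aligned with object contours, which is what preserves the defect edges the passage emphasizes.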
“…Many different approaches have been proposed for infrared object tracking, such as saliency extraction [9], a multiscale patch-based contrast measure with a temporal variance filter [14], feature learning and fusion with reliability-weight estimation based on nonnegative matrix factorization [15], Poisson reconstruction and Dempster-Shafer theory [16], three-dimensional scalar fields [17], a double-layer region proposal network (RPN) [18], Siamese convolutional networks [19], a mixture of Gaussians with modified flux density [20], spatial-temporal total variation regularization with weighted tensors [21], a two-stage U-skip context aggregation network [22], histogram similarity maps based on the Epanechnikov kernel function [23], the quaternion discrete cosine transform [24], non-convex optimization [25], the Mexican-hat distribution of pixels [26], and Schatten regularization with reweighted sparse enhancement [27]…”
Section: Introduction
confidence: 99%