2017
DOI: 10.1117/1.jei.26.6.063004

Fast filtering image fusion

Abstract: Image fusion aims at exploiting complementary information in multimodal images to create a single composite image with extended information content. An image fusion framework is proposed for different types of multimodal images with fast filtering in the spatial domain. First, image gradient magnitude is used to detect contrast and image sharpness. Second, a fast morphological closing operation is performed on image gradient magnitude to bridge gaps and fill holes. Third, the weight map is obtained from the mu…
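The abstract names three spatial-domain steps (gradient magnitude, morphological closing, weight map). Below is a minimal sketch of that pipeline in Python with OpenCV; the Sobel gradient, the 5×5 elliptical closing kernel, and the normalized weighted average are illustrative assumptions, since the full weight-map construction is truncated in the abstract.

```python
# Hedged sketch of a gradient-magnitude / morphological-closing fusion
# pipeline, loosely following the steps named in the abstract. Parameter
# choices (kernel size, eps) are illustrative, not the paper's values.
import cv2
import numpy as np

def fuse_pair(img_a, img_b, close_ksize=5, eps=1e-6):
    """Fuse two grayscale float32 images in [0, 1] of identical shape."""
    weights = []
    for img in (img_a, img_b):
        # Step 1: gradient magnitude as a contrast/sharpness measure.
        gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
        grad = np.hypot(gx, gy)
        # Step 2: morphological closing to bridge gaps and fill holes.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                           (close_ksize, close_ksize))
        weights.append(cv2.morphologyEx(grad, cv2.MORPH_CLOSE, kernel))
    # Step 3: per-pixel weight map from the closed gradient maps
    # (a simple normalized weighted average, assumed here).
    w_a, w_b = weights
    total = w_a + w_b + eps
    return (w_a / total) * img_a + (w_b / total) * img_b

if __name__ == "__main__":
    # Hypothetical input file names, for illustration only.
    a = cv2.imread("modality_a.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255
    b = cv2.imread("modality_b.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255
    cv2.imwrite("fused.png", (fuse_pair(a, b) * 255).astype(np.uint8))
```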

Cited by 55 publications (23 citation statements)
References 37 publications

“…In order to prove the superiority of the proposed method, we compare it with 13 other image fusion methods here. The comparative methods include the adaptive sparse representation (ASR) [35], the convolutional sparse representation (CSR) [36], curvelet transform (CVT) [37], dual-tree complex wavelet transform (DTCWT) [8], the gradient transfer fusion (GTF) [38], the hybrid multi-scale decomposition (H-MSD) [39], the convolutional neural network (CNN) [40], Laplacian pyramid (LP) [5], the general framework based on multi-scale transform and sparse representation (MSSR) [34], the multi-resolution singular value decomposition (MSVD) [41], nonsubsampled contourlet transform (NSCT) [9], the visual saliency map and weighted least square optimization (WLS) [42], and the fast filtering image fusion (FFIF) [43]. All these methods used the default parameters given in their related papers.…”
Section: Experimental Results and Analysis
confidence: 99%
“…If one bit of the input image is altered, the NPCR and UACI values of the traditional methods CCAES [41], CDCP [42], and CHC [43] are close to their hypothetical values. The DNA-based methods, C-DNA [44] and HC-DNA [45], have better noise-attack performance than the previous works. Furthermore, the values are compared against the critical values as in [46,47].…”
confidence: 85%
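For context, NPCR (number of pixels change rate) and UACI (unified average changing intensity) are the standard differential-attack metrics these statements refer to; for 8-bit images their hypothetical (expected) values are about 99.6094% and 33.4635%. A minimal sketch of their textbook definitions follows; this is the standard formulation, not necessarily the exact implementation used in the cited papers.

```python
# Textbook NPCR/UACI computation for two cipher images produced from
# plaintexts that differ in a single bit. Standard definitions (assumed
# here); the cited papers may normalize differently.
import numpy as np

def npcr_uaci(c1: np.ndarray, c2: np.ndarray):
    """Both inputs are 8-bit images of identical shape."""
    d1 = c1.astype(np.int32)
    d2 = c2.astype(np.int32)
    # NPCR: percentage of pixel positions whose values differ.
    npcr = 100.0 * np.mean(d1 != d2)
    # UACI: mean absolute intensity difference, normalized by 255.
    uaci = 100.0 * np.mean(np.abs(d1 - d2)) / 255.0
    return npcr, uaci

# Ideal 8-bit values: NPCR ≈ 99.6094 %, UACI ≈ 33.4635 %.
```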
“…So, the presented cryptosystem attains high performance, with NPCR and UACI values close to their hypothetical values.

NPCR (%)
Method        Value          Critical-value tests
CCAES [41]    99.5697        Successful  Successful  Successful
CDCP [42]     100            Successful  Successful  Successful
CHC [43]      99.6605        Successful  Successful  Successful
C-DNA [44]    15.25 × 10⁻⁴   NA          NA          NA
HC-DNA [45]   59.7406        NA          NA          NA

UACI (%)
Method        Value          Critical-value tests
CCAES [41]    33.4767        Successful  Successful  Successful
CDCP [42]     33.5752        Successful  Successful  Successful
CHC [43]      33.4263        Successful  Successful  Successful
C-DNA [44]    8.97 × 10⁻⁶    NA          NA          NA
HC-DNA [45]   25.0487        NA          NA          NA
…”
confidence: 99%
“…They are pixel level, feature level, and decision level. Successful fusion methods based on morphological operators are discussed in [6,7].…”
Section: Introduction
confidence: 99%