2018
DOI: 10.3788/gzxb20184706.0610002

Fusion of Infrared and Visible Images Based on Non-subsampled Contourlet Transform and Intuitionistic Fuzzy Set

Cited by 7 publications (3 citation statements)
References 0 publications
“…A Gaussian membership function is used to represent the membership degree of the coefficients, and the final low-frequency image is fused according to the membership degree after defuzzification. The membership and non-membership functions are defined as in [33], built from the average value and standard deviation of the coefficients together with two Gaussian adjustment parameters.…”
Section: Proposed Methods (mentioning)
confidence: 99%
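The symbols of those membership and non-membership expressions are not reproduced in the excerpt above (they are given in reference [33] of the citing paper). Purely as an illustrative sketch, a Gaussian membership/non-membership pair of the kind described, written with an assumed coefficient mean \bar{x}, standard deviation \sigma, and adjustment parameters k_1 and k_2, could take the form:

    \mu(x) = \exp\left( -\frac{(x - \bar{x})^2}{2\,(k_1 \sigma)^2} \right)
    \nu(x) = 1 - \exp\left( -\frac{(x - \bar{x})^2}{2\,(k_2 \sigma)^2} \right), \qquad k_2 \ge k_1

With k_2 \ge k_1 this pair satisfies \mu(x) + \nu(x) \le 1, leaving a non-negative hesitation degree for the intuitionistic fuzzy set; the exact expressions and parameter choices of the cited method may differ.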
“…Based on their different feature extraction and fusion strategies, these methods can be classified into conventional fusion methods and end-to-end deep learning methods. According to their hand-crafted feature decomposition and generation rules, conventional fusion methods mainly consist of multiscale transform-based [10], sparse representation-based [11][12][13], saliency-based [14][15][16][17], fuzzy set-based [18][19][20], and hybrid-based [21][22][23] methods. To summarize, conventional image fusion methods typically comprise three primary stages.…”
Section: Introduction (mentioning)
confidence: 99%
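The three stages are not named in the excerpt; in the conventional-fusion literature they are usually decomposition, coefficient fusion under a chosen rule, and reconstruction. The Python sketch below illustrates that generic pipeline under assumed choices (a simple Gaussian two-scale decomposition, an averaging rule for the base layer, an absolute-maximum rule for the detail layer); the function names and rules are illustrative and not taken from the cited works.

    import numpy as np
    from scipy.ndimage import gaussian_filter  # simple low-pass for a two-scale decomposition

    def decompose(img, sigma=5.0):
        """Stage 1: split an image into a low-frequency base and a high-frequency detail layer."""
        img = np.asarray(img, dtype=np.float64)
        base = gaussian_filter(img, sigma)
        detail = img - base
        return base, detail

    def fuse(ir, vis, sigma=5.0):
        """Generic three-stage fusion: decompose, fuse coefficients, reconstruct."""
        # Stage 1: decomposition of both source images
        base_ir, det_ir = decompose(ir, sigma)
        base_vis, det_vis = decompose(vis, sigma)
        # Stage 2: fusion rules -- average the base layers, keep the larger-magnitude detail coefficient
        base_f = 0.5 * (base_ir + base_vis)
        det_f = np.where(np.abs(det_ir) >= np.abs(det_vis), det_ir, det_vis)
        # Stage 3: reconstruction by inverting the additive decomposition
        return base_f + det_f

Multiscale-transform methods such as the NSCT used in the indexed paper replace this two-scale split with a multi-band, multi-direction decomposition, but they follow the same three-stage structure.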
“…The abovementioned methods pay little attention to one phenomenon: pixel intensities carry the thermal radiation information of infrared images, while edges and gradients carry the textural detail information of visible images. Because of these two different image characteristics, edge blur and texture blur [27], [28] will occur. To address this problem and obtain a clearer fused image, we present a new blur-suppression generative adversarial network architecture.…”
Section: Introduction (mentioning)
confidence: 99%
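As an illustration of the reasoning above, and not of the cited network itself, infrared/visible fusion objectives often pair an intensity-fidelity term measured against the infrared image with a gradient-fidelity term measured against the visible image; blur tends to appear when the balance between the two is poor. A minimal Python sketch with assumed weights lambda_int and lambda_grad:

    import numpy as np

    def image_gradients(img):
        """Forward-difference gradients used to measure textural detail."""
        gx = np.diff(img, axis=1, append=img[:, -1:])
        gy = np.diff(img, axis=0, append=img[-1:, :])
        return gx, gy

    def fusion_content_loss(fused, ir, vis, lambda_int=1.0, lambda_grad=5.0):
        """Intensity term follows the infrared image; gradient term follows the visible image."""
        intensity_term = np.mean((fused - ir) ** 2)
        fgx, fgy = image_gradients(fused)
        vgx, vgy = image_gradients(vis)
        gradient_term = np.mean((fgx - vgx) ** 2 + (fgy - vgy) ** 2)
        return lambda_int * intensity_term + lambda_grad * gradient_term

The weighting between the two terms is one way to express the trade-off the citing authors describe; their blur-suppression architecture addresses it with its own design, which is not reproduced here.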