2016 International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET)
DOI: 10.1109/wispnet.2016.7566348

Region based Multi-focus Image Fusion using the spectral parameter Variance

Cited by 3 publications (4 citation statements; citing works published in 2018 and 2020). Citation types: 0 supporting, 4 mentioning, 0 contrasting. References 13 publications.
“…The pooling process takes the maximum of adjacent feature-map values, reducing each feature map by a factor of four. The actual output of the sample can be calculated by (11) and (12). Assuming that the current layer is the l-th layer, Q_P is the actual output of the sample.…”
Section: A Convolutional Neural Network Based on Transfer Learning
Citation type: mentioning
confidence: 99%
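Equations (11) and (12) of the citing paper are not reproduced on this page, but the pooling step it describes is standard. A minimal numpy sketch, assuming 2x2 max pooling with stride 2, which shrinks the feature map by the stated factor of four (half the height, half the width):

```python
import numpy as np

def max_pool_2x2(feature_map: np.ndarray) -> np.ndarray:
    """2x2 max pooling with stride 2: keep the maximum of each
    non-overlapping 2x2 block, reducing the map by a factor of four."""
    h, w = feature_map.shape
    # Trim odd edges so the map splits evenly into 2x2 blocks.
    fm = feature_map[: h - h % 2, : w - w % 2]
    blocks = fm.reshape(fm.shape[0] // 2, 2, fm.shape[1] // 2, 2)
    return blocks.max(axis=(1, 3))

fm = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool_2x2(fm))  # [[ 5.  7.] [13. 15.]] -- one value per 2x2 block
```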
“…However, the results of RID with a single kernel function cannot fully interpret radar signals. Hence, the image-fusion technique proposed in [10] and [11] is applied to combine the various details of the time-frequency (T-F) images of radar signals. Richer and more comprehensive information is then obtained from the fused images.…”
Section: Introduction
Citation type: mentioning
confidence: 99%
“…Method 5: pixel-level fusion scheme based on region-based segmentation and spectral variance, discussed in Ref.…”
Section: Results
Citation type: mentioning
confidence: 99%
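The cited paper's segmentation pipeline is not reproduced on this page. As a rough illustration of the idea the quote names (selecting, per region, the source whose pixels show higher variance, i.e., sharper focus), here is a minimal Python sketch that assumes two aligned grayscale inputs and substitutes fixed square blocks for a true region segmentation:

```python
import numpy as np

def variance_fusion(img_a: np.ndarray, img_b: np.ndarray,
                    block: int = 8) -> np.ndarray:
    """Block-wise multi-focus fusion sketch: for each block, keep the
    source whose block has the larger variance. Fixed blocks stand in
    for the region-based segmentation of the original method."""
    fused = img_a.astype(float).copy()
    h, w = img_a.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            a = img_a[y:y + block, x:x + block].astype(float)
            b = img_b[y:y + block, x:x + block].astype(float)
            if b.var() > a.var():  # higher variance ~ better-focused region
                fused[y:y + block, x:x + block] = b
    return fused
```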
“…Four image pairs from the complete CT-MR data set are shown in Figure A,B, respectively. To present the comparative visual performance of the fused images, some previously developed methods are considered as fusion methods FM-1, FM-2, FM-3, and FM-4 (mentioned above as Method 4, Method 5, Method 7, and Method 9, respectively), together with FM-5 and the proposed MMIF method. Their fusion results are shown in Figure C-H, respectively; observing these fused images shows that the results obtained by the proposed MMIF approach have better visual quality, along with better contrast and edge information, which is supported by the quantitative values of the En, STD, and XEI parameters.…”
Section: Experimentation Details
Citation type: mentioning
confidence: 99%
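The quote scores fused images with En (entropy), STD (standard deviation), and XEI. The first two have standard definitions, sketched below for 8-bit grayscale images; XEI's exact formulation is not given on this page, so it is omitted:

```python
import numpy as np

def entropy(img: np.ndarray) -> float:
    """Shannon entropy (En) of an 8-bit grayscale image, in bits."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())

def std_metric(img: np.ndarray) -> float:
    """Standard deviation (STD): spread of intensities around the
    mean, a common proxy for image contrast."""
    return float(img.astype(float).std())
```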