2013
DOI: 10.1007/s11760-013-0585-4
Multimodal image/video fusion rule using generalized pixel significance based on statistical properties of the neighborhood

Cited by 14 publications (4 citation statements)
References 25 publications
“…The proposed fusion approach is tested on complex microscopy image data sets, and the fusion results are compared with four state-of-the-art methods and four commercial focus stack fusion tools. The state-of-the-art methods include exposure fusion (EF) [36], complex wavelets for extended depth-of-field (WED) [40], the entropy-based fusion method (EBF) [18] and the generalized pixel significance (GPS) based fusion method [41]. In this paper, four commercial fusion tools were assessed, which include extended depth of field (EDF) [42], Helicon focus (HF) [43], PICOLAY (PIC) [44] and the Zerene stacker (ZS) [45].…”
Section: Comparison of Optical Microscopy Multi-focus Data Sets
confidence: 99%
“…The generated multimodal image contains the visible camera's rich appearance information as well as the thermal camera's heat signature information [17][18][19][20][21][22]. The multimodal images are obtained using multi-resolution schemes [17][18][19][20], local neighborhood statistics [23] and learning methods [24][25][26]. Shah et al. [17] generate the multimodal image using wavelet analysis and the contourlet transform.…”
Section: Introduction and Literature Review
confidence: 99%
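The multi-resolution schemes mentioned in this excerpt decompose each source image into band-pass levels, keep the stronger detail in each band, and collapse the result back to full resolution. The following is a minimal, illustrative sketch of such a scheme, a generic Laplacian-pyramid rule with max-absolute detail selection; it is not the specific method of refs [17]-[20] or of the paper under discussion, and the function names and parameters are assumptions made for illustration.

# Generic Laplacian-pyramid fusion sketch (illustrative only; not the cited
# methods). Requires OpenCV and NumPy.
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    """Build a Laplacian pyramid; the last element is the coarse residual."""
    pyr, cur = [], img.astype(np.float64)
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyr.append(cur - up)   # band-pass detail at this scale
        cur = down
    pyr.append(cur)            # low-pass residual
    return pyr

def fuse_multiresolution(img_a, img_b, levels=4):
    """Fuse two co-registered grayscale images band by band."""
    pa = laplacian_pyramid(img_a, levels)
    pb = laplacian_pyramid(img_b, levels)
    fused_pyr = []
    for a, b in zip(pa[:-1], pb[:-1]):
        # Keep whichever source has the stronger detail at each pixel.
        fused_pyr.append(np.where(np.abs(a) >= np.abs(b), a, b))
    fused_pyr.append(0.5 * (pa[-1] + pb[-1]))   # average the coarse residual
    # Collapse the pyramid back to a full-resolution image.
    out = fused_pyr[-1]
    for band in reversed(fused_pyr[:-1]):
        out = cv2.pyrUp(out, dstsize=(band.shape[1], band.shape[0])) + band
    return out

The max-absolute selection is one of the simplest band-level rules; the surveyed methods differ mainly in the transform used (wavelets, contourlets) and in how the band coefficients are weighted.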
“…The weights are calculated using the significance of the source pixels. For example, Shah et al. [23] use local neighborhood eigenvalue statistics to calculate the significance of the source pixels. Liu et al. [18] propose a multi-resolution fusion scheme to generate the multimodal image.…”
Section: Introduction and Literature Review
confidence: 99%
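The pixel-significance weighting described in this excerpt can be pictured with a short sketch: a per-pixel significance map is derived from local neighborhood statistics, the maps are normalized into weights, and the sources are averaged with those weights. Here the significance is taken as the largest eigenvalue of a smoothed structure tensor, which is an assumption standing in for the eigenvalue statistics of [23]; this is a minimal illustration, not the exact rule of the cited papers, and all function names and parameters are hypothetical.

# Significance-weighted fusion sketch (illustrative only; not the authors'
# exact formulation). Requires NumPy and SciPy.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def pixel_significance(img, sigma=2.0):
    """Largest eigenvalue of the local structure tensor as a significance map."""
    gx = sobel(img, axis=1, mode="reflect")
    gy = sobel(img, axis=0, mode="reflect")
    # Locally averaged second-moment (structure tensor) entries.
    jxx = gaussian_filter(gx * gx, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    # Closed-form largest eigenvalue of the 2x2 symmetric tensor per pixel.
    trace = jxx + jyy
    root = np.sqrt((jxx - jyy) ** 2 + 4.0 * jxy ** 2)
    return 0.5 * (trace + root)

def fuse(images, sigma=2.0, eps=1e-12):
    """Weighted-average fusion of co-registered grayscale images in [0, 1]."""
    stack = np.stack([np.asarray(im, dtype=np.float64) for im in images])
    weights = np.stack([pixel_significance(im, sigma) for im in stack])
    weights /= weights.sum(axis=0) + eps      # normalize per pixel
    return (weights * stack).sum(axis=0)

if __name__ == "__main__":
    # Two synthetic "modalities": one carries a bright blob, the other an edge.
    a = np.zeros((128, 128)); a[40:60, 40:60] = 1.0
    b = np.zeros((128, 128)); b[:, 64:] = 0.5
    fused = fuse([a, b])
    print(fused.shape, fused.min(), fused.max())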
“…Other fields where image fusion from various data sources (sometimes referred to as multimodal image fusion) is an active area of research include video processing [16], [17], biometric identification/classification [18]-[21], image fusion for visualization enhancement [22], and industrial inspection [23], among others [24]-[27]. Image fusion methods may also be aimed at solving only one particular type of fusion problem, but the current tendency is toward methodologies that can solve image fusion problems in more than one area (e.g., fusion of medical images, or fusion of RGB and thermal images, using the same mathematical framework) [28]-[32].…”
Section: Introduction
confidence: 99%