2017
DOI: 10.1002/ima.22261
IGM‐based perceptual multimodal medical image fusion using free energy motivated adaptive PCNN

Abstract: Multimodal medical image fusion merges two medical images to produce a visually enhanced fused image, to provide more accurate comprehensive pathological information to doctors for better diagnosis and treatment. In this article, we present a perceptual multimodal medical image fusion method with free energy (FE) motivated adaptive pulse coupled neural network (PCNN) by employing Internal Generative Mechanism (IGM). First, source images are divided into predicted layers and detail layers with Bayesian prediction…
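The two-layer decomposition the abstract describes can be illustrated with a minimal sketch. The paper's Bayesian prediction is replaced here by a simple local-mean predictor, so `igm_decompose`, its window size, and the padding mode are illustrative assumptions, not the authors' actual method:

```python
import numpy as np

def igm_decompose(img, win=3):
    """Split an image into a predicted layer and a detail layer.

    The predicted layer here is a local-mean prediction (a stand-in for
    the Bayesian prediction in the paper); the detail layer is the
    prediction residual, so predicted + detail reconstructs the input.
    """
    img = img.astype(np.float64)
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    h, w = img.shape
    predicted = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            predicted[i, j] = padded[i:i + win, j:j + win].mean()
    detail = img - predicted
    return predicted, detail
```

By construction the decomposition is lossless: adding the two layers back together recovers the source image exactly, which is what lets the fusion rules treat each layer independently.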

Cited by 7 publications (3 citation statements) | References 30 publications
“…Conventionally, the values of the PCNN parameters are selected through successive trials, but recently Xu et al [28] presented different methods for selecting the PCNN parameters. Moreover, other PCNN-based approaches adapt the linking parameter based on local contrast, entropy (EN), directional gradient, saliency, fractal dimension, local visibility, and the intensity of pixels or coefficients [16,[28][29][30][31][32]. They exhibited improved fusion performance; however, successive trial-based parameter selection is still involved in those approaches.…”
Section: Related Work
confidence: 99%
“…Compared with a single pixel value in an image, the human visual system is more sensitive to the edges, direction, and texture information of the image [38]. Based on this, definition evaluation metrics, including spatial frequency (SF), energy of Laplacian of the image (EOL), and sum-modified-Laplacian (SML), among others, are employed to evaluate the fusion results.…”
Section: A Robust Adaptive Dictionary Training
confidence: 99%
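The spatial frequency (SF) metric mentioned in the statement above has a standard closed form: the root of the combined mean-square energies of horizontal and vertical first differences. A minimal sketch (the function name is ours, and normalizing by the number of differences rather than by the full image size M×N is a minor variant of the textbook definition):

```python
import numpy as np

def spatial_frequency(img):
    """Spatial frequency of an image: sqrt(RF^2 + CF^2), where RF and CF
    are the RMS of row-wise and column-wise first differences."""
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)
```

A constant image has SF = 0, and SF grows with edge and texture activity, which is why it is used as a no-reference sharpness score for fused images.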
“…In past years, all free parameters were selected through successive experiments on a regular interval (a hit-and-trial approach) [5,12,27]. To overcome the limitation of manual selection, some fusion methods adaptively select a few free parameters using local contrast, entropy, directional gradient, saliency, local visibility, and the intensity of pixels or coefficients [23,25,28,29,30,31,32]. They showed improved fusion performance; however, successive trial-based parameter selection is still involved in those approaches.…”
Section: Introduction
confidence: 99%
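The adaptive parameter selection these statements describe can be sketched by mapping a local image feature to a per-pixel PCNN linking strength. Below, local contrast (window standard deviation) is the chosen feature; the function name, window size, and the [beta_min, beta_max] range are illustrative assumptions, not taken from any of the cited papers:

```python
import numpy as np

def adaptive_linking_strength(img, win=3, beta_min=0.2, beta_max=1.0):
    """Per-pixel PCNN linking strength beta derived from local contrast.

    Local contrast is measured as the standard deviation in a win x win
    window, normalized to [0, 1], then mapped linearly into
    [beta_min, beta_max]: flat regions get weak linking, textured
    regions get strong linking.
    """
    img = img.astype(np.float64)
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    h, w = img.shape
    std = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            std[i, j] = padded[i:i + win, j:j + win].std()
    c = std / std.max() if std.max() > 0 else std
    return beta_min + (beta_max - beta_min) * c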