2003
DOI: 10.1117/12.480867

Similarity metrics based on nonadditive entropies for 2D-3D multimodal biomedical image registration

Abstract: Information theoretic similarity metrics, including mutual information, have been widely and successfully employed in multimodal biomedical image registration. These metrics are generally based on the Shannon-Boltzmann-Gibbs definition of entropy. However, other entropy definitions exist, including generalized entropies, which are parameterized by a real number. New similarity metrics can be derived by exploiting the additivity and pseudoadditivity properties of these entropies. In many cases, use of these mea…
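
The "pseudoadditivity" the abstract refers to is the defining property of the Tsallis entropy, the usual example of a generalized entropy parameterized by a real number q. The following is a minimal sketch (an illustration, not code from the paper) that numerically verifies pseudoadditivity, S_q(A,B) = S_q(A) + S_q(B) + (1-q) S_q(A) S_q(B), for independent distributions:

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis entropy S_q(p) = (1 - sum(p**q)) / (q - 1).
    Recovers the Shannon entropy (in nats) in the limit q -> 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # ignore zero-probability bins
    if np.isclose(q, 1.0):
        return -np.sum(p * np.log(p))  # Shannon limit
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

# Pseudoadditivity for independent distributions A and B:
# S_q(A, B) = S_q(A) + S_q(B) + (1 - q) * S_q(A) * S_q(B)
q = 1.5
a = np.array([0.2, 0.3, 0.5])
b = np.array([0.6, 0.4])
joint = np.outer(a, b).ravel()  # joint distribution under independence
lhs = tsallis_entropy(joint, q)
rhs = (tsallis_entropy(a, q) + tsallis_entropy(b, q)
       + (1 - q) * tsallis_entropy(a, q) * tsallis_entropy(b, q))
assert np.isclose(lhs, rhs)
```

At q = 1 the extra cross term vanishes and ordinary additivity (and Shannon entropy) is recovered, which is why the similarity metrics below are typically studied for q near 1.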

Cited by 37 publications (27 citation statements)
References 26 publications
“…Likewise, Wachowiak et al. [30] found that Tsallis entropy within normalized mutual information (NMI) gave the best MRI intra-modality alignment, with the difference that the lowest errors were obtained with a higher q value (q = 1.1). In their study, the authors used the Euclidean distance and root-mean-square (RMS) error to determine the correctness ratio (Cr), and the results showed that Tsallis entropy generally performed better (producing fewer misregistrations) than Shannon NMI for q values very close to 1 (q = 0.9, 1.1) [31]. They argue that the poor performance of low q values in their study might be due to the use of the Powell optimization method for registration.…”
Section: Discussion (mentioning)
confidence: 99%
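
The Tsallis NMI discussed above can be read as Studholme's ratio (H(A) + H(B)) / H(A,B) with Tsallis entropies S_q substituted for the Shannon terms, computed from a joint intensity histogram. The sketch below assumes exactly that substitution; it is one plausible reading, not the cited authors' implementation, and the bin count is an arbitrary choice:

```python
import numpy as np

def tsallis_entropy(p, q):
    p = p[p > 0]
    if np.isclose(q, 1.0):
        return -np.sum(p * np.log(p))
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def tsallis_nmi(img_a, img_b, q=1.1, bins=64):
    """Normalized mutual information with Tsallis entropies:
    (S_q(A) + S_q(B)) / S_q(A, B), from a joint intensity histogram."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pab = hist / hist.sum()                     # joint distribution
    pa, pb = pab.sum(axis=1), pab.sum(axis=0)   # marginal distributions
    return ((tsallis_entropy(pa, q) + tsallis_entropy(pb, q))
            / tsallis_entropy(pab.ravel(), q))
```

With q = 1 this reduces to Shannon NMI; q = 0.9 and q = 1.1 correspond to the near-unity settings compared in the statement above.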
“…Parameters much different from unity produce rough similarity metric functions that are difficult to optimize [6,7]. The I_α^R and I_α^S metrics, with α slightly greater than 1 (α ∈ [1.1, 2]), were the overall best performers, with about the same efficiency as I. Misregistrations often result from entrapment in local extrema of the registration function, to which local optimization methods are susceptible.…”
Section: Discussion (mentioning)
confidence: 99%
“…The I_α^R and I_α^S metrics, with α slightly greater than 1 (α ∈ [1.1, 2]), were the overall best performers, with about the same efficiency as I. Misregistrations often result from entrapment in local extrema of the registration function, to which local optimization methods are susceptible. Many of the generalized metrics have smoother registration surfaces than MI [7], and are more sensitive to changes in dependence (Fig. 1).…”
Section: Discussion (mentioning)
confidence: 99%
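
Here I_α^R and I_α^S denote Rényi- and Tsallis-entropy-based generalizations of the mutual information I. The paper's exact normalization is not reproduced in these statements, so the sketch below uses one common pair of constructions, stated as an assumption: the Rényi divergence between the joint distribution and the product of its marginals, and the pseudoadditive Tsallis mutual information.

```python
import numpy as np

def renyi_mi(pab, alpha):
    """Renyi-divergence form of mutual information (alpha != 1):
    D_alpha(p_AB || p_A p_B)
      = log(sum p_ab**alpha * (p_a p_b)**(1 - alpha)) / (alpha - 1)."""
    pa = pab.sum(axis=1, keepdims=True)   # marginal of A, shape (n, 1)
    pb = pab.sum(axis=0, keepdims=True)   # marginal of B, shape (1, m)
    prod = pa * pb                        # product of marginals
    mask = (pab > 0) & (prod > 0)
    s = np.sum(pab[mask] ** alpha * prod[mask] ** (1.0 - alpha))
    return np.log(s) / (alpha - 1.0)

def tsallis_mi(pab, q):
    """Pseudoadditive (Tsallis) mutual information (q != 1):
    S_q(A) + S_q(B) + (1 - q) * S_q(A) * S_q(B) - S_q(A, B)."""
    def S(p):
        p = p[p > 0]
        return (1.0 - np.sum(p ** q)) / (q - 1.0)
    sa, sb = S(pab.sum(axis=1)), S(pab.sum(axis=0))
    return sa + sb + (1 - q) * sa * sb - S(pab.ravel())
```

Both constructions reduce to the Shannon mutual information as α, q → 1, consistent with the observation above that parameters slightly above 1 behave best when the registration function is driven by a local optimizer such as Powell's method.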