2013
DOI: 10.1016/j.compmedimag.2013.08.004
Improving accuracy and efficiency of mutual information for multi-modal retinal image registration using adaptive probability density estimation

Abstract: Mutual information (MI) is a popular similarity measure for performing image registration between different modalities. MI makes a statistical comparison between two images by computing the entropy from the probability distribution of the data. Therefore, to obtain an accurate registration it is important to have an accurate estimation of the true underlying probability distribution. Within the statistics literature, many methods have been proposed for finding the 'optimal' probability density, with the aim of…
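The statistical comparison the abstract describes can be sketched as an MI estimate built from a joint histogram. This is a minimal illustration only: the fixed 32-bin histogram below is a stand-in assumption, not the adaptive probability density estimation the paper actually proposes, and the function name is ours.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Estimate MI between two images from their joint intensity histogram.

    Sketch only: a fixed-bin 2-D histogram approximates the joint
    probability distribution; the paper replaces this with an
    adaptive density estimate.
    """
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()            # joint probability p(x, y)
    px = pxy.sum(axis=1)                 # marginal p(x)
    py = pxy.sum(axis=0)                 # marginal p(y)
    nz = pxy > 0                         # skip empty bins to avoid log(0)
    # MI = sum p(x,y) * log( p(x,y) / (p(x) p(y)) )
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))
```

An image registered to itself maximizes MI (it reduces to the entropy of the binned intensities), while two unrelated images score near zero, which is what makes MI usable as a registration objective.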

Cited by 68 publications (39 citation statements)
References 36 publications
“…For retinal images, usually methods based on mutual information (Pluim et al., 2003; Legg et al., 2013) have been proposed. Instead of employing all image pixels, certain feature-based methods rely on carefully selected, localized features.…”
Section: Global vs. Local Methods
Confidence: 99%
“…N = 29,861 in e06 (Sturges, 1926). Since Sturges' Rule is known to lead to an over-smoothed histogram, especially for large samples, and only considers normal, not skewed, distributions (Legg et al., 2013), a higher number was chosen. This also results in a more representative Cumulative Distribution Function (CDF) and leads to preservation of the shape of both density distributions in Figure 9, although they span different intervals.…”
Section: Results
Confidence: 99%
“…The best kernel width selection methods include rules of thumb, oversmoothing, least-squares cross-validation, biased cross-validation, direct plug-in methods, and the smoothed bootstrap. The widely used rule for approximating the kernel width is Scott's rule [22, 24, 32], where h is expressed in (4). However, these strategies depend on the complete sample, which is impracticable in a data-stream scenario.…”
Section: Kernel Width Selection
Confidence: 99%
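Scott's rule appears in several forms in the literature; since the cited equation (4) is not reproduced here, the sketch below assumes the common one-dimensional KDE bandwidth version, h = σ̂ · n^(−1/5), the same factor SciPy's `gaussian_kde` uses by default in 1-D. Note that it needs the sample standard deviation of the full sample, which is exactly why it is impracticable in a streaming setting.

```python
import numpy as np

def scott_bandwidth(sample):
    """Scott's rule-of-thumb kernel width for 1-D Gaussian KDE.

    Assumed form: h = sigma_hat * n**(-1/5). Whether this matches the
    equation (4) referenced in the citing paper is an assumption here.
    """
    sample = np.asarray(sample, dtype=float)
    n = sample.size
    return sample.std(ddof=1) * n ** (-1.0 / 5.0)
```

Because both σ̂ and n refer to the complete sample, every new observation changes h, so a streaming estimator must either approximate these statistics incrementally or use a different width-selection strategy.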