2006
DOI: 10.1007/11889762_9
Comparative Analysis of Kernel Methods for Statistical Shape Learning

Abstract: Prior knowledge about shape can be quite important for image segmentation. In particular, a number of methods have been proposed to compute statistics on a set of training shapes, which are then used as a shape prior for a given image segmentation task. In this work, we perform a comparative analysis of shape learning techniques such as linear PCA, kernel PCA, and locally linear embedding, and propose a new method, kernelized locally linear embedding, for shape analysis. The s…
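One of the techniques compared in the abstract, kernel PCA, can be sketched generically: compute a kernel (Gram) matrix over the training shapes, center it in feature space, and eigendecompose it to obtain nonlinear principal components. This is a minimal illustrative sketch, not the paper's implementation; the RBF bandwidth `gamma` and the toy data are assumptions.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=0.1):
    """Minimal RBF-kernel PCA sketch: project training data onto the
    leading principal components in feature space."""
    # Pairwise squared Euclidean distances between rows of X
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    K = np.exp(-gamma * d2)                       # RBF (Gaussian) kernel matrix
    # Center the kernel matrix in feature space
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    # Eigendecompose (eigh returns ascending order) and keep the top components
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # Scale eigenvectors by 1/sqrt(eigenvalue) so projections are normalized
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))
    return Kc @ alphas                            # low-dimensional shape coordinates

# Toy stand-ins for training shapes: 20 flattened embeddings of length 16
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 16))
Z = kernel_pca(X, n_components=2)
print(Z.shape)  # (20, 2)
```

In the shape-learning setting the rows of `X` would typically be flattened signed-distance functions of aligned training shapes rather than random vectors.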

Cited by 9 publications (7 citation statements)
References 14 publications (27 reference statements)
“…Later, Twining and Taylor (2001) proposed the use of Kernel PCA instead, which they claim to be more general than other methods. Recently, there is a growing interest in using Kernel PCA for implicit shape analysis (Cremers et al, 2003;Dambreville et al, 2008;Rathi et al, 2006). Major reason for this development is that non-linear decompositions can cope with the level-set specific problem mentioned in Section 1.1.2, i.e.…”
Section: Dimensionality Reduction
confidence: 99%
“…This implies that there is no hope in obtaining a dense training set, even with a loose grid (say N bins for each degree of freedom, which makes N d bins), even with billions of examples. Consequently, methods involving only distances [12] or nearest neighbors [6,13,16] are not likely to be successful for high d, whereas our approach based on transport of …”
Section: Propagating Information
confidence: 93%
“…Distance-based algorithms, such as kernel methods, were proposed [12,6] to handle high variability, but at the price of considering only distances between shapes, instead of deformations, thus losing the crucial information they carry. These methods consider training sets as graphs, whose nodes are shapes and whose edges are distances (for a particular metric chosen).…”
Section: Introduction
confidence: 99%
“…Kernel density estimation in feature space was introduced by Cremers et al [15] to incorporate the probability of 2D silhouettes of 3D objects in 2D image segmentation. An overview on related kernel density methods is given by Rathi et al [37]. An abstract distance measure between objects in (different) metric spaces is the Gromov-Hausdorff distance, which allows to compute an isometrically invariant distance measure between shapes.…”
Section: Review Of Related Work
confidence: 99%
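The kernel density estimation idea mentioned in the last statement can be sketched generically as a Parzen-window estimate over shape embeddings. This is an illustrative sketch only; the function name, bandwidth `sigma`, and toy data are assumptions, not details from the cited works.

```python
import numpy as np

def parzen_density(x, samples, sigma=1.0):
    """Parzen (kernel) density estimate at point x with an isotropic
    Gaussian kernel over the given training samples."""
    d = samples.shape[1]
    diffs = samples - x                         # differences to each training sample
    sq = np.sum(diffs**2, axis=1)               # squared distances
    norm = (2.0 * np.pi * sigma**2) ** (d / 2.0)  # Gaussian normalization constant
    return np.mean(np.exp(-sq / (2.0 * sigma**2))) / norm

# Toy training "shape embeddings": 50 points in 3-D
rng = np.random.default_rng(1)
train = rng.normal(size=(50, 3))
p = parzen_density(np.zeros(3), train, sigma=1.0)
print(p)  # density is strictly positive
```

In the segmentation setting, such a density over a low-dimensional shape representation serves as the shape prior that biases the evolving contour toward the training set.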