2013 IEEE Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2013.13

Rolling Riemannian Manifolds to Solve the Multi-class Classification Problem

Abstract: In the past few years there has been growing interest in geometric frameworks for learning supervised classification models on Riemannian manifolds [31,27]. A popular framework, valid over any Riemannian manifold, was proposed in [31] for binary classification. When moving from binary to multi-class classification, this paradigm is no longer valid, due to the spread of multiple positive classes on the manifold [27]. It is then natural to ask whether the multi-class paradigm could be extended to operate on a lar…

Cited by 23 publications (41 citation statements) · References 24 publications · Citing publications: 2014–2020
“…Many non-parametric set modeling methods have also been proposed, including subspace [10], [1], [15], manifold [16], [17], [4], [11], [18], affine hull [5], [7], convex hull [5], and covariance-matrix-based ones [18], [19], [20]. The method in [10] employs canonical correlation to measure the similarity between two sets.…”
Section: Introduction (mentioning, confidence: 99%)
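As a hedged illustration of the canonical-correlation similarity attributed to [10] above, the sketch below (hypothetical helper names, assuming each image set is summarized by a low-dimensional orthonormal basis obtained via SVD) computes the cosines of the principal angles between the two subspaces; their sum is one common set-to-set similarity score.

```python
import numpy as np

def canonical_correlations(X1, X2, d=5):
    """Cosines of the principal angles between the d-dimensional
    subspaces spanned by two image sets (columns = vectorized images)."""
    U1, _, _ = np.linalg.svd(X1, full_matrices=False)   # orthonormal basis of set 1
    U2, _, _ = np.linalg.svd(X2, full_matrices=False)   # orthonormal basis of set 2
    # Singular values of U1[:, :d].T @ U2[:, :d] are the canonical correlations.
    s = np.linalg.svd(U1[:, :d].T @ U2[:, :d], compute_uv=False)
    return np.clip(s, 0.0, 1.0)

# Toy usage: a larger sum of correlations means more similar sets.
set_a = np.random.randn(400, 30)   # 30 images with 400 pixels each
set_b = np.random.randn(400, 25)
similarity = canonical_correlations(set_a, set_b).sum()
```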
“…In [18], an image set is represented by a covariance matrix, and a Riemannian kernel function is defined to measure the similarity between two image sets via a mapping from the Riemannian manifold to a Euclidean space. With the kernel function between two image sets, traditional discriminant learning methods, e.g., linear discriminant analysis [26], partial least squares [27], and kernel machines, can be used for image set classification [19], [20]. The disadvantages of covariance-matrix-based methods include the computational complexity of the eigen-decomposition of symmetric positive-definite (SPD) matrices and the curse of dimensionality with a limited number of training sets.…”
Section: Introduction (mentioning, confidence: 99%)
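The covariance-based pipeline quoted above can be sketched as follows. This is an illustrative assumption, not the cited method itself: the Riemannian kernel in [18] may differ, so the common log-Euclidean kernel is used here, whose matrix logarithm is exactly the SPD eigen-decomposition step whose cost the passage points out.

```python
import numpy as np

def covariance_descriptor(X, eps=1e-6):
    """Represent an image set (rows = samples, columns = features)
    by a regularized SPD covariance matrix."""
    C = np.cov(X, rowvar=False)
    return C + eps * np.eye(C.shape[0])

def spd_log(C):
    """Matrix logarithm via eigen-decomposition of the SPD matrix --
    the step whose cost the quoted passage refers to."""
    w, V = np.linalg.eigh(C)
    return (V * np.log(w)) @ V.T

def log_euclidean_kernel(C1, C2, gamma=0.1):
    """RBF kernel on the log-Euclidean distance, i.e. a mapping from
    the SPD manifold to a Euclidean space where linear methods apply."""
    d = np.linalg.norm(spd_log(C1) - spd_log(C2), ord='fro')
    return np.exp(-gamma * d ** 2)

# Toy usage with two image sets described by 20-dimensional features.
set_a, set_b = np.random.randn(50, 20), np.random.randn(60, 20)
k = log_euclidean_kernel(covariance_descriptor(set_a), covariance_descriptor(set_b))
```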
“…This algorithm has been illustrated for the ellipsoid equipped with a non-Euclidean metric in [16] and for other Riemannian manifolds in [10]. The same idea has been proposed in [5] to solve multi-class classification problems in the context of pattern recognition.…”
Section: Introduction (mentioning, confidence: 99%)
“…Unfortunately, the kernel-based approaches cannot scale easily, as the Gram matrix computation is O(n²), where n is the number of data points. Also, it is often quite challenging to kernelise the existing algorithms that do not have known kernelised versions [22]. Furthermore, Nikhil et al. demonstrate that clustering data in the RKHS may lead to unexpected results, since the clusters obtained in the…”
(mentioning, confidence: 99%)
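To make the O(n²) scaling argument concrete, here is a minimal sketch (a generic RBF kernel with illustrative names, not the kernel of any cited work) of the full Gram matrix computation whose quadratic time and memory cost the passage refers to.

```python
import numpy as np

def rbf_gram(X, gamma=1.0):
    """Full RBF Gram matrix: all n*n pairwise kernel evaluations,
    which is what limits the scalability of kernel-based approaches."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T   # squared Euclidean distances
    return np.exp(-gamma * np.maximum(d2, 0.0))

X = np.random.randn(1000, 64)   # n = 1000 data points
K = rbf_gram(X)                 # 1000 x 1000 matrix: O(n^2) time and memory
```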