Efficient clustering on Riemannian manifolds: A kernelised random projection approach
2016 · DOI: 10.1016/j.patcog.2015.09.017

Abstract: Reformulating computer vision problems over Riemannian manifolds has demonstrated superior performance in various computer vision applications. This is because visual data often forms a special structure lying on a lower-dimensional space embedded in a higher-dimensional space. However, since these manifolds belong to non-Euclidean topological spaces, exploiting their structures is computationally expensive, especially when one considers the clustering analysis of massive amounts of data. To this end, we propo…

Cited by 21 publications (14 citation statements) · References 56 publications (138 reference statements)
“…Despite its excellent performance, the current proposal requires computing the kernel similarities from each query point to all training data, which could introduce a significant bottleneck in the process. Thus, in future work we would further develop an explicit feature map that needs only a significantly smaller set of training data, by adapting the techniques described in [3,41].…”
Section: Discussion
confidence: 99%
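The explicit feature map mentioned in this excerpt can be sketched with a Nyström-style approximation, which replaces similarities to the full training set with similarities to a small landmark subset. The RBF kernel, landmark count, and toy data below are illustrative assumptions, not the authors' actual construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, gamma=0.1):
    # Pairwise RBF similarities k(a, b) = exp(-gamma * ||a - b||^2).
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

X = rng.normal(size=(200, 5))                     # toy dataset (assumption)
idx = rng.choice(len(X), size=20, replace=False)
L = X[idx]                                        # small landmark set

# Eigendecompose the small landmark-by-landmark kernel matrix.
K_mm = rbf(L, L)
vals, vecs = np.linalg.eigh(K_mm)
vals = np.clip(vals, 1e-10, None)                 # guard tiny eigenvalues

def phi(Q):
    # Explicit map: phi(q) = Lambda^{-1/2} U^T k(q, landmarks).
    # Inner products phi(x) @ phi(y) approximate k(x, y), but each query
    # touches only the 20 landmarks instead of all 200 training points.
    return rbf(Q, L) @ vecs / np.sqrt(vals)

# On the landmarks themselves the approximation is exact (error near
# machine precision).
print(np.abs(phi(L) @ phi(L).T - K_mm).max())
```

This illustrates why a small landmark set suffices: the per-query cost drops from one kernel evaluation per training point to one per landmark.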
“…In addition, it makes use of the specific data to be projected during the training stage (as opposed to standard RP, in which the projection matrix R is completely data-independent and no training stage is required). Later on, this kernelized version of RP was extended and applied to the problem of image clustering [32]. The authors proposed three versions of the algorithm: Kernelised Gaussian Random Projection (KG-RP), Kernelised Orthonormal Random Projection (KORP), and KPCA-based Random Projection (KPCA-RP).…”
Section: Related Work
confidence: 99%
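The variants named in this excerpt share one skeleton: a random linear map applied in the (implicit) kernel feature space, expressed through kernel similarities to the training set. A minimal KG-RP-style sketch follows; the RBF kernel, dimensions, and toy data are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(A, B, gamma=0.1):
    # Pairwise RBF similarities (a stand-in for a manifold-aware kernel).
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

X_train = rng.normal(size=(100, 10))  # toy training set (assumption)
X_query = rng.normal(size=(5, 10))

# Kernelised random projection: each random direction in feature space is
# a Gaussian-weighted combination of the training points' feature images,
# so projecting reduces to multiplying kernel similarities by R.
k = 16
R = rng.normal(size=(100, k)) / np.sqrt(k)
Z = rbf(X_query, X_train) @ R         # low-dimensional embedding, (5, 16)
print(Z.shape)
```

Note how this also exhibits the bottleneck discussed earlier: every query needs its kernel similarity to all 100 training points before the cheap random projection can be applied.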
“…Over the years, various authors [3,32] have explored the possibility of performing random projections from the feature space associated with different kernel functions. This is of interest for two main reasons: (1) if the JL lemma is satisfied, the pairwise distances between samples in the kernel feature space will be preserved in the resulting representation, so the low-dimensional projected points will retain most of the structure of the kernel feature space; and (2) since Random Projection preserves separability margins [3], classification problems may become more linearly separable after a Random Projection from a kernel feature space.…”
Section: Introduction
confidence: 99%
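The distance-preservation claim in point (1) is easy to check empirically with an ordinary Gaussian random projection; a small JL-style demonstration (all sizes are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, k = 60, 500, 128
X = rng.normal(size=(n, d))

# JL-style Gaussian random projection, scaled by 1/sqrt(k) so that
# squared distances are preserved in expectation.
R = rng.normal(size=(d, k)) / np.sqrt(k)
Y = X @ R

def pairwise(A):
    # Full pairwise Euclidean distance matrix.
    diff = A[:, None, :] - A[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

mask = ~np.eye(n, dtype=bool)                 # ignore zero self-distances
ratios = pairwise(Y)[mask] / pairwise(X)[mask]

# The ratios concentrate around 1: pairwise distances survive the
# 500 -> 128 reduction with small distortion.
print(ratios.min(), ratios.max())
```

The spread of the ratios shrinks as k grows, which is the quantitative content of the JL lemma.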