2009
DOI: 10.1007/978-3-642-04617-9_19

Generalized Clustering via Kernel Embeddings

Abstract: We generalize traditional goals of clustering towards distinguishing components in a non-parametric mixture model. The clusters are not necessarily based on point locations, but on higher-order criteria. This framework can be implemented by embedding probability distributions in a Hilbert space. The corresponding clustering objective is very general and relates to a range of common clustering concepts.
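A minimal sketch of the embedding idea described in the abstract (illustrative only, with hypothetical helper names; this is not the paper's actual algorithm): each group of points is mapped to its empirical kernel mean embedding, and a candidate two-way split is scored by the squared RKHS distance between the two means, i.e., their (biased) squared MMD.

```python
# Hypothetical sketch: score a 2-way partition by the squared RKHS
# distance between the clusters' empirical kernel mean embeddings.
# Illustrates "embedding probability distributions in a Hilbert space";
# not the authors' algorithm.
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian RBF kernel matrix k(a, b) = exp(-gamma * ||a - b||^2)."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def mmd2(X, Y, gamma=1.0):
    """Biased empirical estimate of the squared MMD between samples X, Y:
    ||mu_X - mu_Y||_H^2 = mean k(X,X) - 2 mean k(X,Y) + mean k(Y,Y)."""
    return (rbf_kernel(X, X, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean())

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(100, 2))  # sample from component 1
Y = rng.normal(3.0, 1.0, size=(100, 2))  # sample from component 2
print(mmd2(X, Y))  # large value: the two empirical distributions differ
```

In a full clustering method one would search over partitions to maximize such an objective; here the split is fixed just to show the computation.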

Cited by 30 publications (19 citation statements, published 2012–2021).
References 11 publications.
“…EMD and other spatially aware (i.e., based on the underlying metric space) distances have been applied to clustering of points in space [5,9]. However, in these cases the clusters of points are treated as distributions and EMD is used to compute/compare these clusters.…”
Section: Related Work (mentioning)
confidence: 99%
“…The Maximum Mean Discrepancy (MMD) [35] between the two sets of data with probability distributions P_1 and P_2 is a metric representative of the distance between the means of those distributions and it has been defined as…”
Section: Maximum Mean Discrepancy (mentioning)
confidence: 99%
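The quote truncates before the equation itself. For reference, the commonly used definition of MMD in an RKHS H with kernel k and feature map φ (the standard form, not necessarily the exact equation in the citing paper) is:

```latex
\mathrm{MMD}(P_1, P_2)
  = \bigl\lVert \mathbb{E}_{x \sim P_1}[\varphi(x)]
      - \mathbb{E}_{y \sim P_2}[\varphi(y)] \bigr\rVert_{\mathcal{H}},
\qquad
\mathrm{MMD}^2(P_1, P_2)
  = \mathbb{E}[k(x, x')] - 2\,\mathbb{E}[k(x, y)] + \mathbb{E}[k(y, y')],
```

with x, x' drawn independently from P_1 and y, y' drawn independently from P_2.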
“…It has also been shown by Jegelka et al. [35] that MMD can be expressed in terms of the dependency between two sets of variables:…”
Section: Maximum Mean Discrepancy (mentioning)
confidence: 99%
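A sketch of the relation this quote alludes to, under the assumption of the standard two-sample setup (pooled sample Z with a binary label variable Y marking group membership; the proportionality constant depends on the group sizes and is omitted here):

```latex
\mathrm{MMD}^2(P_1, P_2) \;\propto\; \mathrm{HSIC}(Z, Y),
```

i.e., maximizing the discrepancy between the two groups is, up to scaling, the same as maximizing the dependence between the pooled data and the group labels.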
“…One can derive HSIC as a measure of (in)dependence between two random variables X and Y using two different approaches: first, by computing the Hilbert-Schmidt norm of the cross-covariance operator in RKHSs, as shown in [32,33]; or second, by computing the maximum mean discrepancy (MMD) of two distributions mapped to a high-dimensional space, i.e., computed in RKHSs [67,68]. I believe that this latter approach is more straightforward and hence use it to describe HSIC.…”
Section: Hilbert-Schmidt Independence Criterion (mentioning)
confidence: 99%
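For concreteness, a minimal runnable sketch of the standard biased empirical HSIC estimator of Gretton et al. (the general estimator, not code from the cited work): HSIC(X, Y) is approximated by (n-1)^{-2} tr(K H L H), where K and L are kernel matrices on X and Y and H is the centering matrix.

```python
# Sketch of the biased empirical HSIC estimator:
# HSIC(X, Y) ~= (n - 1)^{-2} * trace(K @ H @ L @ H),
# where K, L are RBF kernel matrices on X and Y and H centers them.
import numpy as np

def rbf(A, gamma=1.0):
    """RBF kernel matrix of a sample with itself."""
    sq = ((A[:, None, :] - A[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def hsic(X, Y, gamma=1.0):
    n = X.shape[0]
    K, L = rbf(X, gamma), rbf(Y, gamma)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 1))
print(hsic(X, X ** 2))                      # dependent: noticeably > 0
print(hsic(X, rng.normal(size=(200, 1))))   # independent: near 0
```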