Proceedings of the 2004 SIAM International Conference on Data Mining
DOI: 10.1137/1.9781611972740.22
Clustering with Bregman Divergences

Abstract: A wide variety of distortion functions are used for clustering, e.g., squared Euclidean distance, Mahalanobis distance and relative entropy. In this paper, we propose and analyze parametric hard and soft clustering algorithms based on a large class of distortion functions known as Bregman divergences. The proposed algorithms unify centroid-based parametric clustering approaches, such as classical k-means and information-theoretic clustering, which arise by special choices of the Bregman divergence. The algorith…
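As a concrete illustration of the alternation the abstract describes, here is a minimal Python sketch of Bregman hard clustering. The function names are hypothetical; the only fact taken from the paper is its key observation that, for any Bregman divergence, the arithmetic mean is the optimal cluster representative, so the k-means update step carries over unchanged.

```python
import numpy as np

def bregman_hard_cluster(X, k, divergence, n_iters=100, seed=0):
    """Generic Bregman hard clustering (k-means-style alternation).

    X          : (n, d) data array
    k          : number of clusters
    divergence : callable d(X, mu) -> (n,) array of nonnegative values
    """
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Assignment step: each point goes to the centroid with the
        # smallest Bregman divergence.
        dists = np.stack([divergence(X, mu) for mu in centroids], axis=1)
        labels = dists.argmin(axis=1)
        # Update step: for any Bregman divergence, the optimal
        # representative of a cluster is its arithmetic mean.
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

# Squared Euclidean distance, the Bregman divergence generated by
# G(x) = ||x||^2, recovers classical k-means.
def sq_euclidean(X, mu):
    return ((X - mu) ** 2).sum(axis=1)
```

Plugging a different divergence callable into the same loop yields, e.g., information-theoretic clustering, which is exactly the unification the abstract claims.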

Cited by 848 publications (1,309 citation statements). References 48 publications (20 reference statements).
“…In particular, let G : Θ → R denote a strictly convex, twice-differentiable function. The divergence introduced by [7], B_G : Θ × Θ → R+, is

B_G(θ, θ′) = G(θ) − G(θ′) − ⟨∇G(θ′), θ − θ′⟩.

Bregman divergences are widely used in statistical inference, optimization, machine learning, and information geometry (see e.g. [2, 5]). Letting Ψ(·, ·) = B_G(·, ·), the mirror descent step is defined as

θ_{t+1} = argmin_{θ ∈ Θ} { η⟨g_t, θ⟩ + Ψ(θ, θ_t) }.…”

Section: Mirror Descent with Bregman Divergences
confidence: 99%
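A minimal sketch of the quoted construction. The closed-form step below assumes G is the negative entropy on the probability simplex, so that B_G is the KL divergence and the argmin reduces to a multiplicative-weights update; that specific choice of G and the function names are illustrative, not taken from the citing paper.

```python
import numpy as np

def bregman_divergence(G, grad_G, x, y):
    """B_G(x, y) = G(x) - G(y) - <grad G(y), x - y>."""
    return G(x) - G(y) - grad_G(y) @ (x - y)

def mirror_descent_step(theta, grad, eta):
    """One mirror descent step on the probability simplex with
    G(theta) = sum_i theta_i log theta_i (negative entropy), whose
    Bregman divergence is the KL divergence. Minimizing
    eta*<grad, theta> + B_G(theta, theta_t) then has the closed form
    of an exponentiated-gradient update followed by normalization."""
    w = theta * np.exp(-eta * grad)
    return w / w.sum()
```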
“…For the particular, and present, case in which the KL-divergence is used as the similarity measure, an approximate solution to the clustering problem can be computed with the K-means algorithm in polynomial time [40]. For a general treatment of clustering problems with similarity measures based on Bregman divergences, including the KL-divergence, see [6].…”
Section: Implementation Issues
confidence: 99%
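A hedged sketch of the quoted recipe: since the KL divergence is the Bregman divergence generated by negative entropy, it can be dropped into a k-means-style loop unchanged, e.g., into the hypothetical bregman_hard_cluster sketch above. The eps clipping is an implementation choice for distributions with zero entries, not a detail from [40].

```python
import numpy as np

def kl_rows(X, mu, eps=1e-12):
    """KL(x || mu) for each row x of X, where rows of X and mu are
    probability vectors. eps guards against log(0)."""
    X = np.clip(X, eps, None)
    mu = np.clip(mu, eps, None)
    return (X * np.log(X / mu)).sum(axis=1)

# Usage with the clustering sketch above (assumed to be in scope):
# labels, centroids = bregman_hard_cluster(histograms, k=10, divergence=kl_rows)
```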
“…We use the symmetric KL-divergence as the distance measure for both training and querying, since it performs better than the L2-norm for HoG descriptors [27]. The KL-divergence can be incorporated into the k-means clustering framework because it is a Bregman divergence [28]. For more robustness, we use soft-assignment of descriptors to the 3 nearest centroids, as in [25].…”
Section: Database Retrieval
confidence: 99%
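A rough sketch of the quoted retrieval setup. The exponential weighting of the soft assignment and all function names are assumptions for illustration; the cited papers [25, 27] are not quoted in enough detail here to pin down their exact weighting scheme.

```python
import numpy as np

def symmetric_kl(X, mu, eps=1e-12):
    """Symmetric KL, KL(x || mu) + KL(mu || x), for each row x of X."""
    X = np.clip(X, eps, None)
    mu = np.clip(mu, eps, None)
    return (X * np.log(X / mu)).sum(axis=1) + (mu * np.log(mu / X)).sum(axis=1)

def soft_assign(X, centroids, m=3):
    """Soft-assign each descriptor to its m nearest centroids, with
    weights decaying in the divergence (an illustrative choice)."""
    D = np.stack([symmetric_kl(X, mu) for mu in centroids], axis=1)  # (n, k)
    nearest = np.argsort(D, axis=1)[:, :m]                           # (n, m)
    d = np.take_along_axis(D, nearest, axis=1)
    w = np.exp(-d)
    w /= w.sum(axis=1, keepdims=True)
    return nearest, w
```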