Clustering with Bregman divergences has been used in the literature to unify centroid-based parametric clustering approaches and to allow the detection of nonspherical clusters in the data. Although empirically useful, the large-sample theoretical aspects of Bregman clustering techniques remain largely unexplored. In this paper, we attempt to bridge the gap between the theory and practice of centroid-based Bregman hard clustering by providing uniform deviation bounds on the clustering objective. Our theoretical analysis relies on the celebrated Vapnik–Chervonenkis (VC) theory, which, although extensively used in supervised learning contexts, remains largely unexplored for bounding empirical risks in unsupervised learning scenarios. As opposed to most theoretical works on clustering, our framework allows the number of features (p) to vary with the number of observations (n). The strong consistency of the sample cluster centroids, under standard assumptions in the literature, also follows as a corollary of this general framework. Furthermore, we show that the rate of convergence is at most of the order
$\mathcal{O}\left(\sqrt{\log n / n}\right)$, under standard regularity conditions.
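The algorithmic object being analysed here is the classical Bregman hard-clustering loop of Banerjee et al. (2005): because the arithmetic mean of a cluster minimizes the total Bregman divergence from its points, the Lloyd-style alternation of k-means carries over unchanged to any Bregman divergence. Below is a minimal sketch of that loop, not an implementation from the paper; the function names (`bregman_hard_clustering`, `i_divergence`) and all parameters are illustrative choices.

```python
import numpy as np

def bregman_hard_clustering(X, k, divergence, n_iter=100, seed=0):
    """Lloyd-style hard clustering under a Bregman divergence.

    Key property (Banerjee et al., 2005): the arithmetic mean minimizes
    the total Bregman divergence to the points of a cluster, so the
    centroid update is the same as in ordinary k-means.
    """
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: each point joins its Bregman-closest centroid.
        dists = np.stack([divergence(X, c) for c in centroids], axis=1)
        labels = dists.argmin(axis=1)
        # Update step: the new centroid is the plain mean of its cluster.
        new_centroids = np.stack([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

# Squared Euclidean distance recovers ordinary k-means ...
def squared_euclidean(X, c):
    return ((X - c) ** 2).sum(axis=1)

# ... while the generalized I-divergence (from phi(x) = sum x log x)
# suits nonnegative count-like data and yields nonspherical clusters.
def i_divergence(X, c, eps=1e-12):
    return (X * np.log((X + eps) / (c + eps)) - X + c).sum(axis=1)
```

Swapping `squared_euclidean` for `i_divergence` changes the cluster geometry without touching the loop, which is the unification the abstract refers to.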
Mean shift is a simple iterative procedure that gradually shifts data points towards the mode, that is, the point of highest data density in the region. Mean shift algorithms have been used effectively for data denoising, mode seeking, and finding the number of clusters in a dataset in an automated fashion. However, the merits of mean shift fade quickly as the data dimension increases and only a handful of features contain useful information about the cluster structure of the data. We propose a simple yet elegant feature-weighted variant of mean shift that efficiently learns the feature importance, thereby extending the merits of mean shift to high-dimensional data. The resulting algorithm not only outperforms the conventional mean shift clustering procedure but also preserves its computational simplicity. In addition, the proposed method comes with rigorous theoretical convergence guarantees and a convergence rate of at least cubic order. The efficacy of our proposal is thoroughly assessed through experimental comparison against baseline and state-of-the-art clustering methods on synthetic as well as real-world datasets.
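For concreteness, here is a minimal sketch of the Gaussian mean shift iteration under a fixed, user-supplied feature-weight vector; the paper's contribution is learning those weights, which this sketch deliberately does not reproduce. The name `weighted_mean_shift` and the parameters `bandwidth` and `tol` are illustrative assumptions, not the authors' API.

```python
import numpy as np

def weighted_mean_shift(X, w, bandwidth=1.0, n_iter=50, tol=1e-6):
    """Gaussian mean shift with a fixed feature-weight vector w.

    Each point is repeatedly moved to the kernel-weighted average of the
    data, where the kernel is evaluated under the feature-weighted squared
    distance sum_l w_l * (x_l - y_l)^2.  With w = (1, ..., 1) this reduces
    to ordinary Gaussian mean shift.
    """
    Y = X.copy()
    for _ in range(n_iter):
        # Pairwise feature-weighted squared distances, shape (n, n).
        diff = Y[:, None, :] - X[None, :, :]
        d2 = (w * diff ** 2).sum(axis=2)
        K = np.exp(-d2 / (2.0 * bandwidth ** 2))
        # Shift every point to the kernel-weighted mean of the data.
        Y_new = (K @ X) / K.sum(axis=1, keepdims=True)
        if np.max(np.abs(Y_new - Y)) < tol:
            return Y_new
        Y = Y_new
    return Y  # points collapse near the modes; merge nearby points to cluster
```

Note the iteration is O(n^2 p) per step, so this naive form is only meant to make the update rule explicit, not to scale.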
Despite being a well-known problem, feature weighting and feature selection remain a major predicament for clustering. Most algorithms that provide feature weighting or selection require the number of clusters to be known in advance. On the other hand, existing automatic clustering procedures that can determine the number of clusters are computationally expensive and often do not make room for feature weighting or selection. In this paper, we propose a Gibbs sampling-based algorithm for the Dirichlet process mixture model, which can determine the number of clusters and can also incorporate near-optimal feature weighting. We show that, in the limiting case, the algorithm approaches a hard clustering procedure that resembles minimization of an underlying clustering objective similar to weighted k-means with an additional penalty on the number of clusters, and hence retains the simplicity of Lloyd's heuristic. To avoid the trivial solution of the resulting linear program, we include an additional entropic penalty on the feature weights. The proposed algorithm is tested on several synthetic and real-life datasets. Through a detailed experimental analysis, we demonstrate the competitiveness of our proposal against baseline as well as state-of-the-art procedures for centre-based high-dimensional clustering.
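As a rough illustration of the limiting objective described above (not the authors' Gibbs sampler), the sketch below combines DP-means (Kulis and Jordan, 2012), the known small-variance limit of a Dirichlet process mixture sampler, which charges a penalty `lam` per cluster, with the standard entropic closed-form weight update w_l ∝ exp(-D_l / γ) obtained from the entropy penalty on the weights. All identifiers and the tuning parameters `lam` and `gamma` are hypothetical choices of ours.

```python
import numpy as np

def weighted_dp_means(X, lam, gamma, n_iter=50):
    """Hard clustering with a cluster-count penalty and entropic
    feature weights: a sketch of the objective
        sum_i sum_l w_l (x_il - mu_{c_i, l})^2 + lam * k
        + gamma * sum_l w_l log w_l,   with sum_l w_l = 1.
    """
    n, p = X.shape
    w = np.full(p, 1.0 / p)            # start from uniform feature weights
    centroids = [X.mean(axis=0)]       # DP-means starts with one cluster
    labels = np.zeros(n, dtype=int)
    for _ in range(n_iter):
        # Assignment: open a new cluster when the best weighted distance
        # to an existing centroid exceeds the penalty lam.
        for i in range(n):
            d = [(w * (X[i] - mu) ** 2).sum() for mu in centroids]
            j = int(np.argmin(d))
            if d[j] > lam:
                centroids.append(X[i].copy())
                labels[i] = len(centroids) - 1
            else:
                labels[i] = j
        # Drop empty clusters, relabel compactly, recompute centroids.
        keep = [j for j in range(len(centroids)) if np.any(labels == j)]
        remap = {j: r for r, j in enumerate(keep)}
        labels = np.array([remap[j] for j in labels])
        centroids = [X[labels == r].mean(axis=0) for r in range(len(keep))]
        # Weight update: per-feature within-cluster dispersion D_l, then
        # the entropic closed form w_l ∝ exp(-D_l / gamma).
        D = np.zeros(p)
        for r, mu in enumerate(centroids):
            D += ((X[labels == r] - mu) ** 2).sum(axis=0)
        w = np.exp(-(D - D.min()) / gamma)   # shift by D.min() for stability
        w /= w.sum()
    return labels, np.array(centroids), w
```

Without the entropy term the weight subproblem is linear in w and is trivially solved by putting all mass on the single least-dispersed feature; the exponential update above is exactly what the entropic penalty buys.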