2017
DOI: 10.1007/s11222-017-9742-x

A reweighting approach to robust clustering

Abstract: An iteratively reweighted approach to robust clustering is presented in this work. The method is initialized with a very robust clustering partition based on a high trimming level. The initial partition is then refined to reduce the number of wrongly discarded observations and to substantially increase efficiency. Simulation studies and real data examples indicate that the final clustering solution is both robust and efficient, and naturally adapts to the true underlying contamination level.
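The idea in the abstract — start from a heavily trimmed partition, then progressively re-admit observations that lie close to the current centroids — can be sketched as follows. This is a simplified illustration, not the authors' exact algorithm: the Euclidean distances, the quantile-based trimming rule, the geometric schedule for lowering the trimming level, and the function name are all assumptions made for the sketch.

```python
import numpy as np

def reweighted_kmeans(X, k, alpha0=0.5, n_refine=10, init=None, seed=0):
    """Sketch of a reweighting approach to robust clustering:
    begin with a heavily trimmed partition (trimming level alpha0),
    then gradually lower the trimming level so that wrongly
    discarded observations are re-admitted."""
    rng = np.random.default_rng(seed)
    n = len(X)
    centers = (X[rng.choice(n, k, replace=False)] if init is None
               else np.asarray(init, dtype=float).copy())
    alpha = alpha0
    for _ in range(n_refine):
        # squared Euclidean distance of every point to every centroid
        d2 = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        lab = d2.argmin(1)
        dmin = d2[np.arange(n), lab]
        # trim the alpha fraction of points farthest from their centroid
        keep = dmin <= np.quantile(dmin, 1 - alpha)
        for j in range(k):
            mask = keep & (lab == j)
            if mask.any():
                centers[j] = X[mask].mean(0)
        # lower the trimming level geometrically (assumed schedule)
        alpha = max(0.8 * alpha, 0.05)
    return lab, keep, centers
```

With two well-separated Gaussian clusters plus a few gross outliers, the final partition retains nearly all clean observations while the outliers remain trimmed.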


Cited by 26 publications (19 citation statements). References 34 publications.
“…As an open point for further research, an automatic procedure for selecting reasonable values for the labelled and unlabelled trimming levels, along the lines of Dotto et al (2018), is under study. Additionally, a robust wrapper variable selection for dealing with high-dimensional problems could be useful for further enhancing the discriminating power of the proposed methodology.…”
Section: Discussion
confidence: 99%
“…Nevertheless, exploratory tools such as the Density-Based Silhouette plot (Menardi, 2011) and trimmed likelihood curves (García-Escudero et al, 2011) could be employed to validate and assess the choice of α_l and α_u. A more automatic approach, like the one introduced in Dotto et al (2018), could also be adapted to our framework. This, however, goes beyond the scope of the present manuscript; it will nonetheless be addressed in the future.…”
Section: Methods
confidence: 99%
“…From the distance function aspect, metric learning is used to learn a robust metric that measures the similarity between two points while taking the outliers into account [5], [20]; the L1 norm models the outliers as a sparse component for cluster analysis [6], [21]. From the data aspect, the outliers are assigned small weights during the clustering process [22]; low-rank representation treats the data as a clean part plus outliers, and constrains the clean part to have the lowest rank [7]. From the model fusion aspect, ensemble clustering integrates different partitions into a consensus one to deliver a robust result [23], [24].…”
Section: Robust Clustering
confidence: 99%
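The down-weighting strategy mentioned in the excerpt above — outliers receiving small weights during clustering — can be illustrated with a single weighted centroid update. This is a generic sketch, not the scheme of any particular cited paper: the Huber-type weight function, the median-based scale estimate, and the function name are assumptions.

```python
import numpy as np

def weighted_centroid_update(X, centers, c=2.0):
    """One iteration of a down-weighting scheme: observations far from
    their nearest centroid receive small weights, so outliers barely
    move the centroids."""
    d2 = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
    lab = d2.argmin(1)
    d = np.sqrt(d2[np.arange(len(X)), lab])
    s = np.median(d) + 1e-12  # robust scale of within-cluster distances
    # weight 1 for points within c*s of their centroid, decaying beyond
    w = np.minimum(1.0, c * s / np.maximum(d, 1e-12))
    new_centers = np.array([
        np.average(X[lab == j], axis=0, weights=w[lab == j])
        if (lab == j).any() else centers[j]
        for j in range(len(centers))
    ])
    return new_centers, lab, w
```

A gross outlier gets a near-zero weight, so the centroid of the cluster it is assigned to shifts only negligibly.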
“…We note that by considering a crude rule like , we are likely not to consider all units belonging to a particular group. Therefore, a successive reweighting step (which, in the spirit of the FS, can be performed adaptively; see also Dotto, Farcomeni, García‐Escudero, & Mayo‐Iscar, ) is necessary for refining the tentative groups that have been found. A preliminary proposal, rooted in an exploratory framework, is described in Atkinson et al (, p. 369).…”
Section: Wild Adaptive Trimming for Robust Cluster Analysis
confidence: 99%