2015
DOI: 10.1007/978-3-319-24465-5_8

Diversity-Driven Widening of Hierarchical Agglomerative Clustering

Abstract: In this paper we show that diversity-driven widening, the parallel exploration of the model space with a focus on developing diverse models, can improve hierarchical agglomerative clustering. Depending on the selected linkage method, the model found through the widened search achieves a better silhouette coefficient than its sequentially built counterpart.
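The silhouette coefficient mentioned in the abstract measures, for each point, how close it is to its own cluster versus the nearest other cluster. A minimal, self-contained sketch of the metric (not the authors' code; the 1-D data and clusterings are invented for illustration):

```python
# Illustrative sketch of the silhouette coefficient used to compare
# clusterings. Distance is absolute difference, since the toy data is 1-D.

def silhouette(points, labels):
    """Mean silhouette score over all points."""
    n = len(points)
    dist = lambda a, b: abs(a - b)
    scores = []
    for i in range(n):
        same = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not same:  # singleton cluster: silhouette is defined as 0
            scores.append(0.0)
            continue
        # a(i): mean distance to the other members of i's own cluster
        a = sum(dist(points[i], points[j]) for j in same) / len(same)
        # b(i): mean distance to the nearest other cluster
        b = float("inf")
        for lab in set(labels) - {labels[i]}:
            other = [j for j in range(n) if labels[j] == lab]
            b = min(b, sum(dist(points[i], points[j]) for j in other) / len(other))
        scores.append((b - a) / max(a, b))
    return sum(scores) / n

pts = [0.0, 0.2, 0.4, 5.0, 5.1, 5.3]
good = [0, 0, 0, 1, 1, 1]  # matches the two obvious groups: score near 1
bad  = [0, 1, 0, 1, 0, 1]  # mixes the groups: score near or below 0
```

A clustering whose silhouette coefficient is higher, as in the `good` labeling above, is the sense in which the widened search finds a "better" model.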

Cited by 8 publications (10 citation statements)
References 17 publications
“…The widening framework terms this Top-k-widening, i.e., M_{i+1} = s_{Top-k}(r(M_i)) : |M_{i+1}| = k. WIDENING begins to widen the search paths beyond a simple greedy mechanism when diversity is brought into play. The notion of diversity can be implemented in either the refining step as in [24,25] or in the selection step as in [11,12]. Given a diverse refinement operator, r_Δ(·), as in [24,25], where a diversity function, Δ, is imposed on the output, DIVERSE TOP-K WIDENING is described by…”
Section: Widening
confidence: 99%
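The excerpt above describes the iteration M_{i+1} = s_{Top-k}(r(M_i)): refine every current model, then select k survivors with a diversity constraint. A hedged sketch of that loop, where the models (sets of integers), the refinement operator, the score, and the Jaccard-based diversity threshold are all invented for illustration and stand in for problem-specific choices:

```python
# Toy DIVERSE TOP-K WIDENING: models are frozensets of integers.

def refine(model):
    """r(M): all one-element extensions of a model (toy refinement operator)."""
    return [model | {x} for x in range(10) if x not in model]

def score(model):
    """Toy quality measure: larger elements are 'better'."""
    return sum(model)

def jaccard_distance(a, b):
    return 1 - len(a & b) / len(a | b)

def select_diverse_top_k(candidates, k, min_dist=0.2):
    """s_{Top-k} with a greedy diversity constraint: take the best-scoring
    model, then repeatedly take the best remaining model that is at least
    min_dist (Jaccard) away from everything already selected."""
    ranked = sorted(candidates, key=score, reverse=True)
    selected = [ranked[0]]
    for m in ranked[1:]:
        if len(selected) == k:
            break
        if all(jaccard_distance(m, s) > min_dist for s in selected):
            selected.append(m)
    return selected

models = [frozenset({0})]          # M_0: a single starting model
for _ in range(3):                 # three widening iterations
    candidates = [m2 for m in models for m2 in refine(m)]
    models = select_diverse_top_k(candidates, k=3)
```

With k = 1 and no diversity constraint this degenerates to the ordinary greedy search; the diversity filter is what forces the k parallel search paths apart.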
“…Not Faster." Although the demonstrated examples, such as WIDENED KRIMP [24], WIDENED HIERARCHICAL CLUSTERING [11], WIDENED BAYESIAN NETWORKS [25] and BUCKET SELECTION [12] have been able to find superior solutions, i.e., "better," they have been unable to demonstrate this ability in a run-time that is comparable to the standard versions of the greedy algorithms. "Not faster" is not intended to mean "slower.…”
Section: Introduction
confidence: 99%
“…For set covering Ivanova et al [9] use the Jaccard coefficient, Sampson et al use the Frobenius Norm of the difference of the graphs' Laplacians to compare Bayesian networks [12] and an optimization based on p-dispersion-min-sum for KRIMP [13]. Fillbrunn et al [7] compare incomplete hierarchical clustering trees by using the Robinson-Foulds metric. Most selection strategies presented in those publications are either computationally too expensive to be feasible for use in greedy algorithms due to them having to sort the models or build distance matrices, or they require extensive communication between workers.…”
Section: Related Work
confidence: 99%
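The Robinson-Foulds metric mentioned in the excerpt compares two trees by counting the clades (clusters of leaves under an internal node) present in one tree but not the other. A minimal sketch under the simplifying assumption that each tree is represented as a set of frozensets of leaf labels:

```python
# Toy Robinson-Foulds distance between two hierarchies over the same leaves.
# A tree is a set of clades; each clade is a frozenset of leaf labels.

def robinson_foulds(tree_a, tree_b):
    """Size of the symmetric difference between the two trees' clade sets."""
    return len(tree_a ^ tree_b)

# Two hierarchies over the leaves {a, b, c, d}:
t1 = {frozenset("ab"), frozenset("cd"), frozenset("abcd")}   # ((a,b),(c,d))
t2 = {frozenset("ab"), frozenset("abc"), frozenset("abcd")}  # (((a,b),c),d)

print(robinson_foulds(t1, t1))  # 0: identical trees
print(robinson_foulds(t1, t2))  # 2: {c,d} and {a,b,c} each occur in only one tree
```

Because the distance is just a set operation on clades, it is cheap enough to compare many partially built clustering trees, which is the property the citing paper weighs against the cost of sorting models or building full distance matrices.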
“…Nevertheless, widening has shown very promising results for problems that have a suitable model-dependent distance measure. Examples of such problems include the set cover problem [9], KRIMP [12], Bayesian networks [13], and hierarchical agglomerative clustering [7].…”
Section: Introduction
confidence: 99%
“…This enables the system as a whole to avoid local optima and potentially find better models than the greedy learning algorithm would otherwise find. Previous work [13,29] has demonstrated its viability on real world algorithms. This work builds on that with an application to the superexponentially-sized [28] hypothesis space of learning Bayesian Networks.…”
Section: Introduction
confidence: 99%