2008
DOI: 10.1371/journal.pone.0002247

Global Considerations in Hierarchical Clustering Reveal Meaningful Patterns in Data

Abstract: Background: A hierarchy, characterized by tree-like relationships, is a natural method of organizing data in various domains. When considering an unsupervised machine learning routine, such as clustering, a bottom-up hierarchical (BU, agglomerative) algorithm is used as a default and is often the only method applied. Methodology/Principal Findings: We show that hierarchical clustering that involves global considerations, such as top-down (TD, divisive) or glocal (global-local) algorithms, is better suited to reveal…
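The bottom-up (agglomerative) default that the abstract contrasts with global methods can be illustrated with a short sketch. The snippet below uses SciPy's agglomerative routine on toy 2-D data; the data, the parameter choices (average linkage, Euclidean metric), and the two-cluster cut are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Toy data: two well-separated 2-D blobs of 10 points each.
data = np.vstack([rng.normal(0, 0.3, (10, 2)),
                  rng.normal(3, 0.3, (10, 2))])

# Bottom-up (agglomerative) clustering: repeatedly merge the two
# closest clusters, here under average linkage on Euclidean distances.
Z = linkage(data, method="average", metric="euclidean")

# Cut the resulting dendrogram into two flat clusters.
labels = fcluster(Z, t=2, criterion="maxclust")
print(sorted(set(labels)))  # two distinct cluster ids
```

Note that the merge order is driven purely by local pairwise distances, which is exactly the property the paper argues can miss globally meaningful structure.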

Cited by 17 publications
(14 citation statements)
references
References 34 publications
“…A similar deconvolution approach previously yielded a resolution of 0.5 nm. [51] We performed a two-stage HCA.…”
Section: Population Deconvolution Based On DL Values
confidence: 99%
“…[17] HCA Matlab programs were used to perform HCA, according to the literature. [51] We performed a two-stage HCA. In the first stage, we evaluated the distance between the two molecules (i and j) according to the formula √…”
Section: Population Deconvolution Based On DL Values
confidence: 99%
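The excerpt above evaluates a pairwise distance between molecules i and j before clustering, but the square-root expression is truncated in the excerpt. As a sketch only, the code below assumes the standard Euclidean distance d(i, j) = √(Σₖ (xᵢₖ − xⱼₖ)²); the actual formula in the citing work may differ.

```python
import numpy as np

def pairwise_euclidean(X):
    """N x N matrix of Euclidean distances d(i, j) = sqrt(sum_k (x_ik - x_jk)^2).

    This Euclidean form is an assumption standing in for the truncated
    formula in the quoted excerpt.
    """
    diff = X[:, None, :] - X[None, :, :]       # shape (N, N, k) via broadcasting
    return np.sqrt((diff ** 2).sum(axis=-1))   # shape (N, N)

# Hypothetical feature vectors for three "molecules".
X = np.array([[0.0, 0.0], [3.0, 4.0], [6.0, 8.0]])
D = pairwise_euclidean(X)
print(D[0, 1])  # → 5.0
```

Such a matrix is the usual input to the first stage of a two-stage HCA: distances are computed once, then the hierarchy is built on top of them.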
“…Organizing complex datasets into hierarchies is a common strategy in many fields, and hierarchical clustering has been successfully applied to many neurobiologically oriented questions. It is assumed that bottom‐up (agglomerative) hierarchical algorithms have some advantages in identifying local relations in the data, whereas top‐down methods, such as divisive hierarchical algorithms, better capture global patterns (Varshavsky, Horn & Linial 2008). By using an assumption‐free data‐mining approach with divisive hierarchical algorithms, we identified brain sites of potential importance in mediating alcohol and drug effects that are likely to be relevant to drug reinforcement, and then applied agglomerative clustering to obtain a functional neurochemical neurocircuitry that provides a comprehensive structural framework for large‐scale mathematical modeling.…”
Section: Introduction
confidence: 99%
“…[82][83][84] Thus, entities within an accordingly identified cluster are more similar, relative to those appearing in other aggregations. Such clusters may e.g.…”
Section: Multivariate Versus Classical Data Analysis To Food Consumpt…
confidence: 99%
“…Algorithms for divisive (top-down) hierarchical clustering (DHC), on the other hand, which iteratively split all the N samples into a hierarchy of subclusters, are more computation-intensive but typically create more reliable groups at higher branching levels. [83][84][85] In analogy with standard agglomerative clustering, most DHC methods still create binary divisions (or, alternatively, a fixed predefined number of divisions) at each branching point. Flexible DHC algorithms that allow multiple branches at each branching level, however, have greater potential to reveal more meaningful groupings that are also easier to interpret.…”
Section: Figure 14 Hierarchical Cluster Dendrogram - The Danish Data…
confidence: 99%
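The divisive (top-down) strategy discussed in the excerpt above can be sketched by recursive bisection. The code below implements a simple bisecting 2-means split, which is one common way to realize DHC and is chosen here as an assumption; the flexible multi-branch algorithms the excerpt describes (and the glocal methods of the cited paper) are more general.

```python
import numpy as np

def divisive_split(X, idx, depth, max_depth, leaves):
    """Recursively split the points indexed by `idx` into two subclusters.

    A toy top-down scheme: each node is bisected with a crude 2-means,
    seeded by the two mutually farthest points, until `max_depth`.
    """
    if depth == max_depth or len(idx) < 2:
        leaves.append(idx)
        return
    pts = X[idx]
    # Seed the two centers with the farthest-apart pair of points.
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    i, j = np.unravel_index(d.argmax(), d.shape)
    centers = pts[[i, j]].astype(float)
    for _ in range(10):  # a few Lloyd iterations of 2-means
        assign = np.linalg.norm(pts[:, None] - centers[None, :],
                                axis=-1).argmin(axis=1)
        for c in (0, 1):
            if (assign == c).any():
                centers[c] = pts[assign == c].mean(axis=0)
    for c in (0, 1):
        divisive_split(X, idx[assign == c], depth + 1, max_depth, leaves)

rng = np.random.default_rng(1)
# Toy data: two well-separated 2-D blobs of 8 points each.
X = np.vstack([rng.normal(0, 0.2, (8, 2)), rng.normal(5, 0.2, (8, 2))])
leaves = []
divisive_split(X, np.arange(len(X)), 0, 1, leaves)
print([sorted(leaf.tolist()) for leaf in leaves])
```

Because every split considers all points under the current node at once, each division reflects the global shape of that subset, which is the property the excerpt credits for more reliable groups at higher branching levels.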