2022
DOI: 10.1109/tnnls.2020.3045932
Agglomerative Neural Networks for Multiview Clustering

Abstract: Conventional multi-view clustering methods seek a view consensus by minimizing the pairwise discrepancy between the consensus and the subviews. However, pairwise comparison cannot portray the inter-view relationship precisely when some of the subviews can be further agglomerated. To address this challenge, we propose an agglomerative analysis to approximate the optimal consensus view, thereby describing the subview relationship within a view structure. We present the Agglomerative Neural Network (ANN)…
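The pairwise-discrepancy baseline the abstract critiques can be illustrated with a minimal numpy sketch (this is a hypothetical illustration, not the paper's ANN): under a squared Frobenius discrepancy, the consensus that minimizes the total pairwise gap to the subview similarity matrices is simply their mean.

```python
import numpy as np

# Hypothetical toy setup: three symmetric "subview" similarity matrices.
rng = np.random.default_rng(0)
subviews = [rng.random((5, 5)) for _ in range(3)]
subviews = [(s + s.T) / 2 for s in subviews]  # symmetrize each subview

# Squared-Frobenius total discrepancy between a candidate consensus and all subviews.
def total_gap(c, views):
    return sum(np.linalg.norm(c - s) ** 2 for s in views)

# The element-wise mean minimizes the sum of squared Frobenius gaps.
consensus = np.mean(subviews, axis=0)
```

The abstract's point is that this flat pairwise objective ignores whether some subviews should first be agglomerated with each other before contributing to the consensus.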

Cited by 10 publications (4 citation statements)
References 34 publications
“…If no cluster number is specified, COMMO will automatically determine the number of clusters using the k-nearest neighbor algorithm (see Supplementary Section S1.1 ). Second, eight clustering methods are used to identify gene modules, including FLAME (Fuzzy clustering by Local Approximation of Memberships) ( Fu and Medico 2007 ), K-means ( Timmerman et al 2013 ), SOM (self-organizing mapping) ( Wang et al 2002 ), spectral clustering ( Zhang et al 2021 ), Agglomerative ( Liu et al 2022b ), Hclust ( Bu et al 2022 ), NMF (non-negative matrix factorization) ( Liefeld et al 2023 ), and ICA (independent component analysis) ( Hyvärinen 2013 ). Detailed descriptions of these eight methods can be found in the Supplementary Materials .…”
Section: Methods
confidence: 99%
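Two of the module-detection methods named in that statement can be sketched in a few lines. The following is a toy illustration (not COMMO's actual pipeline): a bottom-up agglomerative clusterer with single linkage that merges the two closest clusters until the requested number remains.

```python
import numpy as np

# Toy single-linkage agglomerative clustering (illustrative only).
def agglomerative(points, n_clusters):
    clusters = [[i] for i in range(len(points))]  # start: one cluster per point
    while len(clusters) > n_clusters:
        best = None
        # Find the pair of clusters with the smallest single-linkage distance.
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(np.linalg.norm(points[i] - points[j])
                        for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] += clusters.pop(b)  # merge the closest pair
    return clusters

# Two tight pairs of points: agglomeration recovers the two groups.
pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
groups = agglomerative(pts, 2)
```

Production code would instead use an optimized implementation (e.g. scikit-learn's `AgglomerativeClustering`), since this naive pairwise search is O(n³) per merge.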
“…ProtT5-XL-U50 (hereafter ProtT5) 17 and Mole-BERT 20 , pre-trained language models, were used to characterize the enzymes and substrates, respectively. One-hot encoding 38,39 was used to encode the organism, and radial basis function (RBF) 40,41 was used to encode pH and temperature. The extracted feature vectors were then concatenated for use as inputs to the downstream CGC multi-task model.…”
Section: MPEK Framework
confidence: 99%
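The radial basis function (RBF) encoding of scalar conditions mentioned above can be sketched as follows. This is a hedged illustration: the centers, width, and pH grid are assumptions for demonstration, not the cited framework's actual settings. A scalar such as pH is expanded into Gaussian activations over a grid of centers, giving the downstream model a smooth, distributed representation instead of a single raw number.

```python
import numpy as np

# Expand a scalar into Gaussian activations over a grid of centers.
def rbf_encode(value, centers, gamma=1.0):
    return np.exp(-gamma * (value - centers) ** 2)

centers = np.linspace(0, 14, 8)   # assumed grid spanning the pH range
feat = rbf_encode(7.0, centers)   # 8-dimensional feature for pH 7.0
```

The activation is largest at the center nearest the input value and decays smoothly with distance, which is why RBF features suit continuous conditions like pH and temperature better than one-hot bins.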
“…This paper validates AP's performance through a comprehensive comparison with nine other prominent unsupervised clustering algorithms. These clustering algorithms include distance-based methods such as KM [20] and MBKM [21], density-based techniques such as Density-Based Spatial Clustering of Applications with Noise (DBSCAN) [22] and Ordering Points to Identify the Clustering Structure (OPTICS) [23], and hierarchical methods such as Agglomerative Hierarchical Clustering (AHC) [24] and Balanced Iterative Reducing and Clustering using Hierarchies (BIRCH) [25]. Additionally, model-based Gaussian Mixture Models (GMM) algorithms [26], [27], kernel-based Mean Shift (MS) [28], and SC [29] are evaluated.…”
Section: Affinity Propagation Clustering Model
confidence: 99%
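One of the distance-based baselines named in that comparison, k-means (KM), can be sketched with Lloyd's algorithm. This is a toy illustration with an assumed deterministic initialization, not the cited paper's experimental setup.

```python
import numpy as np

# Minimal Lloyd's k-means (illustrative; real comparisons would use a tuned library).
def kmeans(X, k, iters=20):
    # Deterministic init: k points evenly spaced by row index.
    idx = np.linspace(0, len(X) - 1, k).astype(int)
    centers = X[idx].copy()
    for _ in range(iters):
        # Assign each point to its nearest center.
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return labels, centers

# Two well-separated blobs: k-means recovers the split.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (10, 2)),
               rng.normal(5.0, 0.1, (10, 2))])
labels, centers = kmeans(X, 2)
```

Unlike AP, k-means requires the cluster count up front and is sensitive to initialization, which is exactly the kind of trade-off such comparison studies probe.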