2020
DOI: 10.1016/j.knosys.2020.106260

Deep clustering by maximizing mutual information in variational auto-encoder


Cited by 36 publications (19 citation statements: 0 supporting, 19 mentioning, 0 contrasting)
References 22 publications

“…Traditional unsupervised learning techniques such as K-means [12] cannot be applied directly to high-dimensional data because of the curse of dimensionality [33]. Recent work on unsupervised deep learning falls into two notable directions: generative learning, e.g., generative adversarial networks (GANs) [34] and variational auto-encoders (VAEs) [35], and contrastive learning, e.g., the simple framework for contrastive learning of visual representations (SimCLR) [36] and Simple Siamese networks (SimSiam) [37].…”
Section: Unsupervised Deep Learning (mentioning)
confidence: 99%
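
To make the curse-of-dimensionality point in the excerpt above concrete, here is a minimal, hedged sketch (not taken from any of the cited papers): K-means applied to raw 64-dimensional digit pixels versus K-means on a 10-dimensional embedding, with PCA standing in for a learned deep encoder. The dataset, target dimensions, and cluster count are illustrative assumptions.

    # Illustrative only: clustering raw high-dimensional data vs. a low-dimensional embedding.
    from sklearn.datasets import load_digits
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans
    from sklearn.metrics import adjusted_rand_score

    X, y = load_digits(return_X_y=True)   # 64-dimensional pixel vectors, 10 classes

    # K-means directly on the raw high-dimensional pixels
    raw_labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X)

    # K-means on a 10-dimensional embedding (PCA stands in for a deep encoder)
    Z = PCA(n_components=10, random_state=0).fit_transform(X)
    emb_labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(Z)

    print("ARI, raw pixels:", adjusted_rand_score(y, raw_labels))
    print("ARI, embedding :", adjusted_rand_score(y, emb_labels))

The adjusted Rand index against the true labels gives a rough sense of how much a compact representation helps the clustering step.
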
“…This necessitates dimensionality-reduction methods, which the cited works approach as follows. First, Xu et al. [28] develop Deep Clustering via Variational Auto-Encoder (DC-VAE), which maximizes mutual information. Deep clustering refers to jointly guiding the clustering process and learning representations of high-semantic, high-dimensional data via deep neural networks.…”
Section: Pre-processing and Dimension Reduction (mentioning)
confidence: 99%
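
As a rough illustration of the idea described in the excerpt above (jointly learning a representation and cluster assignments while maximizing mutual information), here is a hedged PyTorch sketch. It is not the authors' DC-VAE: the layer sizes, the cluster head, and the mutual-information estimator (entropy of the average assignment minus the average per-sample assignment entropy, a common RIM/IMSAT-style surrogate) are all assumptions made for illustration.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ClusteringVAE(nn.Module):
        """VAE with a soft cluster-assignment head; all sizes are illustrative."""
        def __init__(self, in_dim=784, latent_dim=10, n_clusters=10):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
            self.mu = nn.Linear(256, latent_dim)
            self.logvar = nn.Linear(256, latent_dim)
            self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))
            self.cluster_head = nn.Linear(latent_dim, n_clusters)

        def forward(self, x):
            h = self.enc(x)
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
            q = F.softmax(self.cluster_head(z), dim=1)               # soft cluster assignments
            return self.dec(z), mu, logvar, q

    def loss_fn(x, x_hat, mu, logvar, q, beta=1.0, gamma=1.0):
        recon = F.mse_loss(x_hat, x)                                   # reconstruction term
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # VAE KL term
        # Mutual information between inputs and cluster labels, estimated as
        # H(average assignment) minus the average per-sample assignment entropy.
        q_bar = q.mean(dim=0)
        h_marginal = -(q_bar * (q_bar + 1e-8).log()).sum()
        h_conditional = -(q * (q + 1e-8).log()).sum(dim=1).mean()
        mi = h_marginal - h_conditional
        return recon + beta * kl - gamma * mi  # maximizing MI means subtracting it

Cluster labels would then be read off as q.argmax(dim=1); the beta/gamma weights trading off the three terms are, again, assumptions rather than values from the paper.
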
“…Most papers report using only one dataset (14 papers), but the number varies from 1 to 45 datasets. The papers that perform experiments on more than three datasets are: Silva, Almeida, and Yamakami [24] (45 datasets), Liu et al. [15] (12 datasets), Liu and Guo [36] and Tellez et al. [18] (7 datasets), Zheng and Zheng [37] and Xu et al. [28] (6 datasets), Conover et al. [21] (5 datasets), and Shi and Lu [38] and Wei et al. [25] (4 datasets). We also identify that eight papers analyze three datasets, and five papers analyze two datasets.…”
Section: Characteristics of the Datasets Used in the Experiments (mentioning)
confidence: 99%
“…Machine learning methods have been used as powerful tools for feature detection/extraction and trend estimation/forecasting in distributed sensor network applications. Supervised machine learning methods, such as neural networks (NN), [1–18] convolutional neural networks (CNN), [19–35] and recurrent neural networks (RNN), [36–47] can be applied to prediction and classification, while unsupervised machine learning methods, such as the restricted Boltzmann machine (RBM), [48] deep belief network (DBN), deep Boltzmann machine (DBM), [49,50] auto-encoder (AE), [51–56] and denoising auto-encoder (DAE), can be utilized for data denoising and model generalization. Furthermore, reinforcement learning methods, including generative adversarial networks (GANs) [57–60] and deep Q-networks (DQNs), are widely used as tools in which generative networks and discriminative networks optimize the contesting process in a zero-sum…”
Citation type: mentioning
confidence: 99%
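
Since the excerpt above mentions denoising auto-encoders being utilized for data denoising, a minimal hedged sketch follows; the network sizes, noise level, and random stand-in data are assumptions, not anything from the cited survey.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DenoisingAE(nn.Module):
        def __init__(self, in_dim=784, hidden=128):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
            self.dec = nn.Linear(hidden, in_dim)

        def forward(self, x, noise_std=0.3):
            x_noisy = x + noise_std * torch.randn_like(x)  # corrupt the input
            return self.dec(self.enc(x_noisy))             # reconstruct the clean signal

    model = DenoisingAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.rand(32, 784)             # stand-in batch of clean samples
    opt.zero_grad()
    loss = F.mse_loss(model(x), x)      # target is the *clean* input
    loss.backward()
    opt.step()

The key design point is that the loss compares the reconstruction against the uncorrupted input, which forces the encoder to learn noise-robust features.
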