2015
DOI: 10.48550/arxiv.1506.02351
Preprint

Stacked What-Where Auto-encoders

Abstract: We present a novel architecture, the "stacked what-where auto-encoders" (SWWAE), which integrates discriminative and generative pathways and provides a unified approach to supervised, semi-supervised and unsupervised learning without relying on sampling during training. An instantiation of SWWAE uses a convolutional net (Convnet) to encode the input, and employs a deconvolutional net (Deconvnet) (Zeiler et al., 2010) to produce the reconstruction. The objective function includes reconstruction terms that i…
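
To make the "what-where" idea concrete, here is a minimal, hypothetical sketch (not the authors' released code) of a one-layer encoder/decoder pair in PyTorch: max-pooling forwards the pooled activations (the "what") while its argmax switch locations (the "where") are handed to the deconvolutional decoder, which uses them for unpooling before reconstructing the input. The layer sizes and the single input-level reconstruction term are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySWWAE(nn.Module):
    """Minimal what-where auto-encoder sketch (illustrative, not the paper's exact model)."""
    def __init__(self):
        super().__init__()
        self.enc_conv = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2, return_indices=True)    # returns the "where" switches
        self.unpool = nn.MaxUnpool2d(2)                      # decoder reuses those switches
        self.dec_conv = nn.ConvTranspose2d(16, 1, kernel_size=3, padding=1)

    def forward(self, x):
        h = F.relu(self.enc_conv(x))
        what, where = self.pool(h)     # "what": pooled values, "where": argmax locations
        up = self.unpool(what, where)  # unpooling guided by the encoder's "where"
        recon = self.dec_conv(up)
        return recon, what

# Hypothetical usage: an input-level L2 reconstruction term only; the paper's full
# objective also includes intermediate reconstruction and supervised terms.
x = torch.randn(8, 1, 28, 28)
model = TinySWWAE()
recon, what = model(x)
loss_recon = F.mse_loss(recon, x)
```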

Cited by 83 publications (80 citation statements)
References 18 publications
“…Table 4 shows our results on three of these ten folds. The result of SWWAE and CC-GAN are cited from Zhao et al (2015) and Denton et al (2016) respectively. We achieve better results compared to numerous methods.…”
Section: SVHN and STL-10 (mentioning)
confidence: 99%
“…The training sets of ImageNet-10 and ImageNet-Dogs [9], which are subsets of ImageNet [31], are used for evaluation. In the manner of the researches [16,21,24] which impose hyperbolic geometry on the activations of neural networks, we used the activations of PICA [23], one of the most recent models developed for deep image clustering. After obtaining activations from the pre-trained networks of PICA, we built the graph by mutual k nearest neighbors between activations.…”
[A results table spilled into this statement; values reproduced as printed, with column headers lost in extraction:
[74]: 0.274 0.151 0.076 0.111 0.038 0.013
AC [19]: 0.242 0.138 0.067 0.139 0.037 0.021
NMF [6]: 0.230 0.132 0.065 0.118 0.044 0.016
AE [3]: 0.317 0.210 0.152 0.185 0.104 0.073
CAE [37]: 0.253 0.134 0.068 0.134 0.059 0.022
SAE [43]: 0.325 0.212 0.174 0.183 0.112 0.072
DAE [64]: 0.304 0.206 0.138 0.190 0.104 0.078
DCGAN [50]: 0.346 0.225 0.157 0.174 0.121 0.078
DeCNN [73]: 0.313 0.186 0.142 0.175 0.098 0.073
SWWAE [75]: 0.323 0.176 0.160 0.158 0.093 0.076
VAE [26]: 0.334 0.193 0.168 0.179 0.107 0.079
JULE [71]: 0.300 0.175 0.138 0.138 0.054 0.028
DEC [68]: 0.381 0.282 0.203 0.195 0.122 0.079
DAC [9]: 0.527 0.394 0.302 0.275 0.219 0.111
DDC [8]: 0.577 0.433 0.345 --- --- ---
DCCM [67]: 0… (truncated)]
Section: Image Clustering (mentioning)
confidence: 99%
“…After obtaining activations from the pre-trained networks of PICA, we built the graph by mutual k nearest neighbors between activations. Then, both the activations and the graph were used as inputs of HGCAE-P. Extensive baselines and state-of-the-art image clustering methods [35,74,19,6,3,37,43,64,50,73,75,26,71,68,9,8,67,23] were compared. Furthermore, we also trained two auto-encoder models, GAE [27], and hyperbolic auto-encoder (HAE) whose layers are hyperbolic feed-forward layers [16].…”
Section: Image Clustering (mentioning)
confidence: 99%
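
The graph construction step quoted above (mutual k-nearest neighbours over network activations) can be sketched as follows. This is a hedged illustration of the general technique, not the cited paper's implementation; the function name, the choice of k, and the use of scikit-learn are assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def mutual_knn_graph(activations: np.ndarray, k: int = 10) -> np.ndarray:
    """Adjacency matrix keeping edge (i, j) only if i and j appear in each other's k-NN lists."""
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(activations)
    _, idx = nbrs.kneighbors(activations)          # idx[:, 0] is each point itself
    n = activations.shape[0]
    knn = np.zeros((n, n), dtype=bool)
    rows = np.repeat(np.arange(n), k)
    knn[rows, idx[:, 1:].ravel()] = True           # directed k-NN adjacency
    return (knn & knn.T).astype(np.float32)        # keep only mutual (symmetric) edges

# Example: a symmetric adjacency over 100 random 64-dimensional activations.
adj = mutual_knn_graph(np.random.randn(100, 64), k=10)
```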
“…Recently, RL has been used for different path planning tasks such as navigation [33,34,35,36,37,38,39], localization [40,41], and mapping [42,43]. RL has also been applied for coverage in different contexts.…”
Section: Introduction (mentioning)
confidence: 99%