2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW) 2019
DOI: 10.1109/iccvw.2019.00075
Structuring Autoencoders

Abstract: In this paper we propose Structuring AutoEncoders (SAE). SAEs are neural networks which learn a low-dimensional representation of data and are additionally enriched with a desired structure in this low-dimensional space. While traditional Autoencoders have proven to structure data naturally, they fail to discover semantic structure that is hard to recognize in the raw data. The SAE solves the problem by enhancing a traditional Autoencoder using weak supervision to form a structured latent space. In the experimen…
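The abstract describes adding a structuring objective on top of the usual autoencoder reconstruction objective. The following is a minimal numpy sketch of that idea, not the paper's implementation: the linear encoder/decoder, the random weights, and the particular centroid-pulling form of the structuring term are all illustrative assumptions — the point is only that weak labels contribute a second loss term on the latent codes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two weakly labelled groups (the labels are the only supervision).
x = rng.normal(size=(8, 16))
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# Linear encoder/decoder as a stand-in for the networks (weights are random
# here; in practice both loss terms below would be minimised jointly).
w_enc = rng.normal(size=(16, 2)) * 0.1
w_dec = rng.normal(size=(2, 16)) * 0.1

z = x @ w_enc        # latent codes
x_hat = z @ w_dec    # reconstructions

# Standard autoencoder term: mean squared reconstruction error.
recon_loss = np.mean((x - x_hat) ** 2)

# Structuring term (hypothetical form): pull latents with the same weak label
# towards their class centroid, imposing the desired latent-space structure.
struct_loss = 0.0
for c in np.unique(labels):
    zc = z[labels == c]
    struct_loss += np.mean((zc - zc.mean(axis=0)) ** 2)

total_loss = recon_loss + 0.5 * struct_loss
```

The weighting factor 0.5 is likewise arbitrary; it trades off reconstruction fidelity against latent structure.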

Cited by 29 publications (10 citation statements)
References 19 publications
“…The method itself is not bound to faces or images at all. We also want to make NNs more interpretable and show the importance of such interpretations by working on both the forced disentanglement presented in this paper and also unsupervised disentanglement as shown with Structuring Autoencoders [28]. We are also interested in making our method non-deterministic, similar to a Markov Chain Neural Network [1] and we want to adapt our method to other data sets in 3D [33].…”
Section: Discussion
confidence: 99%
“…Many anomaly detection methods are based on generative models, such as autoencoders [24,21,30] and GANs [16], which are optimized to generate the normal data. These approaches detect anomalies by the inability of the generative model to reconstruct them.…”
Section: Generative Models
confidence: 99%
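The citation statement above describes detecting anomalies through the generative model's inability to reconstruct them. A minimal sketch of that reconstruction-error criterion, using PCA fitted on normal data as a stand-in for a trained autoencoder (the subspace dimension and the shifted test point are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Fit the "model" on normal data only: the top-3 principal components
# approximate the manifold an autoencoder would learn to reconstruct.
normal = rng.normal(size=(200, 10))
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:3]

def reconstruct(x):
    # Project onto the normal-data subspace and back.
    z = (x - mean) @ components.T
    return z @ components + mean

def anomaly_score(x):
    # Reconstruction error: large for inputs unlike the training data.
    return float(np.mean((x - reconstruct(x)) ** 2))

normal_score = anomaly_score(rng.normal(size=10))
anomalous_score = anomaly_score(rng.normal(size=10) + 8.0)  # far off-manifold
```

An input far from the training distribution cannot be reconstructed well, so its score exceeds that of a normal sample — the detection rule the cited approaches rely on.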
“…Modern variants of AE are able to enforce some distribution in latent space either using the variational inference approach [5,11] or using an auxiliary discriminator network [9]. The work [12] is closest to ours with respect to autoencoder-based representation learning. The authors used an approach which is capable of building informative representations and visualizations of the data.…”
Section: Related Work
confidence: 99%
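The variational-inference route mentioned above enforces a latent distribution by penalising the divergence between the encoder's posterior and a prior. A sketch of the closed-form KL term for a diagonal Gaussian posterior against a standard normal prior (the numeric values are purely illustrative):

```python
import numpy as np

# Encoder outputs for one sample: mean and log-variance of q(z|x) = N(mu, sigma^2).
mu = np.array([0.5, -0.2, 0.1])
log_var = np.array([-0.1, 0.3, 0.0])

# KL( N(mu, sigma^2) || N(0, I) ), computed in closed form per dimension.
kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
```

Minimising this term alongside the reconstruction loss pulls the latent codes towards the chosen prior, which is how these variants shape the latent space without labels.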