2023
DOI: 10.3390/s23042362
Unsupervised Learning of Disentangled Representation via Auto-Encoding: A Survey

Abstract: In recent years, the rapid development of deep learning approaches has paved the way to explore the underlying factors that explain the data. In particular, several methods have been proposed to learn to identify and disentangle these underlying explanatory factors in order to improve the learning process and model generalization. However, extracting this representation with little or no supervision remains a key challenge in machine learning. In this paper, we provide a theoretical outlook on recent advances …

Cited by 9 publications (4 citation statements)
References 75 publications
“…They are also suited for nonlinear dimensionality reduction, e.g., for data visualization [8] or data embedding for downstream tasks [9, 10]. Autoencoders enable unsupervised representation learning [11]. For example, they have been used to generate data-driven molecular vector representations based on SMILES strings [12].…”
Section: Figure
confidence: 99%
“…Disentangled representation learning: Disentangled representation learning [1, 2, 17, 18] refers to the process of extracting a low-dimensional representation of the data that captures the most abstract and independent factors of variation. The intention behind this representation is to improve performance on a variety of downstream tasks such as video prediction.…”
Section: Related Work
confidence: 99%
“…Comprising two main components, the encoder and the decoder, the autoencoder operates by first reducing the dimensionality of the data, thereby reducing the volume of data and facilitating data compression. Subsequently, the decoder reconstructs the data to closely resemble the input data [30, 31]. Leveraging its capacity for dimensionality reduction and reconstruction, autoencoders find widespread application in data compression.…”
Section: Introduction
confidence: 99%
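The encoder/decoder pipeline described in the excerpt above can be sketched with a minimal linear autoencoder trained by gradient descent. This is an illustrative toy example, not code from the surveyed paper: all names, data sizes, and hyperparameters here are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples in R^8 that secretly lie on a 2-D subspace,
# so a 2-D bottleneck can reconstruct them well.
basis = rng.normal(size=(2, 8))
X = rng.normal(size=(200, 2)) @ basis

W_enc = rng.normal(scale=0.1, size=(8, 2))  # encoder: 8-D input -> 2-D code
W_dec = rng.normal(scale=0.1, size=(2, 8))  # decoder: 2-D code -> 8-D output


def mse(a, b):
    """Mean squared reconstruction error."""
    return float(np.mean((a - b) ** 2))


mse_initial = mse(X @ W_enc @ W_dec, X)

lr = 0.01
for _ in range(2000):
    Z = X @ W_enc        # encode: dimensionality reduction to the bottleneck
    X_hat = Z @ W_dec    # decode: reconstruct the input from the code
    err = X_hat - X
    # Gradients of the mean squared reconstruction error w.r.t. each weight.
    grad_dec = (Z.T @ err) / len(X)
    grad_enc = (X.T @ (err @ W_dec.T)) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

mse_final = mse(X @ W_enc @ W_dec, X)
```

In practice the encoder and decoder are nonlinear neural networks; the purely linear case above is the setting in which an autoencoder recovers a PCA-like subspace, which makes the compression/reconstruction roles of the two components easy to see.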