2021
DOI: 10.3390/e24010055

An Overview of Variational Autoencoders for Source Separation, Finance, and Bio-Signal Applications

Abstract: Autoencoders are self-supervised learning systems in which, during training, the output is an approximation of the input. Typically, an autoencoder has three parts: the encoder (which produces a compressed latent-space representation of the input data), the latent space (which retains the knowledge of the input data at reduced dimensionality while preserving as much information as possible), and the decoder (which reconstructs the input data from the compressed latent space). Autoencoders have found wide applications in dimensiona…
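A minimal sketch of the encoder / latent space / decoder structure described in the abstract, assuming PyTorch; the layer sizes and the 784-dimensional input are illustrative choices, not taken from the paper.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        # Encoder: compresses the input into a low-dimensional latent code
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Decoder: reconstructs the input from the latent code
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)      # latent-space representation
        return self.decoder(z)   # approximation of the input

# Self-supervised training: the reconstruction target is the input itself
model = Autoencoder()
x = torch.randn(16, 784)         # placeholder batch
loss = nn.MSELoss()(model(x), x)
loss.backward()
```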

Cited by 49 publications (25 citation statements) | References 65 publications

“…Synthetic data are generated using several methods, categorized as supervised or unsupervised algorithms. Deep learning (DL) techniques such as GANs or variational autoencoders (VAEs), machine learning techniques such as tree synthesizers, and non-machine-learning techniques such as Gaussian copulas all produce synthetic data modeled from a real dataset [4,16–18]. Machine learning methods, such as GAN models, create networks that are trained on the source “original” data and synthesize data by generating realistic data points similar to the original real data [19].…”
Section: Overview of Synthetic Data Methods (mentioning)
confidence: 99%
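As a concrete illustration of the non-machine-learning route named in the excerpt, here is a hedged sketch of a Gaussian copula synthesizer using NumPy and SciPy; the function name, column construction, and toy dataset are assumptions for illustration only, not a method from the cited papers.

```python
import numpy as np
from scipy.stats import norm

def gaussian_copula_synthesize(real, n_samples, seed=0):
    rng = np.random.default_rng(seed)
    n, d = real.shape
    # 1. Map each column to uniform scores via its empirical CDF (ranks),
    #    then to normal scores
    ranks = real.argsort(axis=0).argsort(axis=0)
    u = (ranks + 1) / (n + 1)
    z = norm.ppf(u)
    # 2. Estimate the correlation structure in the Gaussian space
    corr = np.corrcoef(z, rowvar=False)
    # 3. Draw new Gaussian points with the same correlation
    z_new = rng.multivariate_normal(np.zeros(d), corr, size=n_samples)
    # 4. Map back to the data scale via the inverse empirical CDF (quantiles)
    u_new = norm.cdf(z_new)
    synthetic = np.empty((n_samples, d))
    for j in range(d):
        synthetic[:, j] = np.quantile(real[:, j], u_new[:, j])
    return synthetic

# Toy "real" dataset of two correlated columns
rng = np.random.default_rng(1)
a = rng.standard_normal(500)
real = np.column_stack([a, 2 * a + rng.standard_normal(500)])
fake = gaussian_copula_synthesize(real, n_samples=1000)
```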
“…For example, a principal component analysis (PCA) plot reduces high-dimensional data to 2D or 3D, and the data are visualized by plotting the newly obtained feature values on the x-axis and the y-axis, as well as the z-axis (dimensionality reduction) [92–97]. In addition, multilayered autoencoders, a type of deep learning that compresses data in a recoverable form, are also used for dimensionality reduction [98]. For example, when high-dimensional multi-omics data with tens of thousands of feature values are input, these feature values are weighted differently and added together, and a new feature value is then generated through a nonlinear transformation using an activation function in a hidden layer (the encoder).…”
Section: Technological Application in Big Data and Deep Learning (mentioning)
confidence: 99%
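The encoder step described in the excerpt (a weighted sum of the input features followed by a nonlinear activation in a hidden layer) can be written in a few lines of NumPy; the dimensions below (20,000 omics features compressed to 64 hidden units) and the random weights are assumed purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(20_000)                 # one sample with ~20k feature values
W = rng.standard_normal((64, 20_000)) * 0.01    # weights (learned in practice, random here)
b = np.zeros(64)                                # biases (learned in practice)

# Each hidden unit: weighted sum of all input features + nonlinear activation
h = np.tanh(W @ x + b)
print(h.shape)   # (64,) -- the compressed hidden-layer representation
```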
“…Since autoencoders attempt to train an encoder and decoder with as little reconstruction loss as possible, they are susceptible to overfitting. Variational autoencoders (VAEs) address this concern by encoding the input as a distribution over the latent space [36]. After a VAE is trained, it can be used to generate new synthetic samples for a dataset.…”
Section: E. Generative Adversarial Network (GAN) (mentioning)
confidence: 99%
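A minimal sketch of the VAE idea in the excerpt, assuming PyTorch: the encoder outputs a mean and log-variance over the latent space, the KL term regularizes that distribution toward the standard-normal prior (which counters the overfitting mentioned above), and new synthetic samples are drawn from the prior after training. Layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, input_dim=784, latent_dim=16):
        super().__init__()
        self.enc = nn.Linear(input_dim, 256)
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, input_dim))

    def forward(self, x):
        h = torch.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization: sample from the encoded distribution
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    recon = nn.functional.mse_loss(x_hat, x, reduction="sum")
    # KL divergence to N(0, I): the regularizer that discourages overfitting
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

model = VAE()
x = torch.randn(4, 784)
x_hat, mu, logvar = model(x)
loss = vae_loss(x, x_hat, mu, logvar)

# After training, new synthetic samples come from decoding draws from the prior
with torch.no_grad():
    synthetic = model.dec(torch.randn(8, 16))
```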