2022
DOI: 10.1007/s00422-022-00937-6

Autoencoders reloaded

Abstract: In Bourlard and Kamp (Biol Cybern 59(4):291–294, 1988), it was theoretically proven that autoencoders (AE) with a single hidden layer (previously called "auto-associative multilayer perceptrons") were, in the best case, implementing singular value decomposition (SVD) (Golub and Reinsch, in: Linear algebra, singular value decomposition and least squares solutions, pp 134–151, Springer, 1971), equivalent to principal component analysis (PCA) (Hotelling, Educ Psychol 24(6/7):417–441, 1933; Jolliffe, Principal component …
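The equivalence the abstract refers to is easy to check numerically. Below is a minimal sketch (my illustration, not code from the paper), assuming NumPy: a linear autoencoder with a single hidden layer of width k, trained by gradient descent on mean squared reconstruction error, reaches the same reconstruction error as the rank-k truncated SVD, i.e. PCA with k components.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X[:, 0] *= 3.0                 # give the toy data one dominant direction
X -= X.mean(axis=0)            # center the data, as PCA assumes
n, d, k = X.shape[0], X.shape[1], 2

# Optimal rank-k reconstruction via SVD (equivalently, PCA with k components).
U, S, Vt = np.linalg.svd(X, full_matrices=False)
err_svd = np.mean((X - X @ Vt[:k].T @ Vt[:k]) ** 2)

# Linear autoencoder: encoder W (d x k), decoder V (k x d), trained on MSE.
W = rng.normal(scale=0.1, size=(d, k))
V = rng.normal(scale=0.1, size=(k, d))
for _ in range(5000):
    E = X @ W @ V - X                 # reconstruction residual
    gW = (2 / n) * X.T @ E @ V.T      # gradient of MSE w.r.t. encoder weights
    gV = (2 / n) * W.T @ X.T @ E      # gradient of MSE w.r.t. decoder weights
    W -= 0.05 * gW
    V -= 0.05 * gV

err_ae = np.mean((X @ W @ V - X) ** 2)
print(f"rank-{k} SVD/PCA error: {err_svd:.4f}  linear AE error: {err_ae:.4f}")

Up to optimization tolerance the two errors coincide: the trained encoder spans the same subspace as the top-k principal components, which is the Bourlard–Kamp result the abstract builds on.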

Cited by 16 publications (2 citation statements). References 49 publications.
“…An autoencoder (encoder–decoder) is a deep learning method that uses unsupervised learning to perform encoding and decoding [31], [32]. Like an artificial neural network, it consists of input, hidden, and output layers [33].…”
Section: Auto Encoder Decoder (mentioning)
confidence: 99%
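As a concrete companion to the quoted description, here is a minimal sketch of such an input–hidden–output autoencoder, assuming PyTorch and stand-in layer sizes (my illustration, not code from the cited works). Training is unsupervised in exactly the sense quoted: the target is the input itself.

import torch
from torch import nn

class Autoencoder(nn.Module):
    def __init__(self, n_in: int = 784, n_hidden: int = 32):  # sizes are stand-ins
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())  # input -> hidden
        self.decoder = nn.Linear(n_hidden, n_in)                            # hidden -> output

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

# Unsupervised training loop: no labels, the reconstruction target is the input.
model = Autoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(64, 784)               # stand-in batch; real data would go here
for _ in range(100):
    loss = nn.functional.mse_loss(model(x), x)
    opt.zero_grad()
    loss.backward()
    opt.step()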
“…The underlying architecture consists of autoencoding layers which first encode (compress) the data into a lower-dimensional latent space, after which the data is decoded (reconstructed) into its original dimension. Here, the aim of the encoder phase is not only to reduce the data dimensionality, but to compress the data by removing redundant information while keeping the information most relevant to the research question in this reduced representation [37]. While a traditional AE minimizes the reconstruction error during training and results in a non-regularized latent space (the decoder cannot be used to generate valid input data from vectors sampled from the latent space), a VAE is stochastic and learns the parameters of the data distribution, i.e.…”
Section: Introduction (mentioning)
confidence: 99%
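To make the quoted AE/VAE distinction concrete, here is a minimal sketch, again assuming PyTorch and stand-in sizes (not code from the citing study). The encoder outputs the parameters of the latent distribution (mean and log-variance), sampling makes the model stochastic, and a KL term regularizes the latent space so that decoding vectors sampled from the prior yields valid data, which the quote notes a plain AE does not guarantee.

import torch
from torch import nn

class VAE(nn.Module):
    def __init__(self, n_in: int = 784, n_latent: int = 16):  # sizes are stand-ins
        super().__init__()
        self.enc = nn.Linear(n_in, 2 * n_latent)   # predicts mu and log sigma^2
        self.dec = nn.Linear(n_latent, n_in)
        self.n_latent = n_latent

    def forward(self, x: torch.Tensor):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        x_hat = self.dec(z)
        # Loss = reconstruction error + KL(q(z|x) || N(0, I));
        # the KL term is the latent-space regularizer a traditional AE lacks.
        rec = nn.functional.mse_loss(x_hat, x, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return x_hat, rec + kl

model = VAE()
x = torch.randn(8, 784)                 # stand-in batch
x_hat, loss = model(x)
loss.backward()

# Because the latent space is pulled toward N(0, I), sampling from the prior
# and decoding generates new data, the generative use the quote describes.
z = torch.randn(8, model.n_latent)
samples = model.dec(z)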