2018 IEEE International Symposium on Information Theory (ISIT)
DOI: 10.1109/isit.2018.8437533

Sparse Coding and Autoencoders

Abstract: In Dictionary Learning one tries to recover incoherent matrices A^* ∈ R^{n×h} (typically overcomplete and whose columns are assumed to be normalized) and sparse vectors x^* ∈ R^h with a small support of size h^p for some 0 < p < 1, while having access to observations y ∈ R^n where y = A^* x^*. In this work we undertake a rigorous analysis of whether gradient descent on the squared loss of an autoencoder can solve the dictionary learning problem. The Autoencoder architecture we consider is an R^n → R^n mapping wit…
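To make the setup concrete, here is a minimal sketch of the generative model y = A^* x^* and of a shallow, tied-weight autoencoder trained with plain gradient descent on the squared reconstruction loss. The hidden width, the ReLU activation, the trainable bias, and all hyper-parameters below are assumptions made for illustration; they are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Problem sizes: observations in R^n, dictionary A* in R^(n x h), support size ~ h^p.
# Illustrative values only, not taken from the paper.
n, h, p = 64, 256, 0.5
k = int(h ** p)                              # sparsity of x*

# Random dictionary with unit-norm columns (a stand-in for an incoherent A*).
A_star = rng.normal(size=(n, h))
A_star /= np.linalg.norm(A_star, axis=0, keepdims=True)

def sample_y(batch):
    """Draw observations y = A* x*, where each x* has k nonzero entries."""
    X = np.zeros((h, batch))
    for j in range(batch):
        support = rng.choice(h, size=k, replace=False)
        X[support, j] = rng.uniform(0.5, 1.5, size=k)   # assumed nonzero magnitudes
    return A_star @ X

# Shallow autoencoder R^n -> R^n with tied weights: y_hat = W.T @ relu(W y + b).
W = rng.normal(size=(h, n)) * 0.1
b = np.zeros((h, 1))
lr = 0.02

for step in range(2000):
    Y = sample_y(128)
    Z = W @ Y + b
    H = np.maximum(Z, 0.0)                   # ReLU code
    Y_hat = W.T @ H
    R = Y_hat - Y                            # residual of the squared loss
    loss = 0.5 * np.mean(np.sum(R ** 2, axis=0))

    mask = (Z > 0).astype(float)             # ReLU derivative
    # Gradients of the batch-averaged squared loss with respect to W and b.
    grad_W = (H @ R.T + (mask * (W @ R)) @ Y.T) / Y.shape[1]
    grad_b = np.mean(mask * (W @ R), axis=1, keepdims=True)

    W -= lr * grad_W
    b -= lr * grad_b
    if step % 500 == 0:
        print(f"step {step}: squared loss {loss:.4f}")
```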

Cited by 8 publications (16 citation statements)
References 32 publications
“…The proposed system in [119] used auto-encoder along with convolutional neural network for better result. Some of the systems have used combination of methods for accurate result [113], [116]. The used deep learning method by various systems has been shown in Table IV in a short view.…”
Section: A. Discussion (mentioning)
Confidence: 99%
“…Being an unsupervised neural network, the model is trained without giving the labeled dataset [112]. There are four basic parts considered to develop a model, those are an encoder, bottleneck, decoder, and reconstruction loss [113]- [115].…”
Section: Auto-Encoder (AE) Based Fall Detection Systems (mentioning)
Confidence: 99%
“…Another candidate will be sparse coding and dictionary learning, which is able to handle complex shape distributions. Neural networks can be used to implement/approximate sparse encoder/decoder by enforcing sparsity. As the shape distribution becomes more complex, it may be challenging to maintain mesh correspondence.…”
Section: Discussion (mentioning)
Confidence: 99%
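The excerpt above mentions enforcing sparsity in a neural encoder/decoder. One common way to do this is to add a sparsity penalty on the hidden code to the reconstruction objective; the sketch below illustrates this with an L1 penalty. The ReLU/tied-weight layout and the weight lam are assumptions made for illustration, not details from the quoted work.

```python
import numpy as np

def sparse_autoencoder_loss(W, b, Y, lam=0.1):
    """Squared reconstruction loss plus an L1 penalty on the hidden code.

    A common (assumed) way to push an autoencoder toward a sparse
    encoder/decoder: penalize the code so only a few units stay active.
    """
    H = np.maximum(W @ Y + b, 0.0)           # ReLU code, shape (h, batch)
    Y_hat = W.T @ H                          # tied-weight decoder
    recon = 0.5 * np.mean(np.sum((Y_hat - Y) ** 2, axis=0))
    sparsity = lam * np.mean(np.sum(np.abs(H), axis=0))
    return recon + sparsity
```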
“…Our focus here is on shallow two-layer autoencoder architectures with shared weights. Conceptually, we build upon previous theoretical results on learning autoencoder networks (Arora et al., 2014a, 2015c; Rangamani et al., 2017), and we elaborate on the novelty of our work in the discussion on prior work below.…”
Section: Our Contributions (mentioning)
Confidence: 98%
“…samples from a high-dimensional distribution parameterized by a generative model, and we train the weights of the autoencoder using ordinary (batch) gradient descent. We consider three families of generative models that are commonly adopted in machine learning: (i) the Gaussian mixture model with well-separated centers (Arora and Kannan, 2005); (ii) the k-sparse model, specified by sparse linear combination of atoms (Spielman et al., 2012); and (iii) the non-negative k-sparse model (Rangamani et al., 2017). While these models are traditionally studied separately depending on the application, all of these model families can be expressed via a unified, generic form. Under these three generative models, and with suitable choice of hyper-parameters, initial estimates, and autoencoder architectures, we rigorously prove that:…”
Section: Our Contributions (mentioning)
Confidence: 99%
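One way to read the "unified, generic form" in the excerpt above is y = A x, where the three model families differ only in how the code x is drawn. The sketch below illustrates that reading; the parameter choices and the noiseless setting are assumptions for illustration, not the cited paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
n, h, k = 64, 256, 8                         # illustrative sizes
A = rng.normal(size=(n, h))
A /= np.linalg.norm(A, axis=0, keepdims=True)

def sample(model):
    """Draw one observation y = A x under three code distributions."""
    x = np.zeros(h)
    if model == "gmm":                       # mixture with h centers: 1-sparse code picks a column of A
        x[rng.integers(h)] = 1.0
    elif model == "k-sparse":                # signed sparse linear combination of atoms
        s = rng.choice(h, size=k, replace=False)
        x[s] = rng.normal(size=k)
    elif model == "nonneg-k-sparse":         # non-negative sparse coefficients
        s = rng.choice(h, size=k, replace=False)
        x[s] = rng.uniform(0.5, 1.5, size=k)
    return A @ x

y = sample("k-sparse")
```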