2017 25th European Signal Processing Conference (EUSIPCO)
DOI: 10.23919/eusipco.2017.8081588

A new algorithm for training sparse autoencoders

Abstract: Data representation plays an important role in the performance of machine learning algorithms. Since raw data usually lacks the desired quality, many efforts have been made to provide more desirable representations of data. Among the many different approaches, sparse data representation has gained popularity in recent years. In this paper, we propose a new sparse autoencoder, obtained by imposing the square of the smoothed L0 norm of the data representation on the hidden layer of a regular autoencoder. The square of the smoothed L0 …
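The truncated abstract leaves the exact smoothing function unspecified. Below is a minimal NumPy sketch of the penalty as it is commonly constructed in the smoothed-L0 literature, using a Gaussian kernel; the kernel choice, the sigma parameter, and how the penalty is weighted against the reconstruction loss are assumptions, not details taken from the paper.

```python
import numpy as np

def smoothed_l0(h, sigma=0.1):
    """Smoothed L0 norm of a hidden representation h (1-D array).

    Each term exp(-h_i^2 / (2 sigma^2)) is close to 1 when h_i is near
    zero and close to 0 otherwise, so subtracting the sum from the length
    of h gives a differentiable proxy for the count of active units.
    """
    h = np.asarray(h, dtype=float)
    return h.size - np.sum(np.exp(-h**2 / (2.0 * sigma**2)))

def sparsity_penalty(h, sigma=0.1):
    # Per the abstract: the *square* of the smoothed L0 norm of the
    # hidden representation, added to the usual reconstruction loss.
    return smoothed_l0(h, sigma) ** 2
```

As sigma shrinks, the smoothed norm approaches the true L0 count of nonzero hidden units, which is what makes the penalty a differentiable stand-in for hard sparsity.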

Cited by 11 publications (9 citation statements)
References 20 publications
“…Learning the encoded representation can be achieved by restricting the number of nodes in the encoding layers as done in undercomplete autoencoders [42]. Overcomplete autoencoders learn structure by imposing other regularization constraints on the encoding layer such as sparsity as in sparse autoencoders [44], or addition of noise as in denoising autoencoders [45]. Convolutional autoencoders (CAE) exploit spatial relationships in data by weight sharing [46].…”
Section: Autoencoders
confidence: 99%
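The quote above names three regularization routes: restricting capacity (undercomplete), a sparsity penalty (as in the paper under review), and input corruption. A minimal PyTorch sketch of the third, denoising, idea follows; the fully connected architecture, layer sizes, and noise level are illustrative assumptions, not the design of [45].

```python
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    """Denoising autoencoder sketch: encode a corrupted input,
    train to reconstruct the clean one."""
    def __init__(self, d_in=784, d_hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.decoder = nn.Linear(d_hidden, d_in)

    def forward(self, x, noise_std=0.3):
        x_noisy = x + noise_std * torch.randn_like(x)  # corrupt the input
        return self.decoder(self.encoder(x_noisy))

# Training step (sketch): compare the reconstruction to the *clean* input.
# loss = nn.functional.mse_loss(model(x), x)
```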
“…where $w_k$ is the kernel width, $h_k$ is the kernel height, and $c_p$ and $c_{p+1}$ are the numbers of feature channels in the $p$-th and $(p+1)$-th convolutional layers. Each reconstructor $R_i(\cdot) : \mathbb{R}^{w \times h \times c} \to \mathbb{R}^{w \times h \times c}$ is instantiated with a $T$-layer deep convolutional autoencoder [68], [69] composed of a parametric encoder, followed by a parametric decoder…”
Section: Private Training
confidence: 99%
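The per-layer count quoted above, $w_k h_k c_p c_{p+1}$ weights per convolutional layer, is simply the size of each convolution kernel tensor. A minimal PyTorch sketch of such a shape-preserving $T$-layer convolutional autoencoder follows; the class name, 3×3 kernels, channel width, and activations are assumptions for illustration, not the architecture of the cited work.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Shape-preserving map R^(w x h x c) -> R^(w x h x c), like the quoted R_i."""
    def __init__(self, c=3, feat=16, T=2):
        super().__init__()
        enc, ch = [], c
        for _ in range(T):
            # Each layer holds w_k * h_k * c_p * c_{p+1} weights (3*3*ch*feat here);
            # padding=1 with 3x3 kernels keeps the spatial size w x h unchanged.
            enc += [nn.Conv2d(ch, feat, kernel_size=3, padding=1), nn.ReLU()]
            ch = feat
        self.encoder = nn.Sequential(*enc)
        dec = []
        for i in range(T):
            out = c if i == T - 1 else feat  # last layer maps back to c channels
            dec.append(nn.Conv2d(ch, out, kernel_size=3, padding=1))
            if i < T - 1:
                dec.append(nn.ReLU())
            ch = out
        self.decoder = nn.Sequential(*dec)

    def forward(self, x):  # x: (batch, c, h, w)
        return self.decoder(self.encoder(x))

# Usage sketch: input and reconstruction share the same shape.
# x_hat = ConvAutoencoder()(torch.randn(1, 3, 32, 32))
```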
“…For each input in the encoder unit, a linear combination of the input elements is mapped onto the hidden-layer representation using a nonlinear function [36]. The input values are matched with the values of the hidden layer by using Equation (11) with the definitions below [37]. Similarly, the decoder section attempts to recreate the input by applying a typical nonlinear function to a linear combination of the resulting representation elements.…”
Section: Figure 7 (Particle Filter Flowchart)
confidence: 99%
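Equation (11) itself is not reproduced in this excerpt, so that reference must stand as-is. For orientation only, the textbook encoder/decoder pair that this description matches is the following, with $f$ and $g$ nonlinearities (e.g., sigmoids) and $\mathbf{W}, \mathbf{b}$ and $\mathbf{W}', \mathbf{b}'$ the encoder and decoder parameters; this is the standard form, not necessarily the cited paper's exact Equation (11).

```latex
\begin{align}
  \mathbf{h}       &= f(\mathbf{W}\mathbf{x} + \mathbf{b})   % encoder: hidden representation
  \\
  \hat{\mathbf{x}} &= g(\mathbf{W}'\mathbf{h} + \mathbf{b}') % decoder: reconstruction of x
\end{align}
```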