2018
DOI: 10.1109/mgrs.2018.2853555
A Review of the Autoencoder and Its Variants: A Comparative Perspective from Target Recognition in Synthetic-Aperture Radar Images

Cited by 156 publications (73 citation statements)
References 87 publications
“…Regarding network depth and the size of the hidden layers, classification accuracy increased as the network deepened. Deeper, however, was not always better [50,53]: once the network contained more than three layers, performance no longer improved.…”
Section: Network Construction and Parameter Optimization of an S-SAE (mentioning)
confidence: 96%
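
A minimal NumPy sketch (not the cited paper's code) of the comparison this statement describes: a stacked sigmoid encoder whose depth is a free parameter, so the depth-versus-accuracy behaviour can be probed. The layer sizes, depths, and data below are hypothetical stand-ins; with real SAR chips each stack would be trained and then scored with a classifier.

import numpy as np

rng = np.random.default_rng(0)

def init_stack(input_dim, hidden_dim, n_hidden_layers):
    # Weight matrices for an encoder stack of the given depth.
    dims = [input_dim] + [hidden_dim] * n_hidden_layers
    return [rng.normal(0.0, 0.01, size=(dims[i], dims[i + 1]))
            for i in range(len(dims) - 1)]

def encode(x, weights):
    # Sigmoid encoder; each hidden layer feeds the next.
    h = x
    for W in weights:
        h = 1.0 / (1.0 + np.exp(-h @ W))
    return h

# Stand-in data; the statement above suggests accuracy gains would
# saturate once the stack exceeds three hidden layers.
x = rng.normal(size=(4, 64))
for depth in range(1, 6):
    feats = encode(x, init_stack(64, 32, depth))
    print(f"depth {depth}: feature shape {feats.shape}")
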
“…where λ controls the regularization strength, u is the number of hidden layers, and Ω_weights is the weight-attenuation term known as L2 regularization. The weight matrices and biases are trained and optimized with the stochastic gradient descent (SGD) algorithm [53].…”
Section: n/a (mentioning)
confidence: 99%
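
A minimal sketch, assuming the usual form of the objective the quote refers to: J(W, b) = reconstruction loss + λ · Ω_weights, with Ω_weights the L2 weight-decay term summed over the layers, minimized by SGD. The network sizes, λ, and learning rate below are hypothetical, not the citing paper's settings.

import numpy as np

rng = np.random.default_rng(0)
n, d, h = 32, 64, 16            # batch size, input dim, hidden dim
lam, lr = 1e-3, 0.1             # regularization strength lambda, SGD step

W1, b1 = rng.normal(0, 0.01, (d, h)), np.zeros(h)
W2, b2 = rng.normal(0, 0.01, (h, d)), np.zeros(d)
x = rng.normal(size=(n, d))

for step in range(200):
    # Forward pass (linear encoder/decoder to keep the example compact).
    z = x @ W1 + b1
    x_hat = z @ W2 + b2
    # Objective: mean squared reconstruction error + lambda * L2 penalty.
    recon = np.mean((x_hat - x) ** 2)
    omega = 0.5 * (np.sum(W1 ** 2) + np.sum(W2 ** 2))
    J = recon + lam * omega
    # Backpropagated gradients of J, then one SGD update per parameter.
    g = 2.0 * (x_hat - x) / (n * d)
    gW2, gb2 = z.T @ g + lam * W2, g.sum(axis=0)
    gz = g @ W2.T
    gW1, gb1 = x.T @ gz + lam * W1, gz.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(f"final objective J = {J:.4f}")
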
“…The labelled benchmarks are too small to train a supervised deep network effectively, and the overfitting caused by limited labelled samples is often one of the main causes of performance degradation in supervised models. To handle this problem, various unsupervised DL models have been employed and developed, including the autoencoder (AE) [9,10], the generative adversarial network (GAN) [11,12], and the restricted Boltzmann machine (RBM) [13]. Owing to its simple implementation and attractive computational cost, the AE has been widely used in SAR ATR; it minimizes the distortion between the inputs and their reconstructions to guarantee that the mapping preserves the information in the inputs.…”
Section: Introduction (mentioning)
confidence: 99%
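
A minimal sketch of the workflow this paragraph describes, under stated assumptions: a tied-weight linear autoencoder is fitted to plentiful unlabelled samples by minimizing the distortion between inputs and reconstructions, after which its encoder supplies features for the few labelled samples. The data are synthetic stand-ins, and the nearest-centroid rule is a hypothetical placeholder for a real ATR classifier.

import numpy as np

rng = np.random.default_rng(1)
d, h, lr = 64, 8, 0.2
unlabelled = rng.normal(size=(500, d))   # plentiful unlabelled samples
labelled_x = rng.normal(size=(20, d))    # the small labelled benchmark
labelled_y = rng.integers(0, 2, size=20)

W = rng.normal(0, 0.01, (d, h))          # tied-weight linear autoencoder
for _ in range(500):
    z = unlabelled @ W                   # encode
    x_hat = z @ W.T                      # decode with the same weights
    g = 2.0 * (x_hat - unlabelled) / unlabelled.size
    W -= lr * (unlabelled.T @ (g @ W) + g.T @ z)  # distortion gradient

# The unsupervised features now feed a trivial classifier trained on the
# few labelled samples (nearest class centroid in feature space).
feats = labelled_x @ W
centroids = np.stack([feats[labelled_y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((feats[:, None, :] - centroids) ** 2).sum(-1), axis=1)
print("labelled-set accuracy:", (pred == labelled_y).mean())
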
“…Another way to improve the performance of AE models with a limited training dataset is to incorporate prior knowledge into the model through regularization terms and task-specific cost functions. Training an AE amounts to estimating the trainable parameters of the model, which is achieved by optimizing an objective function consisting of a reconstruction loss and certain regularization terms [9]. In References [30,31], the supervised information was embedded in the cost function through label-related regularization terms.…”
Section: Introduction (mentioning)
confidence: 99%
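
A minimal sketch of the mechanism the quote describes: supervised information entering the autoencoder's cost function through a label-related regularization term. The within-class scatter penalty below is a hypothetical illustration, not the exact term designed in References [30,31].

import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=(16, 64))            # a small labelled batch
y = rng.integers(0, 3, size=16)
W = rng.normal(0, 0.01, (64, 8))
lam = 0.1                                # strength of the label term

z = x @ W                                # hidden codes
x_hat = z @ W.T                          # tied-weight reconstruction
recon = np.mean((x_hat - x) ** 2)
# Label-related term: penalize the scatter of codes within each class,
# so samples sharing a label are encoded close together.
label_term = sum(np.mean((z[y == c] - z[y == c].mean(axis=0)) ** 2)
                 for c in np.unique(y))
J = recon + lam * label_term
print(f"objective with label-related regularizer: J = {J:.4f}")
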
“…As is typical of neural networks, the autoencoder is exposed to many training examples with expected output values, and its weights are progressively adjusted through backpropagation [11], [13]. Eventually, the network converges on a set of connection weights that maximizes the correspondence between the inputs and the outputs (which, in this case, are similar to the inputs, albeit generated from only the limited number of hidden nodes).…”
Section: Introduction (mentioning)
confidence: 99%
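
A minimal sketch of the training behaviour described above: the connection weights of a bottleneck autoencoder are progressively adjusted by backpropagation until the input-output mismatch stops improving, i.e. the network has converged. The sizes, learning rate, and stopping tolerance are all hypothetical.

import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=(64, 32))            # training examples
W1 = rng.normal(0, 0.1, (32, 4))         # only 4 hidden nodes (bottleneck)
W2 = rng.normal(0, 0.1, (4, 32))
lr, prev = 0.5, np.inf

for step in range(10000):
    h = np.tanh(x @ W1)                  # forward pass
    x_hat = h @ W2                       # output aims to match the input
    loss = np.mean((x_hat - x) ** 2)
    if prev - loss < 1e-8:               # converged: weights have settled
        break
    prev = loss
    g = 2.0 * (x_hat - x) / x.size       # backpropagation of the error
    gW2 = h.T @ g
    gh = (g @ W2.T) * (1.0 - h ** 2)     # tanh derivative
    gW1 = x.T @ gh
    W1 -= lr * gW1
    W2 -= lr * gW2

print(f"stopped at step {step}, reconstruction loss {loss:.5f}")
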