2020
DOI: 10.3390/s20051533
A Multi-Scale U-Shaped Convolution Auto-Encoder Based on Pyramid Pooling Module for Object Recognition in Synthetic Aperture Radar Images

Abstract: Although unsupervised representation learning (RL) can tackle the performance deterioration caused by limited labeled data in synthetic aperture radar (SAR) object classification, the neglected discriminative detailed information and the ignored distinctive characteristics of SAR images can lead to performance degradation. In this paper, an unsupervised multi-scale convolution auto-encoder (MSCAE) was proposed which can simultaneously obtain the global features and local characteristics of targets with its U-s…

Cited by 4 publications (5 citation statements)
References 55 publications
“…However, under EOC, images of all the serial numbers were used to test the performance of the proposed method [40]. Following the setup of the EOC experimental method in [41], we used MSTAR datasets with different configurations for evaluation. Due to insufficient training samples, we did not evaluate the performance of the proposed model under depression angle variance.…”
Section: Dataset Description
confidence: 99%
“…The convolution denoising autoencoder combines local convolutional connections with the autoencoder. Specifically, convolution operations replace the fully connected layers in the encoder, while deconvolution operations replace the fully connected layers in the decoder [25]. Training deep neural networks is complicated by the fact that the distribution of each layer’s inputs changes during training, as the parameters of the previous layers change [26].…”
Section: Wire Rope Image Representation
confidence: 99%
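The encoder/decoder structure described above can be sketched minimally. This is a hypothetical illustration, not the MSCAE architecture from the paper: a 1D strided convolution stands in for the convolutional encoder, and a transposed ("deconvolution") operation restores the original shape in the decoder. The kernel, stride, and input below are assumptions chosen for clarity.

```python
import numpy as np

def conv1d(x, k, stride=2):
    # Valid strided convolution: downsamples the input (the "encoder" role).
    n = (len(x) - len(k)) // stride + 1
    return np.array([np.dot(x[i * stride : i * stride + len(k)], k) for i in range(n)])

def tconv1d(y, k, stride=2):
    # Transposed convolution: spreads each code value back over the
    # kernel footprint, restoring the original length (the "decoder" role).
    out = np.zeros((len(y) - 1) * stride + len(k))
    for i, v in enumerate(y):
        out[i * stride : i * stride + len(k)] += v * k
    return out

x = np.arange(8.0)            # toy 1D "image"
k = np.array([0.5, 0.5])      # shared toy kernel

code = conv1d(x, k)           # latent code, shape (4,)
recon = tconv1d(code, k)      # reconstruction, shape (8,) — same as the input
```

Replacing fully connected layers with these local, weight-shared operations is what gives the convolutional autoencoder far fewer parameters than a dense one at the same input size.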
“…Regardless of the category of the algorithm, it is supported by features of three types: (1) geometric features that describe the target by its area [5,6,7,8,9], contour [10,11] or shadow [11,12]; (2) transformation features that reduce the dimensionality of the target data by representing it in another domain, such as the Discrete Cosine Transform (DCT) [13], Non-Negative Matrix Factorization (NMF) [14,15], Linear Discriminant Analysis (LDA) [16] and Principal Component Analysis (PCA) [17]; and (3) scattering center features, which are based on the highest-amplitude returns of the targets [18] and on a statistical distance, such as Euclidean [19,20,21,22,23,24,25,26], Mahalanobis [27,28,29,30], or another statistical distance [31,32,33,34,35,36,37].…”
Section: Introduction
confidence: 99%
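The "transformation features" of type (2) can be illustrated with PCA, the last method named above. This is a minimal sketch under assumed inputs: the flattened 8×8 "target chips" are synthetic random data, not SAR imagery, and the component count is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 64))     # 20 hypothetical chips of 8x8 pixels, flattened

Xc = X - X.mean(axis=0)           # center the data
# SVD of the centered matrix: rows of Vt are the principal directions.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 5                             # keep the top-5 components
features = Xc @ Vt[:k].T          # each chip reduced from 64 to 5 values
```

The projection keeps the directions of greatest variance, so a classifier downstream works on a 5-dimensional feature vector instead of the raw 64 pixels.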
“…Feature-based algorithms are those whose methods run offline training supported exclusively by features extracted from the targets of interest. Among the methods employed by feature-based algorithms, we can highlight the following: Template Matching (TM) [5,6,7,11,30,37], Hidden Markov Models (HMM) [12,13,22], K-Nearest Neighbors (KNN) [27,28], Sparse Representation-based Classification (SRC) [8,29], Convolutional Neural Networks (CNN) [17,18,36,38,39,40,41,42,43,44,45,46,47,48], Support Vector Machines (SVM) [9] and Gaussian Mixture Models (GMM) [10].…”
Section: Introduction
confidence: 99%