2020
DOI: 10.1109/access.2020.2972132
Discriminative Auto-Encoder With Local and Global Graph Embedding

Abstract: In order to exploit the potential intrinsic low-dimensional structure of high-dimensional data from the manifold learning perspective, we propose a global graph embedding with a globality-preserving property, which requires that samples be mapped close to the distribution centers of their low-dimensional class representation data in the embedding space. Then we propose a novel local and global graph embedding auto-encoder (LGAE) to capture the geometric structure of the data; its cost function has three terms, a…
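The abstract is truncated before the three cost terms are listed, but the two graph-embedding ideas it names are easy to sketch. Below is a minimal, hypothetical PyTorch reading of such an objective: a reconstruction term, a local term that keeps graph neighbors close in code space, and a global term that pulls each code toward its class center, as the globality-preserving property requires. All names, dimensions, and weights (`LGAESketch`, `lam_local`, `lam_global`, the adjacency `w_adj`) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LGAESketch(nn.Module):
    """Hypothetical auto-encoder in the spirit of the abstract; not the authors' code."""
    def __init__(self, in_dim=784, code_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, code_dim), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(code_dim, in_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.encoder(x)           # low-dimensional code
        return h, self.decoder(h)     # code and reconstruction

def lgae_loss(model, x, labels, w_adj, lam_local=0.1, lam_global=0.1):
    h, x_rec = model(x)
    recon = ((x - x_rec) ** 2).sum(dim=1).mean()          # term 1: reconstruction

    # Term 2 (local graph embedding): codes of samples that are neighbors
    # under the adjacency matrix w_adj should stay close together.
    pair_dist = torch.cdist(h, h) ** 2
    local = (w_adj * pair_dist).sum() / w_adj.sum().clamp(min=1.0)

    # Term 3 (global graph embedding): each code is pulled toward the
    # center of its own class, the "globality-preserving" requirement.
    glob = h.new_zeros(())
    for c in labels.unique():
        hc = h[labels == c]
        glob = glob + ((hc - hc.mean(dim=0)) ** 2).sum(dim=1).mean()
    glob = glob / len(labels.unique())

    return recon + lam_local * local + lam_global * glob
```

Note that this sketch recomputes class centers per batch; the paper's fixed low-dimensional class representation centers may be defined differently, and this example does not attempt to reproduce them.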

Cited by 4 publications (4 citation statements; 2021–2024) | References 27 publications
“…Different classifier confidence levels were defined according to the distance between classifiers given by the confusion matrix; then the values and posterior probabilities of the support vector machines were integrated into the basic probability assignments to achieve a recognition method that combines support vector machines with the evidence theory. Li et al [ 32 ] proposed an extreme learning machine (ELM)‐based recognition method by introducing the L21 norm to reduce the undesirable effects of data noise points and outliers, which made the ELM model more stable. The machine learning method achieves automatic feature extraction, while the above methods extract features by considering HRRP as a whole, which ignores the correlation and temporal information between HRRPs, resulting in information loss.…”
Section: Related Work
confidence: 99%
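For context on the robustness argument in the quoted passage: the L2,1 norm of a matrix sums the Euclidean norms of its rows, so each sample's residual enters the objective linearly rather than quadratically, and a single corrupted sample cannot dominate the fit. A small numpy sketch (illustrative only, not the cited ELM implementation):

```python
import numpy as np

def l21_norm(W):
    """L2,1 norm: sum over rows of each row's Euclidean (L2) norm.
    Unlike the squared Frobenius norm, each row's residual enters
    linearly, so one outlying row cannot dominate the objective."""
    return np.sqrt((W ** 2).sum(axis=1)).sum()

# Toy comparison: one corrupted row inflates the squared Frobenius
# norm quadratically but the L2,1 norm only linearly.
E = np.ones((5, 3))
E[0] *= 100.0                # one noisy/outlier sample
print(l21_norm(E))           # ~180.1  (100*sqrt(3) + 4*sqrt(3))
print((E ** 2).sum())        # 30012.0 (outlier row alone contributes 30000)
```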
“…Unlike shallow neural networks, deep learning models can directly use the original data as input and learn data features layer-by-layer through a multilayer model, thus resulting in more effective feature extraction [17]. Currently, deep belief networks (DBN) [18][19][20][21][22][23][24][25][26][27][28][29][30][31][32], autoencoders (AE) [33][34][35][36][37][38][39][40][41][42][43][44][45][46][47][48][49][50][51][52], convolutional neural networks (CNN) [53][54][55][56][57][58][59][60], and recurrent neural networks (RNN) [61][62][63][64][65]…”
Section: Deep Learning Theory
confidence: 99%
“…A common three-layer unsupervised feature learning model is the autoencoder (AE). The output can be restored to the input as closely as feasible using adaptive learning features [33][34][35]. The corresponding autoencoding network model has evolved according to different standards for defining feature expression, such as sparsity features, noise reduction features, regular constraint features, and so on.…”
Section: Self-encoding Network
confidence: 99%
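To make the quoted three-layer scheme concrete (input layer, hidden code, reconstruction output), here is a minimal sketch of a generic autoencoder trained to restore its input; the sizes, toy data, and learning rate are arbitrary assumptions, not any cited model:

```python
import torch
import torch.nn as nn

# Minimal three-layer autoencoder: input -> hidden code -> reconstruction.
x = torch.rand(256, 32)                      # toy data: 256 samples of dim 32
ae = nn.Sequential(
    nn.Linear(32, 8), nn.Sigmoid(),          # encoder: compress to an 8-d code
    nn.Linear(8, 32), nn.Sigmoid(),          # decoder: restore the input
)
opt = torch.optim.Adam(ae.parameters(), lr=1e-2)
for step in range(500):
    x_rec = ae(x)
    loss = ((x - x_rec) ** 2).mean()         # reconstruction error to minimize
    opt.zero_grad()
    loss.backward()
    opt.step()
print(float(loss))                           # small residual: input roughly restored
```

Variants like those the quote mentions (sparse, denoising, regularized autoencoders) differ only in extra penalty terms or input corruption added around this same skeleton.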