2019
DOI: 10.1109/access.2019.2894014

Stacked Denoising Extreme Learning Machine Autoencoder Based on Graph Embedding for Feature Representation

Abstract: Extreme learning machine is characterized by fewer training parameters, fast training speed, and strong generalization ability. It has been applied to obtain feature representations from complex data in the tasks of data clustering or classification. In this paper, a graph embedding-based denoising extreme learning machine autoencoder (GDELM-AE) is proposed for capturing the structure of the inputs. Specifically, in GDELM-AE, a graph embedding framework that contains an intrinsic graph and a penalty graph c…
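The abstract describes a graph embedding framework built from an intrinsic graph and a penalty graph. As rough orientation, a regularizer of that kind typically enters a denoising ELM-AE objective in the form below; the notation and the exact way the two graphs are combined are assumptions here, not the paper's stated formulation:

```latex
\min_{\beta}\;
\bigl\lVert \mathbf{X} - \tilde{\mathbf{H}}\beta \bigr\rVert_F^2
\;+\; \lambda \lVert \beta \rVert_F^2
\;+\; \alpha\,\operatorname{tr}\!\bigl(
\beta^{\top}\tilde{\mathbf{H}}^{\top}
(\mathbf{L}_{\mathrm{int}} - \mu\,\mathbf{L}_{\mathrm{pen}})
\tilde{\mathbf{H}}\beta \bigr)
```

Here X is the clean input, H̃ the hidden-layer output computed from the corrupted input, L_int and L_pen the Laplacians of the intrinsic and penalty graphs, and λ, α, μ trade-off parameters; minimizing the trace term keeps intrinsic-graph neighbors close in the representation while pushing penalty-graph neighbors apart.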

Cited by 19 publications (10 citation statements)
References: 28 publications
“…Inspired by manifold learning, K. Sun et al. [33] proposed a regularized ELM-AE algorithm, which adds manifold regularization constraints to the ELM-AE loss function to learn local-geometry-preserving representations. Similarly, H. Ge et al. [34] incorporated local Fisher discriminant analysis (LFDA) into the ELM-AE loss function and proposed the GDELM-AE algorithm to capture local geometry and global discriminant knowledge in the representation space.…”
Section: Discriminative ELMs
confidence: 99%
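A rough NumPy sketch of the manifold-regularized ELM-AE idea summarized above: a random, fixed hidden mapping, a graph Laplacian built on the data, and a closed-form ridge solution for the output weights. The graph construction, parameter names, and default values are illustrative assumptions, not the implementation of [33] or [34]:

```python
import numpy as np

def knn_laplacian(X, k=5):
    """Unnormalized k-NN graph Laplacian with a simple heat-kernel weight
    (illustrative helper; any graph construction could be used instead)."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    W = np.zeros((n, n))
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]              # k nearest neighbours, skip self
    for i in range(n):
        W[i, idx[i]] = np.exp(-d2[i, idx[i]] / (d2[i, idx[i]].mean() + 1e-12))
    W = np.maximum(W, W.T)                                 # symmetrize
    return np.diag(W.sum(axis=1)) - W

def graph_reg_elm_ae(X, L, n_hidden=200, lam=1e-2, alpha=1e-3, seed=0):
    """Graph-regularized ELM autoencoder (GELM-AE-style sketch).
    Solves  min_B ||X - H B||^2 + lam ||B||^2 + alpha tr(B^T H^T L H B)
    in closed form and returns the output weights B."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W_in = rng.standard_normal((d, n_hidden))              # random, fixed input weights
    b = rng.standard_normal(n_hidden)                      # random biases
    H = np.tanh(X @ W_in + b)                              # hidden-layer outputs, (n, n_hidden)
    A = H.T @ H + lam * np.eye(n_hidden) + alpha * (H.T @ L @ H)
    return np.linalg.solve(A, H.T @ X)                     # B, shape (n_hidden, d)

# Usage: features are commonly taken as X @ B.T when stacking ELM-AEs, e.g.
#   X = np.random.rand(500, 64); B = graph_reg_elm_ae(X, knn_laplacian(X))
```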
“…In the generalized ELM autoencoder (GELM-AE) introduced by K. Sun et al. [33], manifold regularization is performed to restrict the ELM-AE to learn local-geometry-preserving representations. To determine both local geometry and global discriminatory information in the representation space, H. Ge et al. [34] developed a graph-embedded denoising ELM autoencoder (GDELM-AE) by integrating local Fisher discriminant analysis into the ELM-AE. Inspired by these studies, we incorporate the geometric information of the given data into the recognition model to reduce the effect of small intra-class and large inter-class differences on aircraft recognition models.…”
Section: Introduction
confidence: 99%
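For reference, the local Fisher discriminant analysis mentioned above weights sample pairs with a local affinity A_ij combined with the class labels; in the graph-embedding view, the within-class graph plays the role of the intrinsic graph and the between-class graph that of the penalty graph. The standard LFDA weights are given below (whether GDELM-AE uses exactly this weighting is not shown in the excerpt):

```latex
W^{(w)}_{ij} =
\begin{cases}
A_{ij}/n_{c}, & y_i = y_j = c,\\[2pt]
0, & \text{otherwise,}
\end{cases}
\qquad
W^{(b)}_{ij} =
\begin{cases}
A_{ij}\bigl(\tfrac{1}{n} - \tfrac{1}{n_{c}}\bigr), & y_i = y_j = c,\\[2pt]
\tfrac{1}{n}, & \text{otherwise,}
\end{cases}
```

where n is the total number of samples and n_c the size of class c.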
“…The graph Laplacian-based manifold regularisation was added to the loss function of the ELM-AE (GELM-AE) in [13], where the Laplacian matrix was built for the reconstruction output. Based on the GELM-AE, the denoising GELM-AE was developed in [14] by adding masking noise to the inputs and reconstructing the noise-free inputs as the targets. Different from conventional AEs, Yang et al. [15] proposed a generalised ELM-AE that reconstructs not only the original point but also its adjacent points, exploiting weights built on the graph Laplacian with respect to the inputs.…”
Section: Introduction
confidence: 99%
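A minimal sketch of the denoising step described here, assuming masking noise and a plain ridge solution; the mask ratio and parameter names are illustrative, and the graph penalty from the sketch above could be added to the same linear system:

```python
import numpy as np

def denoising_elm_ae(X, mask_ratio=0.3, n_hidden=200, lam=1e-2, seed=0):
    """Denoising ELM-AE sketch: hidden features are computed from the
    mask-corrupted inputs, while the noise-free X is the reconstruction target."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Masking noise: randomly zero out a fraction of the input entries.
    X_noisy = X * (rng.random(X.shape) >= mask_ratio)
    # Random, fixed hidden mapping applied to the corrupted data.
    W_in = rng.standard_normal((d, n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X_noisy @ W_in + b)
    # Ridge solution that reconstructs the CLEAN inputs (the denoising target).
    A = H.T @ H + lam * np.eye(n_hidden)
    return np.linalg.solve(A, H.T @ X)
```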
“…ELM-AE [13] exploits the intrinsic information of unlabeled data, and GELM-AE [74] improves ELM-AE to discover the latent manifold structure of the input data by integrating manifold regularization. Furthermore, GDELM-AE [127] integrates LFDA into ELM-AE to discover both the local and global structure of the input data. However, LFDA is based on the assumption that the data is Gaussian. In LDELM-AE, the within-class compactness is characterized as follows:…”
Section: Local Discriminant Preserving Extreme Learning Machine Autoencoder
confidence: 99%
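The excerpt truncates the LDELM-AE formula. For orientation only, a graph-weighted within-class compactness term in this family of methods usually takes the general form below; this is an assumption about the generic form, not the LDELM-AE definition itself:

```latex
S_w \;=\; \tfrac{1}{2}\sum_{i,j} W^{(w)}_{ij}
\bigl\lVert \mathbf{h}_i\beta - \mathbf{h}_j\beta \bigr\rVert_2^2
\;=\; \operatorname{tr}\!\bigl(\beta^{\top}\mathbf{H}^{\top}\mathbf{L}_w\mathbf{H}\beta\bigr)
```

where W^(w) connects samples of the same class, L_w is its Laplacian, and h_i is the i-th row of the hidden-layer matrix H; minimizing S_w pulls representations of same-class samples together.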