2018
DOI: 10.1155/2018/8676387

Deep Sparse Autoencoder for Feature Extraction and Diagnosis of Locomotive Adhesion Status

Abstract: Because the principle of the locomotive adhesion process is complex, an analytical model is difficult to establish. This paper presents a data-driven adhesion-status fault diagnosis method based on deep learning theory. The adhesion coefficient and creep speed of a locomotive constitute the characteristic vector. A sparse autoencoder learns the input vector in an unsupervised manner, and the single-layer networks are stacked to form a deep neural network. Finally, a small amount of labeled data is used to fine-…
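The unsupervised feature-learning stage the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the squared-error reconstruction loss, sigmoid activations, and KL-divergence sparsity penalty (target activation ρ, penalty weight β) are assumptions, and all names and sizes are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SparseAutoencoder:
    """Single-layer sparse autoencoder: squared-error reconstruction
    loss plus a KL-divergence sparsity penalty on the mean hidden
    activations (illustrative sketch only)."""

    def __init__(self, n_in, n_hidden, rho=0.05, beta=3.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_hidden, n_in))  # encoder weights
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_in, n_hidden))  # decoder weights
        self.b2 = np.zeros(n_in)
        self.rho, self.beta = rho, beta

    def encode(self, X):
        # Hidden-layer activations: these become the learned features.
        return sigmoid(X @ self.W1.T + self.b1)

    def loss(self, X):
        H = self.encode(X)
        X_rec = sigmoid(H @ self.W2.T + self.b2)  # reconstruction
        recon = 0.5 * np.mean(np.sum((X_rec - X) ** 2, axis=1))
        # Mean activation of each hidden unit over the batch.
        rho_hat = np.clip(H.mean(axis=0), 1e-8, 1 - 1e-8)
        kl = np.sum(self.rho * np.log(self.rho / rho_hat)
                    + (1 - self.rho) * np.log((1 - self.rho) / (1 - rho_hat)))
        return recon + self.beta * kl
```

In the stacked setting, `encode` outputs of one trained layer would serve as inputs to the next layer's pretraining, before supervised fine-tuning with the labeled samples.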

Cited by 38 publications (25 citation statements) | References 30 publications
“…where S 2 is the neuron count in the internal layer and β is the weight of sparse penalty term [53]. Fig 4 represents the heatmap of the candidate features selected by Adaptive WSO.…”
Section: E Sparse Autoencoder Neural Network
confidence: 99%
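The sparse penalty term referenced in this excerpt is commonly the KL-divergence form; a sketch in LaTeX consistent with the quoted symbols (S_2 internal-layer neurons, β the penalty weight), where ρ is the target activation and ρ̂_j the mean activation of hidden unit j — the exact form used by the citing paper is an assumption here:

```latex
J_{\text{sparse}}(W, b) = J(W, b)
  + \beta \sum_{j=1}^{S_2} \operatorname{KL}\!\left(\rho \,\middle\|\, \hat{\rho}_j\right),
\qquad
\operatorname{KL}\!\left(\rho \,\middle\|\, \hat{\rho}_j\right)
  = \rho \log \frac{\rho}{\hat{\rho}_j}
  + (1 - \rho) \log \frac{1 - \rho}{1 - \hat{\rho}_j}
```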
“…However, whether there were differences between each kind of behavior signal requires more study on characteristic distribution of each kind of sample signal generated by our network. The unidimensional Convolutional Auto-Encoder (CAE) can effectively show the deep characteristic differences between different samples [26,27,28]. Therefore, the output of the auto-encoder was set as a two-dimensional vector.…”
Section: Experimental Results and Analysis
confidence: 99%
“…The autoencoder can learn the latent representation of features in a reduced space in an unsupervised setting. The DSAE builds on the concept of an autoencoder adding the sparse penalty term, which constrains feature learning to achieve a concise representation of the input vector [25,26]. Furthermore, an autoencoder using non-linear activation functions and multiple layers possesses the ability to obtain non-linear relationships, unlike Principal Component Analysis (PCA).…”
Section: Proposed Methodology
confidence: 99%
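The greedy layer-wise construction behind a DSAE — single-layer networks superimposed so that each layer's hidden activations feed the next — can be sketched as below. This shows the stacking mechanics only: the per-layer sparse pretraining and the supervised fine-tuning are omitted, and all names and sizes are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def make_encoder(n_in, n_hidden, rng):
    """One encoder layer; in a real DSAE these weights would come
    from unsupervised sparse-autoencoder pretraining."""
    W = rng.normal(0.0, 0.1, (n_hidden, n_in))
    b = np.zeros(n_hidden)
    return lambda X: sigmoid(X @ W.T + b)

def stack_encoders(X, layer_sizes, seed=0):
    """Greedy layer-wise stacking: the hidden activations of each
    layer become the input of the next layer."""
    rng = np.random.default_rng(seed)
    H = X
    for n_hidden in layer_sizes:
        enc = make_encoder(H.shape[1], n_hidden, rng)
        H = enc(H)  # features passed upward through the stack
    return H
```

Unlike PCA, each layer applies a non-linear activation, so the composed mapping can capture non-linear structure in the input.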
“…Following [25], the input vector x defines sets of NetFlow data , which are later reconstructed into an dataset; therefore, . These NetFlow data are used as an input matrix X initially condensed to a lower dimension that is expressed by a set of one or more hidden layers .…”
Section: Proposed Methodology
confidence: 99%