2019
DOI: 10.1155/2019/8985657
Novel Model Based on Stacked Autoencoders with Sample‐Wise Strategy for Fault Diagnosis

Abstract: Autoencoders are used for fault diagnosis in chemical engineering. To improve their performance, experts have paid close attention to regularized strategies and the creation of new and effective cost functions. However, existing methods are modified on the basis of only one model. This study provides a new perspective for strengthening the fault diagnosis model, which attempts to gain useful information from a model (teacher model) and applies it to a new model (student model). It pretrains the teacher model b…

Cited by 2 publications (3 citation statements)
References 25 publications
“…Supervised finetuning of the trained network by backpropagation forms the third phase. Spectral features generated from this pre-training perform better than the traditional information extraction strategies (Kong & Yan, 2019).…”
Section: Autoencoders (mentioning)
confidence: 98%
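The statement above describes a three-phase workflow: unsupervised pre-training of the autoencoder, followed by supervised fine-tuning of the trained network by backpropagation. The sketch below is a minimal illustration of that workflow in PyTorch; the layer sizes, learning rates, epoch counts, and placeholder tensors are assumptions for illustration only and are not taken from the cited paper.

```python
# Minimal sketch: unsupervised autoencoder pre-training, then supervised
# fine-tuning of the encoder with backpropagation. All sizes and data below
# are hypothetical placeholders.
import torch
import torch.nn as nn

n_features, n_hidden, n_classes = 52, 20, 10          # assumed dimensions

encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.Sigmoid())
decoder = nn.Sequential(nn.Linear(n_hidden, n_features), nn.Sigmoid())

x_unlabeled = torch.rand(1024, n_features)             # placeholder unlabeled samples

# Phase 1-2: pre-train encoder/decoder to reconstruct the inputs.
pretrain_opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
mse = nn.MSELoss()
for _ in range(50):
    pretrain_opt.zero_grad()
    loss = mse(decoder(encoder(x_unlabeled)), x_unlabeled)
    loss.backward()
    pretrain_opt.step()

# Phase 3: supervised fine-tuning - attach a classifier head and
# backpropagate through the pre-trained encoder on labeled fault data.
classifier = nn.Linear(n_hidden, n_classes)
model = nn.Sequential(encoder, classifier)

x_labeled = torch.rand(256, n_features)                # placeholder labeled samples
y_labeled = torch.randint(0, n_classes, (256,))

finetune_opt = torch.optim.Adam(model.parameters(), lr=1e-4)
ce = nn.CrossEntropyLoss()
for _ in range(50):
    finetune_opt.zero_grad()
    loss = ce(model(x_labeled), y_labeled)
    loss.backward()
    finetune_opt.step()
```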
“…Generally, this problem prevented the training of very deep neural networks and was referred to as the vanishing gradient problem. This problem can be reduced considerably by the process of pre-training (Diehao Kong et al, 2019). Features learned by pre-training a deep autoencoder structure produce spectral features that outperform conventional feature extraction methods.…”
Section: Deep Autoencoders (mentioning)
confidence: 99%
“…Pre-training is based on the assumption that it is easier to train a shallow network instead of a deep network, which also reduces generalization error. Deep neural networks can easily jump out of local minima with the help of pretraining (Diehao Kong et al, 2019).…”
Section: Quality Measures (mentioning)
confidence: 99%
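The last two statements attribute the benefit of pre-training to optimizing shallow networks one at a time rather than a deep network all at once. A hedged sketch of greedy layer-wise pre-training under that assumption follows (again with placeholder sizes and data); the resulting stacked encoder would then be fine-tuned end-to-end as in the earlier sketch.

```python
# Hypothetical sketch of greedy layer-wise pre-training: each shallow
# autoencoder is trained on the codes of the previous encoder, so only a
# one-hidden-layer network is optimized at any time.
import torch
import torch.nn as nn

def pretrain_layer(x, in_dim, out_dim, epochs=50, lr=1e-3):
    enc = nn.Sequential(nn.Linear(in_dim, out_dim), nn.Sigmoid())
    dec = nn.Linear(out_dim, in_dim)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=lr)
    mse = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = mse(dec(enc(x)), x)
        loss.backward()
        opt.step()
    return enc

x = torch.rand(1024, 52)                 # placeholder process measurements
layer_sizes = [52, 30, 20, 10]           # assumed stack of hidden sizes

encoders, h = [], x
for in_dim, out_dim in zip(layer_sizes[:-1], layer_sizes[1:]):
    enc = pretrain_layer(h, in_dim, out_dim)
    with torch.no_grad():
        h = enc(h)                       # feed codes to the next layer
    encoders.append(enc)

# Stacked encoders, ready for supervised end-to-end fine-tuning.
deep_encoder = nn.Sequential(*encoders)
```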