2019
DOI: 10.1155/2019/8325218

Fault Diagnosis of Induction Motors Using Recurrence Quantification Analysis and LSTM with Weighted BN

Abstract: Motor fault diagnosis has gained much attention from academic research and industry to guarantee motor reliability. Generally, there are two major approaches to feature engineering for motor fault diagnosis: (1) traditional feature learning, which heavily depends on manual feature extraction and is often unable to discover the important underlying representations of faulty motors; (2) state-of-the-art deep learning techniques, which have somewhat improved diagnostic performance, while the intrinsic characte…

Cited by 37 publications (15 citation statements)
References 27 publications
“…Xiao et al [107] have employed LSTM with weighted batch normalization (BN) for detecting different faults in the induction motor.…”
Section: Recurrent Neural Network (RNN), mentioning
confidence: 99%
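The citing survey only names the technique, so the following is a minimal PyTorch sketch of the general LSTM-plus-BN layout it refers to. It assumes a plain BatchNorm1d layer as a stand-in; the specific weighting scheme of the paper's weighted BN is not reproduced here, and the class name, layer sizes, and data shapes are illustrative only.

```python
# Minimal sketch: generic LSTM classifier with a standard BatchNorm1d layer.
# The paper's "weighted BN" variant is NOT reproduced; this only shows where
# normalization sits between the recurrent features and the classifier head.
import torch
import torch.nn as nn

class LSTMFaultClassifier(nn.Module):
    def __init__(self, n_features: int, n_classes: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.bn = nn.BatchNorm1d(hidden)   # plain BN as a stand-in for weighted BN
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        last = out[:, -1, :]               # final hidden state per sequence
        return self.fc(self.bn(last))

# Illustrative usage: 8 motor-signal sequences, 200 time steps, 3 channels, 4 fault classes
logits = LSTMFaultClassifier(3, 4)(torch.randn(8, 200, 3))
```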
“…recurrence plots (RPs) and recurrence quantification analysis (RQA). The proposed method is applied to the analysis of short-time series and has been successfully used in the diagnostics of mechanical systems [24][25][26].…”
Section: Introduction, mentioning
confidence: 99%
“…The prediction model trained on one dataset can be used to predict the relevant target dataset by fine-tuning the TL strategy. In addition, the adaptive batch normalization (AdaBN) algorithm [22], [23] can also be used to improve the domain adaptability of the prediction model. Initially, batch normalization (BN) was used to help stochastic gradient descent (SGD) optimization by adjusting the distribution of each layer's output in the network [24].…”
Section: Introduction, mentioning
confidence: 99%
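For the AdaBN step described in that statement, a minimal sketch of the usual recipe follows: keep all learned weights fixed and simply re-estimate the BatchNorm running mean and variance with forward passes over target-domain batches. Here `model` and `target_loader` are hypothetical placeholders, not objects from the cited references.

```python
# Minimal sketch: re-estimate BN statistics on target-domain data (AdaBN-style),
# without updating any learned weights.
import torch
import torch.nn as nn

@torch.no_grad()
def adapt_bn_statistics(model: nn.Module, target_loader):
    # Reset BN running statistics and switch to cumulative averaging.
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
            m.reset_running_stats()
            m.momentum = None          # cumulative moving average over all batches
    model.train()                      # BN updates running stats only in train mode
    for xb, _ in target_loader:
        model(xb)                      # forward pass only; no gradients, no weight updates
    model.eval()
    return model
```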