2023
DOI: 10.17531/ein/162937

Research on Fault Diagnosis of Highway Bi-LSTM Based on Attention Mechanism

Abstract: Deep groove ball bearings are widely used in rotary machinery. Accurate bearing fault diagnosis is essential for equipment maintenance. Common deep learning methods usually ignore feature extraction in the reverse time-domain direction of the signal and pay little attention to key features. Based on the long short-term memory (LSTM) network, this study proposes an attention-based highway bidirectional long short-term memory (AHBi-LSTM) network for fault diagnosis based on the raw vibration signal. By increasi…
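As a rough illustration of the architecture the abstract describes, the following is a minimal sketch of an attention-based highway Bi-LSTM classifier in PyTorch. The layer sizes, the form of the highway gate, and the additive attention over time steps are assumptions made for illustration, not the authors' reference implementation.

# Hypothetical AHBi-LSTM sketch; layer sizes, highway gating and attention form are assumptions.
import torch
import torch.nn as nn

class AHBiLSTM(nn.Module):
    def __init__(self, input_size=1, hidden_size=64, num_classes=10):
        super().__init__()
        # Bidirectional LSTM reads the raw vibration sequence in both time directions.
        self.bilstm = nn.LSTM(input_size, hidden_size, batch_first=True, bidirectional=True)
        d = 2 * hidden_size
        # Highway connection: a gate blends the transformed feature with the identity path.
        self.transform = nn.Linear(d, d)
        self.gate = nn.Linear(d, d)
        # Additive attention scores one weight per time step.
        self.attn = nn.Linear(d, 1)
        self.fc = nn.Linear(d, num_classes)

    def forward(self, x):                        # x: (batch, time, input_size)
        h, _ = self.bilstm(x)                    # (batch, time, 2*hidden)
        t = torch.relu(self.transform(h))
        g = torch.sigmoid(self.gate(h))
        h = g * t + (1 - g) * h                  # highway: gated mix of transform and identity
        w = torch.softmax(self.attn(h), dim=1)   # attention weights over time steps
        ctx = (w * h).sum(dim=1)                 # weighted sum -> (batch, 2*hidden)
        return self.fc(ctx)

logits = AHBiLSTM()(torch.randn(8, 1024, 1))     # e.g. 8 raw vibration segments of length 1024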


Cited by 8 publications (5 citation statements)
References 31 publications
“…To demonstrate the superiority of our proposed model, we conducted comparisons with the Bi-LSTM [36], CNN_1D [37], and the Swin Transformer (Swin_T) [38]. As shown in Figure 9, we plotted the ROC curves for each method on the CWRU dataset, and the area under the curve indicates that the transfer-learning-based bearing diagnosis accuracy of our proposed method is higher than that of the other models.…”
Section: Case Western Reserve University (CWRU) Bearing Dataset Exper… (mentioning)
confidence: 99%
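The comparison described in this excerpt (per-model ROC curves on the CWRU test split, ranked by area under the curve) could be reproduced along the following lines with scikit-learn. The micro-averaging choice, class count, and variable names are assumptions for illustration, not details taken from the cited paper.

# Sketch of a multi-class ROC/AUC comparison; micro-averaging and names are assumptions.
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc
from sklearn.preprocessing import label_binarize

def plot_micro_roc(y_true, y_score, n_classes, label):
    # y_true: (n,) integer fault labels; y_score: (n, n_classes) predicted probabilities.
    y_bin = label_binarize(y_true, classes=list(range(n_classes)))
    fpr, tpr, _ = roc_curve(y_bin.ravel(), y_score.ravel())  # micro-averaged ROC
    plt.plot(fpr, tpr, label=f"{label} (AUC = {auc(fpr, tpr):.3f})")

# Hypothetical usage: compare several models' scores on the same CWRU test split.
# for name, scores in {"Bi-LSTM": s1, "CNN_1D": s2, "Swin_T": s3, "proposed": s4}.items():
#     plot_micro_roc(y_test, scores, n_classes=10, label=name)
# plt.xlabel("False positive rate"); plt.ylabel("True positive rate"); plt.legend(); plt.show()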
“…where σ is the sigmoid activation function, h_{t−1} is the hidden layer state vector at time t−1, x_t is the input vector at time t, b is the bias of the corresponding gate, and w is the weight of the corresponding gate [5].…”
Section: Long Short-Term Memory (LSTM) (mentioning)
confidence: 99%
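The definitions quoted above correspond to the standard LSTM gate form gate_t = σ(w·[h_{t−1}, x_t] + b), which the forget, input, and output gates all share with their own weights and biases. A minimal numeric sketch follows; the shapes and the concatenation layout are assumptions.

# One LSTM gate: sigmoid(w · [h_prev, x_t] + b); shapes are illustrative assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_gate(w, b, h_prev, x_t):
    # w: (hidden, hidden + input), b: (hidden,), h_prev: (hidden,), x_t: (input,)
    return sigmoid(w @ np.concatenate([h_prev, x_t]) + b)

hidden, inp = 4, 3
w = np.random.randn(hidden, hidden + inp) * 0.1
gate = lstm_gate(w, np.zeros(hidden), np.zeros(hidden), np.random.randn(inp))  # values in (0, 1)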
“…Many experiments have demonstrated that BN can significantly reduce the number of training iterations while improving final model performance. BN is already a necessary part of many top-level architectures such as ResNet [34] and Inception V3 [35].…”
Section: Batch Normalization (mentioning)
confidence: 99%
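For concreteness, batch normalization typically sits between a convolution (or linear) layer and its activation. The block below is an illustrative PyTorch sketch with assumed sizes, not code from the cited architectures.

# Illustrative placement of BatchNorm in a 1-D convolutional block (assumed sizes).
import torch
import torch.nn as nn

block = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=7, padding=3),
    nn.BatchNorm1d(16),   # normalizes each channel over the batch, stabilizing training
    nn.ReLU(),
)
out = block(torch.randn(8, 1, 1024))  # (batch, channels, signal length)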