2023
DOI: 10.1109/tii.2022.3217758
Dual-Thread Gated Recurrent Unit for Gear Remaining Useful Life Prediction

Cited by 30 publications (18 citation statements) · References 32 publications
“…However, the CNN struggles to mine the temporal dependence in equipment full-life data. Therefore, Zhou et al. [30] proposed a new dual-thread gated recurrent unit. This method not only addresses the CNN's insensitivity to the time dependence between data points, but also improves the prediction ability for complex degradation trajectories.…”
Section: Related Work
confidence: 99%
“…Accordingly, some RNN variants have been investigated to solve the above problems and further improve network performance [24][25][26], among which the GRU has received widespread attention. Zhou et al. [27] proposed a novel dual-thread GRU architecture to extract both static and non-static information from input data and capture the hidden-state difference between two consecutive time steps. Nie and Xie [28] developed an innovative GRU-based framework for fault diagnosis of rotating machinery with noisy labels.…”
Section: Introduction
confidence: 99%
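The citation statement above gives only the high-level idea of the dual-thread design. The sketch below shows one plausible reading: one GRU thread reads the raw (static) sequence while a second thread reads first-order input differences as a proxy for the "hidden-state difference between two consecutive time steps," and a fusion layer combines the two. The thread split, the difference signal, and the fusion layer are illustrative assumptions, not the published equations of Zhou et al. [27].

```python
import torch
import torch.nn as nn

class DualThreadGRUCell(nn.Module):
    """Illustrative two-thread GRU-style cell (assumed structure).

    Thread A updates on the raw input (a "static" view); thread B
    updates on the difference between consecutive inputs (a
    "non-static" trend view). A linear layer fuses the two hidden
    states. This is a sketch, not the authors' architecture.
    """

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.thread_a = nn.GRUCell(input_size, hidden_size)  # raw sequence thread
        self.thread_b = nn.GRUCell(input_size, hidden_size)  # difference thread
        self.fuse = nn.Linear(2 * hidden_size, hidden_size)  # combine both threads

    def forward(self, x_t, x_prev, h_a, h_b):
        h_a = self.thread_a(x_t, h_a)           # static thread update
        h_b = self.thread_b(x_t - x_prev, h_b)  # non-static (trend) thread update
        h = torch.tanh(self.fuse(torch.cat([h_a, h_b], dim=-1)))
        return h, h_a, h_b

# Usage over a toy degradation sequence: (batch, time, features).
cell = DualThreadGRUCell(input_size=14, hidden_size=32)
x = torch.randn(8, 100, 14)
h_a = x.new_zeros(8, 32)
h_b = x.new_zeros(8, 32)
x_prev = torch.zeros_like(x[:, 0])
for t in range(x.size(1)):
    h, h_a, h_b = cell(x[:, t], x_prev, h_a, h_b)
    x_prev = x[:, t]
```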
“…The gated recurrent unit (GRU) modifies the network structure by adding gating units that regulate information transfer in the RNN; with these gates, the network can learn dependencies across relatively long spans without vanishing or exploding gradients [8,9]. Long short-term memory (LSTM) was proposed to let the memory unit selectively retain performance-degradation information of differing value by defining various gating functions [10][11][12].…”
Section: Introduction
confidence: 99%
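For concreteness, the gating mechanism the passage alludes to can be written out directly. The sketch below is the textbook GRU cell in Cho et al.'s convention, not the variant from any cited paper; all names are illustrative.

```python
import torch
import torch.nn as nn

class MinimalGRUCell(nn.Module):
    """Textbook GRU cell, written out so the gating is explicit.

    Standard equations only: z is the update gate, r the reset gate.
    """

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.w_z = nn.Linear(input_size + hidden_size, hidden_size)  # update gate
        self.w_r = nn.Linear(input_size + hidden_size, hidden_size)  # reset gate
        self.w_h = nn.Linear(input_size + hidden_size, hidden_size)  # candidate state

    def forward(self, x_t: torch.Tensor, h_prev: torch.Tensor) -> torch.Tensor:
        xh = torch.cat([x_t, h_prev], dim=-1)
        z = torch.sigmoid(self.w_z(xh))  # how much new information to admit
        r = torch.sigmoid(self.w_r(xh))  # how much old state feeds the candidate
        h_cand = torch.tanh(self.w_h(torch.cat([x_t, r * h_prev], dim=-1)))
        # Gated blend of old and candidate state; this additive path is
        # what eases vanishing/exploding gradients over long spans.
        return (1 - z) * h_prev + z * h_cand

# Usage: one step on a batch of 8 samples with 14 features.
cell = MinimalGRUCell(input_size=14, hidden_size=32)
h = cell(torch.randn(8, 14), torch.zeros(8, 32))
```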