2023
DOI: 10.1088/1361-6501/acb0e9
Remaining useful life prediction of bearings based on self-attention mechanism, multi-scale dilated causal convolution, and temporal convolution network

Abstract: Effective remaining useful life (RUL) prediction of bearings is essential for the predictive maintenance of rotating machinery. However, the effectiveness of many existing RUL prediction methods depends on expert experience and signal processing algorithms, which limits their application in real-life scenarios. This study proposes a novel end-to-end deep learning framework consisting of a multi-scale attention-based dilated causal convolutional (MADCC) module and a multi-layer temporal convolu…

Cited by 23 publications (14 citation statements)
References 36 publications
“…To overcome the above problems, TCN uses dilated convolutions so that a network with a finite number of layers attains a receptive field of exponential size [25]. For one-dimensional time series input…”
Section: Dilated Convolutions (mentioning)
confidence: 99%
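The exponential receptive-field growth the quote describes can be sketched in plain numpy. This is a hypothetical minimal dilated causal convolution, not the cited paper's implementation; the function names are illustrative:

```python
import numpy as np

def dilated_causal_conv1d(x, w, dilation):
    """Causal 1D convolution: output[t] depends only on x[t], x[t-d], x[t-2d], ..."""
    k = len(w)
    pad = (k - 1) * dilation          # left-pad with zeros so no future leaks in
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([sum(w[i] * xp[t + pad - i * dilation] for i in range(k))
                     for t in range(len(x))])

def receptive_field(k, num_layers):
    """Receptive field of stacked layers with kernel size k and dilations 1, 2, 4, ..."""
    return 1 + (k - 1) * sum(2 ** l for l in range(num_layers))

print(receptive_field(3, 4))  # 1 + 2*(1+2+4+8) = 31
```

With dilations doubling per layer, four layers of kernel size 3 already cover 31 time steps, which is the "exponential size with a finite number of layers" property the citing paper refers to.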
“…The attention mechanism is a structure specifically designed to handle the interdependence between input and output data, which can automatically learn and calculate the weight of each input data's contribution to the output data [35]. In the field of deep learning, attention mechanisms have become an important technology widely used in various tasks, especially in natural language processing.…”
Section: MHA (mentioning)
confidence: 99%
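The weight computation the quote describes (each input's contribution to the output) can be sketched as scaled dot-product attention in numpy; the function names are illustrative assumptions, not from the paper:

```python
import numpy as np

def attention_weights(q, K):
    """Softmax over scaled dot-products: one weight per input row of K."""
    scores = K @ q / np.sqrt(len(q))
    e = np.exp(scores - scores.max())   # shift for numerical stability
    return e / e.sum()

def attend(q, K, V):
    """Output = weighted sum of value rows, weights learned from query/key similarity."""
    return attention_weights(q, K) @ V
```

The weights are nonnegative and sum to one, so the output is a convex combination of the inputs, weighted by their relevance to the query.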
“…Table 5 presents the influence of different cores and ranks on prediction accuracy. Employing the TT-SVD in Algorithm 1, when the input data consists of 14 dimensions, setting the approximation accuracy ε = 0.05 enables us to determine the ranges of TT rank and TT core that impact model parameters and training time, which are (1,4) and (1,5), respectively. Based on the experimental findings, it can be concluded that Tensor-Train decomposition significantly contributes to parameter reduction and shorter training times.…”
Section: Ablation Analysis (mentioning)
confidence: 99%
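The quoted ablation relies on TT-SVD with a truncation tolerance ε. A minimal numpy sketch, assuming the standard sequential-SVD formulation of Tensor-Train decomposition (not the citing paper's exact Algorithm 1), shows how ε bounds the error while the cores hold far fewer parameters than the full tensor:

```python
import numpy as np

def tt_svd(T, eps=0.05):
    """Decompose a d-way tensor into TT cores via sequential truncated SVDs."""
    d, shape = T.ndim, T.shape
    delta = eps * np.linalg.norm(T) / np.sqrt(d - 1)  # per-step error budget
    cores, r = [], 1
    C = T.reshape(r * shape[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        tails = np.sqrt(np.cumsum(s[::-1] ** 2))[::-1]  # error if s[i:] dropped
        keep = np.nonzero(tails <= delta)[0]
        rk = max(1, int(keep[0])) if keep.size else len(s)
        cores.append(U[:, :rk].reshape(r, shape[k], rk))
        C = (np.diag(s[:rk]) @ Vt[:rk]).reshape(rk * shape[k + 1], -1)
        r = rk
    cores.append(C.reshape(r, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into a full tensor."""
    out = cores[0]
    for G in cores[1:]:
        out = np.tensordot(out, G, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))
```

For a low-rank tensor, the summed core sizes are much smaller than the full tensor, which is the parameter reduction the quote reports.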
“…Data-driven modelling methods utilize available data to unveil concealed patterns, facilitating accurate time series forecasting [5]. Time series forecasting techniques can be broadly categorized into traditional machine learning and deep learning models.…”
Section: Introduction (mentioning)
confidence: 99%