2023
DOI: 10.1088/1361-6501/ad0705

A novel vision transformer network for rolling bearing remaining useful life prediction

Aijun Hu,
Yancheng Zhu,
Suixian Liu
et al.

Abstract: Accurate prediction of remaining useful life (RUL) has become a key and extremely challenging problem. Due to the limitations of classical convolutional neural network (CNN) and recurrent neural network (RNN) structures, the attention mechanism has been introduced to improve feature representation of long-term bearing degradation data. Transformer networks based on the attention mechanism have been successfully applied in many fields and are recognized as an excellent class of deep learning models. In this pa…
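The abstract describes an attention-based (transformer) network applied directly to long-term bearing degradation data. As a rough illustration only, the PyTorch sketch below shows how a vision-transformer-style regressor could map a raw vibration window to a normalized RUL value; the patch length, embedding width, layer counts, and [CLS]-token head are assumptions for illustration, not the architecture proposed in the paper.

# Minimal sketch of a vision-transformer-style RUL regressor (illustrative only;
# hyperparameters below are assumptions, not the paper's architecture).
import torch
import torch.nn as nn

class ViTRULRegressor(nn.Module):
    def __init__(self, signal_len=2560, patch_len=128, d_model=64,
                 n_heads=4, n_layers=2):
        super().__init__()
        assert signal_len % patch_len == 0
        n_patches = signal_len // patch_len
        # Split the vibration window into non-overlapping "patches" and embed them.
        self.patch_embed = nn.Linear(patch_len, d_model)
        # Learnable [CLS] token and positional embeddings, as in a standard ViT.
        self.cls_token = nn.Parameter(torch.zeros(1, 1, d_model))
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches + 1, d_model))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Regression head mapping the [CLS] representation to a normalized RUL value.
        self.head = nn.Sequential(nn.LayerNorm(d_model), nn.Linear(d_model, 1))

    def forward(self, x):                                  # x: (batch, signal_len)
        b = x.size(0)
        patches = x.view(b, -1, self.patch_embed.in_features)
        tokens = self.patch_embed(patches)
        cls = self.cls_token.expand(b, -1, -1)
        tokens = torch.cat([cls, tokens], dim=1) + self.pos_embed
        encoded = self.encoder(tokens)
        return self.head(encoded[:, 0]).squeeze(-1)        # predicted normalized RUL

# Example: one batch of raw vibration windows -> RUL estimates.
model = ViTRULRegressor()
rul = model(torch.randn(8, 2560))
print(rul.shape)   # torch.Size([8])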

Cited by 6 publications (1 citation statement)
References 35 publications
“…Ren et al [163] argued that time series of different lengths have their own characteristics and proposed a dynamic-length transformer that adaptively learns these characteristics for machinery RUL prediction tasks. In addition, [198][199][200][201] developed various advanced prediction networks based on the transformer model. However, transformers require extensive computational resources and memory because their self-attention mechanism must process the entire sequence simultaneously.…”
Section: Cutting-edge Methods in DL
Mentioning confidence: 99%
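The closing remark about computational cost follows from the quadratic size of the self-attention score matrix. The back-of-the-envelope estimate below (assumed head count and float32 storage; not figures from the cited works) shows how the memory held by the raw attention matrices alone grows with sequence length:

# Rough illustration of why full self-attention is expensive for long degradation
# sequences: each layer materializes one (seq_len x seq_len) score matrix per head.
def attention_matrix_mb(seq_len, n_heads=8, bytes_per_elem=4):
    # Memory for the attention score matrices of a single layer, one sample, float32.
    return n_heads * seq_len * seq_len * bytes_per_elem / 1e6

for n in (512, 2048, 8192):
    print(f"seq_len={n:5d}: ~{attention_matrix_mb(n):8.1f} MB per layer per sample")
# seq_len=  512: ~     8.4 MB per layer per sample
# seq_len= 2048: ~   134.2 MB per layer per sample
# seq_len= 8192: ~  2147.5 MB per layer per sample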