2022
DOI: 10.1016/j.measurement.2021.110242

Fault diagnosis for small samples based on attention mechanism

Cited by 124 publications (54 citation statements) · References 30 publications
“…Although some papers have paid attention to fault diagnosis under limited fault samples, [19][20][21][22] the existing fault-diagnosis methods for limited samples generally need to introduce additional techniques, such as generative adversarial networks or transfer learning, into the classification model. These extra steps make such methods harder to apply to practical fault diagnosis.…”
Section: Introduction
confidence: 99%
“…In addition, this shows that the FDMTF–MBRCNN method can achieve good results on the small-sample problem. According to Zhang et al., 50 if the proportion of the training set is less than 50%, the task can be regarded as a small-sample problem. Moreover, the diagnosis results reported in different studies on the UoC gearbox dataset fluctuate between 0.929 and 0.996, all without exception below the recognition accuracy of the proposed FDMTF–MBRCNN method.…”
Section: Case Study
confidence: 99%
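The excerpt above cites a simple quantitative criterion: a task counts as a small-sample problem when the training split is below 50% of the data. A minimal sketch of that check, with a hypothetical helper name and a configurable threshold (both my own choices, not from the cited papers):

```python
def is_small_sample(n_train: int, n_total: int, threshold: float = 0.5) -> bool:
    """Return True when the training split falls below the given proportion.

    Illustrates the cited rule of thumb (training proportion < 50%
    => small-sample problem); the function name and threshold
    parameter are illustrative assumptions.
    """
    if n_total <= 0:
        raise ValueError("n_total must be positive")
    return (n_train / n_total) < threshold


if __name__ == "__main__":
    # 300 of 1000 samples used for training -> 30% < 50%, small-sample regime
    print(is_small_sample(300, 1000))  # True
    # 800 of 1000 samples used for training -> 80% >= 50%, not small-sample
    print(is_small_sample(800, 1000))  # False
```

Under this criterion, the commonly reported experiments that train on 70–80% of a dataset would not qualify as small-sample settings.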
“…Based on these two practical needs, we conduct experiments to verify the classification performance of the proposed model under noisy and small-data settings. We compare our method with other SOTA methods: DCA-BiGRU [25], AResNet [26], RNN-WDCNN [12], MA1DCNN [11], and WDCNN [10]. Because the code for all these counterpart models is publicly available, we replicate them from their released code and use the same data preprocessing.…”
Section: Classification Performance
confidence: 99%