2022
DOI: 10.1007/s40430-022-03950-9

Deep dynamic adaptation network: a deep transfer learning framework for rolling bearing fault diagnosis under variable working conditions

Cited by 9 publications (4 citation statements)
References 48 publications
“…Table 2 provides detailed information on the four operating conditions of the test platform. To obtain sufficient samples, the vibration signal recordings were segmented into short pieces with 2048 data points (0.1707 s) using overlapping sampling [42]. There are 1024 overlapping data points between two adjacent pieces.…”
Section: Dataset Description
Citation type: mentioning
confidence: 99%
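The overlapping-sampling step described in this excerpt can be illustrated with a short sketch. The function name, the placeholder signal, and the 12 kHz sampling rate (inferred from 2048 points ≈ 0.1707 s) are assumptions for illustration, not details taken from the cited paper.

```python
import numpy as np

def segment_signal(signal: np.ndarray, win_len: int = 2048, hop: int = 1024) -> np.ndarray:
    """Split a 1-D vibration recording into overlapping windows.

    With win_len=2048 and hop=1024, adjacent windows share 1024 data points
    (50% overlap), matching the sampling scheme quoted above.
    """
    n_windows = (len(signal) - win_len) // hop + 1
    return np.stack([signal[i * hop : i * hop + win_len] for i in range(n_windows)])

# Assumed sampling rate: 2048 / 12000 Hz ≈ 0.1707 s per window.
fs = 12_000
recording = np.random.randn(10 * fs)      # placeholder 10-second recording
segments = segment_signal(recording)      # shape: (n_windows, 2048)
```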
“…To substantiate the superior performance of the DTCNN-SJM framework and the advantages of the proposed SJM, we constructed a variety of fault diagnosis models for comparative analysis. These models utilize both conventional and widely recognized methods such as KNN, SVM, Softmax, JDA, TCA, BDA, GFK, MEDA, JGSA, TJM, CORAL [52], EasyTL [53], JPDA [54], MEKT [55], STL [56] and SA [57]. Accordingly, Table 6 lists 18 comparative models built from these methods, DTCNN and SJM.…”
Section: (3) Comparative Experiments
Citation type: mentioning
confidence: 99%
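For the shallow baselines named in this excerpt (e.g. KNN and SVM), a minimal cross-condition evaluation loop might look like the sketch below. The feature matrices are random placeholders; the actual features, hyperparameters, and the transfer-learning baselines (JDA, TCA, etc.) come from the cited works and are not reproduced here.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Placeholder features: fit on labelled source-condition data,
# then score on data from a different (target) working condition.
rng = np.random.default_rng(0)
Xs, ys = rng.normal(size=(800, 256)), rng.integers(0, 4, size=800)   # source domain
Xt, yt = rng.normal(size=(800, 256)), rng.integers(0, 4, size=800)   # target domain

baselines = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf", C=1.0),
}
for name, clf in baselines.items():
    clf.fit(Xs, ys)
    print(name, accuracy_score(yt, clf.predict(Xt)))
```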
“…The number of iterations T, t. Output: Domain-adaptive classifier f. 1: Maximize the variance matrix of the source and target data by equation (4), and utilize the label information of the source data by equation (5) to minimize the within-class scatter matrix and maximize the between-class scatter matrix. 2: Align the subspaces of $D_s$ and $D_t$ by equation (10), and then obtain subspace-transferable features via $Z_s = X_s^T A\Phi$ and $Z_t = X_t^T B$. 3: Train a base classifier using $Z_s$, then exploit it to predict $Z_t$ and obtain its pseudo labels $y_t^*$.…”
Section: Pseudo-label Correction
Citation type: mentioning
confidence: 99%
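The pseudo-labelling step quoted above (project both domains into aligned subspaces, train a base classifier on the source projection, and predict pseudo labels for the target) can be sketched as follows. Equations (4), (5) and (10) and the matrices A, B, Φ belong to the citing paper and are not reproduced here; this sketch substitutes plain PCA bases and standard subspace alignment (A^T B), with a 1-NN base classifier, purely for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def subspace_align_pseudo_labels(Xs, ys, Xt, dim=30):
    """PCA-based subspace alignment followed by pseudo-labelling (illustrative stand-in).

    A and B are PCA bases of the source and target data, the source basis is
    aligned to the target via A.T @ B, and a 1-NN base classifier assigns
    pseudo labels to the projected target features.
    """
    A = PCA(n_components=dim).fit(Xs).components_.T   # source basis, shape (d, dim)
    B = PCA(n_components=dim).fit(Xt).components_.T   # target basis, shape (d, dim)
    Zs = Xs @ A @ (A.T @ B)                           # aligned source features
    Zt = Xt @ B                                       # target features
    clf = KNeighborsClassifier(n_neighbors=1).fit(Zs, ys)
    return clf.predict(Zt)                            # pseudo labels for the target data

# Hypothetical usage with random stand-in features:
rng = np.random.default_rng(0)
Xs, ys = rng.normal(size=(500, 128)), rng.integers(0, 4, size=500)
Xt = rng.normal(size=(500, 128))
pseudo_labels = subspace_align_pseudo_labels(Xs, ys, Xt)
```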
“…However, obtaining high-quality labeled data from various working scenarios is often impractical [9]. Firstly, rotating machinery typically operates under healthy conditions, resulting in far more healthy data than failure data [10]. Additionally, manually labeling data from different working conditions is time-consuming and costly.…”
Section: Introduction
Citation type: mentioning
confidence: 99%