2022
DOI: 10.3390/e24081087
A Novel Deep Transfer Learning Method for Intelligent Fault Diagnosis Based on Variational Mode Decomposition and Efficient Channel Attention

Abstract: In recent years, deep learning has been applied to intelligent fault diagnosis with great success. However, deep-learning-based fault diagnosis methods assume that the training and test datasets are obtained under the same operating conditions, a condition that can hardly be met in real application scenarios. Additionally, signal preprocessing technology also has an important influence on intelligent fault diagnosis. How to effectively relate signal preprocessing to a transfer diagnostic …

Cited by 2 publications (2 citation statements)
References 40 publications
“…Efficient channel attention requires only a few parameters to produce remarkable results [42]. Due to the superior performance of ECA-Net, many studies have been conducted to use it for the adaptation of channel feature weights [43,44]. However, the ECA-Net input is mainly two-dimensional features, and this study improves its input structure for channel weight adaptation of one-dimensional features.…”
Section: ECA-Net
confidence: 99%
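The quoted passage describes adapting ECA-Net, which normally gates channels of two-dimensional feature maps, to one-dimensional features. As a minimal sketch of that idea (not the paper's implementation): pool each channel over its length, apply a local cross-channel 1-D convolution, pass the result through a sigmoid, and re-weight the channels. The uniform kernel below is a stand-in for the learned kernel an actual ECA module would use.

```python
import numpy as np

def eca_1d(x, k=3):
    """Efficient-channel-attention-style gate for 1-D features.

    x: array of shape (C, L) -- C channels, L time steps.
    k: odd kernel size of the cross-channel 1-D convolution
       (a uniform kernel here stands in for learned weights).
    """
    # Squeeze: global average pooling over the length dimension.
    y = x.mean(axis=1)                                       # shape (C,)
    # Excite: local cross-channel interaction via a 1-D convolution.
    pad = k // 2
    y_pad = np.pad(y, pad, mode="edge")
    conv = np.convolve(y_pad, np.ones(k) / k, mode="valid")  # shape (C,)
    # Sigmoid gate -> one weight per channel.
    w = 1.0 / (1.0 + np.exp(-conv))
    # Re-weight each channel of the input.
    return x * w[:, None]
```

Because the convolution only mixes a channel with its k − 1 neighbors, the gate needs just k parameters in the learned version, which is the efficiency the excerpt refers to.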
“…The experiments were all performed under the CWRU dataset A. The number of convolutional kernels from large to small was established, and the effect of using a larger number of convolutional kernels was compared according to the study of Liu et al [44]. The number of channels and the size of the convolutional kernel are initially set for common convolutional kernel sizes.…”
Section: Influence of Wide Filters and Convolution Channel Size
confidence: 99%
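The kernel-size study quoted above arranges convolution kernels from large to small, a common pattern in 1-D fault-diagnosis CNNs where a wide first kernel captures low-frequency structure. The schedule and helper below are a hypothetical illustration of such a layout (the specific channel counts, kernel sizes, and strides are assumptions, not the paper's values), tracing how the signal length shrinks layer by layer.

```python
# Hypothetical layer schedule: a wide first kernel followed by
# progressively smaller ones. Tuples are (out_channels, kernel, stride).
LAYERS = [
    (16, 64, 8),   # wide first kernel on the raw 1-D signal
    (32, 16, 2),
    (64, 5, 1),
    (64, 3, 1),
]

def conv1d_out_len(length, kernel, stride, padding=0):
    """Output length of a 1-D convolution (floor-division rule)."""
    return (length + 2 * padding - kernel) // stride + 1

def trace_lengths(input_len, layers):
    """Sequence length after each convolution layer in the schedule."""
    lengths = []
    for _, k, s in layers:
        input_len = conv1d_out_len(input_len, k, s)
        lengths.append(input_len)
    return lengths
```

For a 2048-sample input segment this schedule yields lengths 249, 117, 113, 111, showing how the wide, strided first layer does most of the downsampling.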