2020
DOI: 10.1109/tii.2019.2917233

Intelligent Fault Diagnosis for Rotary Machinery Using Transferable Convolutional Neural Network

Abstract: Deep neural networks achieve very competitive results in mechanical fault diagnosis. However, training deep models requires high computing power, and the ability of deep architectures to extract discriminative features for decision making often suffers from a lack of sufficient training data. In this paper, a Transferable Convolutional Neural Network (TCNN) is proposed to improve the learning of target tasks. Firstly, a one-dimensional CNN is constructed and pre-trained on a large source-task datase…

Cited by 250 publications (69 citation statements)
References 34 publications
“…The local connections and parameter sharing in the CNN reduce the number of parameters, which greatly reduces training complexity and overfitting. Its weight sharing also makes the CNN tolerant of translations, while the down-sampling in the pooling layer further reduces the number of outputs and makes the model tolerant to mild deformations, improving its generalization ability [10,11,12]. As shown in Figure 1, in each feature extraction layer the feature map is convolved with multiple convolution kernels, and the feature extraction layers are connected through bias calculation, an activation function, and a pooling operation.…”
Section: Basic Theory
confidence: 99%
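To make the convolution-bias-activation-pooling pipeline in the quoted passage concrete, here is a minimal sketch of one 1D-CNN feature-extraction layer in PyTorch. The channel counts, kernel width, and signal length are illustrative assumptions, not the TCNN's actual configuration.

```python
import torch
import torch.nn as nn

feature_layer = nn.Sequential(
    # 16 kernels of width 64 slide over the raw signal; each kernel's
    # weights are shared across all positions (local connection + sharing).
    nn.Conv1d(in_channels=1, out_channels=16, kernel_size=64, stride=1, bias=True),
    nn.ReLU(),                    # activation function
    nn.MaxPool1d(kernel_size=2),  # down-sampling: fewer outputs, mild-shift tolerance
)

x = torch.randn(8, 1, 2048)       # batch of 8 single-channel signals, 2048 samples each
features = feature_layer(x)
print(features.shape)             # torch.Size([8, 16, 992]) after convolution and pooling
```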
“…Transfer learning is considered to have great potential for completing different but similar tasks from the source domain to the target domain [25,26]. Parameter transfer, the most widely applied transfer learning technique, aims to provide valuable parameter knowledge to the target model from a well-pre-trained (source) model [24,27].…”
Section: Introduction
confidence: 99%
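A hedged sketch of the parameter-transfer pattern this quote describes: a target model is initialized from a pre-trained source model by copying every parameter whose name and shape match, leaving the mismatched classifier head at its fresh initialization. The `build_model` helper, layer sizes, and checkpoint file name are hypothetical; the paper's actual networks are not reproduced here.

```python
import torch
import torch.nn as nn

def build_model(num_classes: int) -> nn.Module:
    # Illustrative 1D-CNN; source and target networks share this backbone.
    return nn.Sequential(
        nn.Conv1d(1, 16, kernel_size=64), nn.ReLU(), nn.MaxPool1d(2),
        nn.Flatten(),
        nn.Linear(16 * 992, num_classes),
    )

source_model = build_model(num_classes=10)
# source_model.load_state_dict(torch.load("source_pretrained.pt"))  # hypothetical checkpoint

target_model = build_model(num_classes=4)  # different label set in the target task

# Copy every parameter whose name and shape match; the classifier head
# (different output size) keeps its random initialization.
src_state = source_model.state_dict()
tgt_state = target_model.state_dict()
transferred = {k: v for k, v in src_state.items()
               if k in tgt_state and v.shape == tgt_state[k].shape}
tgt_state.update(transferred)
target_model.load_state_dict(tgt_state)
```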
“…With well-located initial parameters and a small number of target samples, the target model can be quickly fine-tuned to solve the target task. Since 2017, the transfer diagnosis performance of CNNs integrated with parameter transfer has been demonstrated by a few case studies [24,26]. Thus, CNN and parameter transfer can be investigated for fault diagnosis of rotor-bearing systems under different working conditions with limited samples.…”
Section: Introduction
confidence: 99%
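The fine-tuning step the quote refers to can be sketched as follows: the transferred feature layers are frozen and only the classifier head is trained on a small labeled target set. The model definition, data shapes, and hyperparameters are illustrative assumptions, not the cited study's settings.

```python
import torch
import torch.nn as nn

# Illustrative target network: feature extractor (modules 0-3) + classifier head (module 4).
# In practice the feature layers would already hold transferred source parameters.
target_model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=64), nn.ReLU(), nn.MaxPool1d(2),
    nn.Flatten(),
    nn.Linear(16 * 992, 4),
)

for name, param in target_model.named_parameters():
    param.requires_grad = name.startswith("4.")  # freeze features, train head only

optimizer = torch.optim.Adam(
    (p for p in target_model.parameters() if p.requires_grad), lr=1e-3)
criterion = nn.CrossEntropyLoss()

small_x = torch.randn(32, 1, 2048)     # a few labeled target-domain signals
small_y = torch.randint(0, 4, (32,))

for _ in range(20):                    # brief fine-tuning loop
    optimizer.zero_grad()
    loss = criterion(target_model(small_x), small_y)
    loss.backward()
    optimizer.step()
```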
“…Deep transfer learning has become a new research direction. The basic idea is to add an adaptive layer between the feature extraction and classification layers so that the source- and target-domain data follow more similar distributions [6][7][8][9][10]. Moreover, deep transfer learning can automatically extract more expressive features: it combines the self-learned feature extraction of deep learning with transfer learning to acquire "new knowledge", which helps solve the small-sample problem in few-shot learning [11].…”
Section: Introduction
confidence: 99%
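A minimal sketch of the adaptive-layer idea: a shared bottleneck between the feature extractor and the classifier, trained with a domain-discrepancy penalty so that source and target features follow more similar distributions. The simple mean-difference (linear-MMD) penalty, the loss weight, and all layer sizes here are assumptions; published methods typically use kernel MMD or related metrics.

```python
import torch
import torch.nn as nn

features = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=64), nn.ReLU(), nn.MaxPool1d(2), nn.Flatten())
adapt = nn.Linear(16 * 992, 128)   # adaptive layer between features and classifier
classifier = nn.Linear(128, 4)

def linear_mmd(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Squared distance between the mean embeddings of the two domains.
    return (a.mean(dim=0) - b.mean(dim=0)).pow(2).sum()

src_x = torch.randn(16, 1, 2048)
src_y = torch.randint(0, 4, (16,))
tgt_x = torch.randn(16, 1, 2048)   # unlabeled target-domain batch

src_h = adapt(features(src_x))
tgt_h = adapt(features(tgt_x))
# Classification loss on labeled source data plus a weighted discrepancy
# penalty that pulls the two feature distributions together.
loss = nn.functional.cross_entropy(classifier(src_h), src_y) + 0.5 * linear_mmd(src_h, tgt_h)
loss.backward()                    # gradients flow into the adaptive layer
```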