Deep Residual Shrinkage Networks for Fault Diagnosis
2020
DOI: 10.1109/tii.2019.2943898

Cited by 889 publications (331 citation statements)
References 22 publications
“…(1) Enough source-domain samples are used to pre-train a source modified CNN by minimizing the cross-entropy error between the true and predicted labels according to Eqs. (7)-(9). (2) A target modified CNN is prepared with the same structure and hyperparameters as the source model.…”
Section: B. Construction of Modified Transfer CNN
Mentioning confidence: 99%
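The two-step procedure quoted above can be sketched in PyTorch as follows. This is a minimal illustration only: the `ModifiedCNN` architecture, its layer widths, and the `source_loader` are placeholders, since the citing paper defines its own modified CNN through its Eqs. (7)-(9).

```python
import copy
import torch
import torch.nn as nn

# Hypothetical stand-in for the citing paper's "modified CNN";
# the real architecture is fixed by that paper's Eqs. (7)-(9).
class ModifiedCNN(nn.Module):
    def __init__(self, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def pretrain_source(model, loader, epochs=10, lr=1e-3):
    """Step (1): minimize cross-entropy on source-domain samples."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model

source_model = ModifiedCNN(n_classes=10)
# source_loader is assumed to yield (signal, label) batches:
# source_model = pretrain_source(source_model, source_loader)

# Step (2): a target model with identical structure and hyperparameters,
# initialized from the pre-trained source weights.
target_model = copy.deepcopy(source_model)
```

Cloning with `copy.deepcopy` guarantees the target model starts with exactly the source structure, hyperparameters, and pre-trained weights, which is the stated requirement of step (2).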
“…A large number of studies have claimed that the diagnosis results of different shallow learning models are largely affected by the effectiveness of extracted features [7]-[9]. Recently, more and more attention has been paid to deep learning-based approaches with automatic feature learning capability, such as the deep belief network (DBN), stacked auto-encoder (SAE), convolutional neural network (CNN), long short-term memory (LSTM), etc.…”
Section: Introduction
Mentioning confidence: 99%
“…where Ĥ is the channel matrix obtained by the estimator, represents the SNR of the communication chain, and I_N is the identity matrix of size N × N [26]. This structure consists of a series of residual shrinkage building units (RSBUs).…”
Section: Channel Equaliser
Mentioning confidence: 99%
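The residual shrinkage building unit (RSBU) mentioned here wraps soft thresholding inside a residual block, with the threshold learned from the feature map itself. A minimal sketch of a channel-wise variant is below; the layer widths, the two-layer threshold subnetwork, and the use of 1-D convolutions are assumptions for illustration, not the cited paper's exact design.

```python
import torch
import torch.nn as nn

class RSBU(nn.Module):
    """Sketch of a residual shrinkage building unit with
    channel-wise learned soft thresholds (details assumed)."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm1d(channels), nn.ReLU(),
            nn.Conv1d(channels, channels, 3, padding=1),
            nn.BatchNorm1d(channels), nn.ReLU(),
            nn.Conv1d(channels, channels, 3, padding=1),
        )
        # Small subnetwork predicting a per-channel scaling factor
        # in (0, 1); threshold = factor * mean(|feature|).
        self.threshold_net = nn.Sequential(
            nn.Linear(channels, channels), nn.ReLU(),
            nn.Linear(channels, channels), nn.Sigmoid(),
        )

    def forward(self, x):                          # x: (N, C, L)
        z = self.body(x)
        abs_mean = z.abs().mean(dim=2)             # (N, C)
        tau = (abs_mean * self.threshold_net(abs_mean)).unsqueeze(2)
        # Soft thresholding: shrink small (noise-like) activations to zero.
        z = torch.sign(z) * torch.relu(z.abs() - tau)
        return x + z                               # residual shortcut
```

The soft-thresholding step is what distinguishes an RSBU from a plain residual block: activations whose magnitude falls below the learned threshold, which tend to be noise-dominated, are zeroed before the shortcut is added.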
“…The number of convolution kernels of the first and the second convolutional layers was 32, while the numbers of convolution kernels for the four modules in the dashed box were 64, 96, 128, and 160, respectively. For the modules in the dashed box, we adopted a residual connection method [31], similar to the deep residual network, to make the model fully use the global context information, and introduced a Squeeze-and-Excitation (SE) module [32] to improve the sensitivity of our model to channel features. When the convolutional network performs convolution layer by layer, the obtained features are transferred from details (textures, lines, etc.)…”
Section: Description of CNN Layers
Mentioning confidence: 99%
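The Squeeze-and-Excitation module cited above ([32]) gates each channel by a weight computed from globally pooled context. A minimal sketch under the standard SE formulation follows; the reduction ratio r=16 is a common default, not necessarily the value used by the citing authors.

```python
import torch.nn as nn

class SEBlock(nn.Module):
    """Standard squeeze-and-excitation gate for 2-D feature maps."""
    def __init__(self, channels: int, r: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)     # squeeze: global context per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // r), nn.ReLU(),
            nn.Linear(channels // r, channels), nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (N, C, H, W)
        n, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(n, c)).view(n, c, 1, 1)
        return x * w                            # excitation: rescale each channel
```

Used inline, `SEBlock(64)(x)` rescales the 64 channels of a feature map `x` of shape (N, 64, H, W), which is how such a gate would raise the model's sensitivity to informative channel features.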