2021
DOI: 10.1109/tim.2021.3076841
A Novel Multiscale Lightweight Fault Diagnosis Model Based on the Idea of Adversarial Learning

Cited by 21 publications (10 citation statements)
References: 40 publications
“…Considering the harsh working environment of a coal mine, the collected idler running signals are inevitably contaminated by various noises, which sets a higher standard for the robustness of the MSCNN-ELM method. Therefore, noise at signal-to-noise ratios (SNR) ranging from 15 to −2.5 dB, in steps of −2.5 dB, was added to the original vibration signal to simulate noise interference in industrial production [33]. The processed signals were converted into datasets and input into the MSCNN-ELM model to test the anti-interference performance of the model.…”
Section: The Effect Of Noise On The Model (citation type: mentioning)
confidence: 99%
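
As a rough sketch of the noise-injection procedure described in this excerpt (not the authors' released code), the following Python snippet adds white Gaussian noise to a signal at a target SNR and sweeps the 15 dB to −2.5 dB range in steps of −2.5 dB; the example signal and the helper name add_noise_at_snr are illustrative assumptions.

```python
import numpy as np

def add_noise_at_snr(signal: np.ndarray, snr_db: float, rng=None) -> np.ndarray:
    """Add white Gaussian noise so that the result has the requested SNR (in dB)."""
    rng = np.random.default_rng() if rng is None else rng
    signal_power = np.mean(signal ** 2)                  # average power of the clean signal
    noise_power = signal_power / (10 ** (snr_db / 10))   # from SNR = 10 * log10(Ps / Pn)
    noise = rng.normal(0.0, np.sqrt(noise_power), signal.shape)
    return signal + noise

# SNR levels as described in the excerpt: 15 dB down to -2.5 dB in steps of -2.5 dB
snr_levels = np.linspace(15.0, -2.5, 8)
clean = np.sin(2 * np.pi * 50 * np.linspace(0, 1, 2048))  # stand-in for a vibration segment
noisy_sets = {snr: add_noise_at_snr(clean, snr) for snr in snr_levels}
```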
“…Hence, the objective of this study is to mitigate the computational complexity of GMA-DRSN through the employment of a lightweight strategy while striving to uphold its superior diagnostic capacity. The currently prevalent lightweight strategies consist primarily of pruning [28,29], knowledge distillation [30,31], and lightweight module design [32,33]. For example, Zhu et al [28] enhanced the training efficiency of their denoising autoencoder network by utilizing a pruning strategy.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
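
As an illustration of the pruning strategy referenced in this excerpt (a generic magnitude-based sketch, not the cited authors' implementation), the PyTorch snippet below zeroes the smallest-magnitude weights of the convolutional layers of a toy network; the architecture and the 40% pruning ratio are assumptions for demonstration only.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy 1-D CNN standing in for a diagnosis network (architecture is illustrative only)
model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=64, stride=8), nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, 4),
)

# Zero out the 40% smallest-magnitude weights in each conv layer (L1 unstructured pruning)
for module in model.modules():
    if isinstance(module, nn.Conv1d):
        prune.l1_unstructured(module, name="weight", amount=0.4)
        prune.remove(module, "weight")   # bake the mask in, making the pruning permanent
```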
“…Deng et al [30] proposed a lightweight network for imbalanced fault diagnosis through a knowledge distillation strategy. Zhang et al [33] implemented a lightweight design of the network by using a depthwise separable convolution module. However, both pruning and knowledge distillation strategies necessitate a large and complex model as the pruning target or teacher, which implies the need for abundant training data.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
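
For readers unfamiliar with the depthwise separable convolution mentioned above, the sketch below shows the standard construction in PyTorch: a depthwise convolution (groups equal to the input channels) followed by a 1x1 pointwise convolution. The channel counts and kernel size are illustrative assumptions, not values taken from the cited paper.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv1d(nn.Module):
    """Depthwise separable convolution: per-channel filtering + 1x1 channel mixing."""
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        self.depthwise = nn.Conv1d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv1d(in_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.depthwise(x))

# A standard Conv1d(16 -> 32, k=3) has 16*32*3 = 1536 weights; the separable version
# has 16*3 + 16*32 = 560, roughly a 2.7x reduction, which is the source of the savings.
block = DepthwiseSeparableConv1d(16, 32)
y = block(torch.randn(8, 16, 1024))   # (batch, channels, length)
```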
“…To address the aforementioned challenges, the adoption of lightweight strategies has emerged as a promising avenue. These strategies encompass techniques such as pruning [35,36], knowledge distillation [37,38], and lightweight design [39,40], which collectively offer effective solutions. For instance, Zhang et al [36] introduced a microscopic neural structure search approach, seamlessly integrated with pruning techniques to efficiently generate sub-networks characterized by reduced complexity and computational demands.…”
Section: Introduction (citation type: mentioning)
confidence: 99%