2023
DOI: 10.1016/j.knosys.2023.110692

One-stage self-supervised momentum contrastive learning network for open-set cross-domain fault diagnosis

Cited by 18 publications (4 citation statements)
References 23 publications
“…where Ã = A + I and I is the identity matrix, so that each node's own features are included in the aggregation. Because aggregation sums the feature vectors of a node's neighbors, nodes with more neighbors would accumulate disproportionately large features; symmetric normalization is therefore used in this paper. Constructing a two-layer GCN with ReLU and Softmax as the activation functions, the forward propagation formula of the GCN can be expressed in (15) as follows:…”
Section: Node Graph Feature Learning Stage
Mentioning confidence: 99%
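The forward propagation the excerpt refers to is the standard two-layer GCN, Z = softmax(Â · ReLU(Â X W⁽⁰⁾) · W⁽¹⁾) with Â = D̃^(−1/2) Ã D̃^(−1/2). Below is a minimal PyTorch sketch of that computation; the toy sizes and random weights are illustrative assumptions, not the cited paper's code:

```python
import torch
import torch.nn.functional as F

def normalize_adjacency(A: torch.Tensor) -> torch.Tensor:
    """Symmetric normalization: A_hat = D~^{-1/2} (A + I) D~^{-1/2}.

    Adding the identity includes each node's own features, and the
    D~^{-1/2} scaling keeps high-degree nodes from accumulating
    disproportionately large aggregated features.
    """
    A_tilde = A + torch.eye(A.size(0))           # A~ = A + I
    deg = A_tilde.sum(dim=1)                     # degrees of A~ (>= 1)
    d_inv_sqrt = torch.diag(deg.pow(-0.5))       # D~^{-1/2}
    return d_inv_sqrt @ A_tilde @ d_inv_sqrt

def gcn_forward(A_hat, X, W0, W1):
    """Two-layer GCN: Z = softmax(A_hat @ ReLU(A_hat @ X @ W0) @ W1)."""
    H = F.relu(A_hat @ X @ W0)                   # first layer + ReLU
    return F.softmax(A_hat @ H @ W1, dim=1)      # second layer + Softmax

# Toy usage with assumed sizes: 5 nodes, 8 input features, 4 classes.
A = (torch.rand(5, 5) > 0.5).float()
A = ((A + A.T) > 0).float().fill_diagonal_(0)    # symmetric, no self-loops
X = torch.randn(5, 8)
W0, W1 = torch.randn(8, 16), torch.randn(16, 4)
Z = gcn_forward(normalize_adjacency(A), X, W0, W1)
print(Z.shape)  # torch.Size([5, 4]); each row is a per-node class distribution
```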
“…Therefore, self-supervised learning based on unlabeled data has received increasing attention from researchers. Wang et al [15] proposed a one-stage self-supervised momentum contrastive learning model (OSSMCL) for open-set cross-domain fault diagnosis. The method uses the momentum encoder of self-supervised contrastive learning to capture discriminative features between sample pairs and incorporates the meta-learning paradigm into a one-stage framework, through which OSSMCL can learn to identify new faults from a small number of labeled samples in the target domain.…”
Section: Introduction
Mentioning confidence: 99%
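As a rough illustration of the momentum-encoder idea mentioned in this excerpt (MoCo-style; the encoder architecture and all sizes below are assumptions, not OSSMCL's actual implementation): the key encoder is an exponential moving average of the query encoder, and a contrastive (InfoNCE) loss pulls matched pairs together.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical 1-D CNN encoder for vibration-style signals.
encoder_q = nn.Sequential(nn.Conv1d(1, 16, 7), nn.ReLU(),
                          nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                          nn.Linear(16, 32))
encoder_k = copy.deepcopy(encoder_q)             # momentum (key) encoder
for p in encoder_k.parameters():
    p.requires_grad = False                      # updated by EMA, not SGD

@torch.no_grad()
def momentum_update(m: float = 0.999):
    """EMA update: theta_k <- m * theta_k + (1 - m) * theta_q."""
    for pq, pk in zip(encoder_q.parameters(), encoder_k.parameters()):
        pk.mul_(m).add_(pq, alpha=1.0 - m)

def info_nce(q, k, tau: float = 0.07):
    """Contrastive loss: the i-th query should match the i-th key."""
    q, k = F.normalize(q, dim=1), F.normalize(k, dim=1)
    logits = q @ k.T / tau                       # pairwise similarities
    labels = torch.arange(q.size(0))             # positives on the diagonal
    return F.cross_entropy(logits, labels)

# One training step on two augmented views of the same batch
# (optimizer step for encoder_q omitted for brevity).
x1, x2 = torch.randn(8, 1, 128), torch.randn(8, 1, 128)
loss = info_nce(encoder_q(x1), encoder_k(x2).detach())
loss.backward()
momentum_update()
```

The slow-moving key encoder is what makes the sample-pair targets stable: gradients flow only through the query branch, while the key branch drifts gradually via the EMA.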
“…Most of the above models focus on inter-class information (e.g., via cross-entropy) while ignoring intra-class information. Metric-based FSL methods effectively extract both inter-class and intra-class information by quantifying the similarity between samples, providing new perspectives for classification and prediction tasks and leading to significant advances [24][25][26][27]. Chen et al [28] introduced a straightforward contrastive learning framework for visual representations that is less complex and does not require a specialized architecture.…”
Section: Introduction
Mentioning confidence: 99%
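A minimal sketch of the metric-based idea in this excerpt, in the prototypical-network style (the shapes and episode sizes are illustrative assumptions, not taken from [24]–[28]): a query is classified by its distance to per-class prototypes averaged from a few labeled support embeddings, so both inter-class separation and intra-class compactness enter through the similarity measure.

```python
import torch

def prototype_logits(support, support_labels, query, n_classes):
    """Metric-based few-shot classification.

    support: (n_support, d) embeddings of the labeled support samples
    query:   (n_query, d) embeddings to classify
    Returns negative squared Euclidean distances to the class
    prototypes, usable as logits: nearer prototype => higher score.
    """
    protos = torch.stack([support[support_labels == c].mean(dim=0)
                          for c in range(n_classes)])      # (n_classes, d)
    return -torch.cdist(query, protos).pow(2)              # (n_query, n_classes)

# Toy 3-way 5-shot episode with 64-d embeddings (assumed numbers).
d, n_classes, k_shot = 64, 3, 5
support = torch.randn(n_classes * k_shot, d)
labels = torch.arange(n_classes).repeat_interleave(k_shot)
query = torch.randn(10, d)
logits = prototype_logits(support, labels, query, n_classes)
pred = logits.argmax(dim=1)
print(pred.shape)  # torch.Size([10])
```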
“…Small-sample fault diagnosis has attracted considerable attention from researchers, and several advances have been made. For settings with limited labeled samples, many methods have been studied, including transfer learning [11], self-supervised learning [12], and meta-learning [13]. Among them, transfer learning [14,15] reuses knowledge learned from abundant labeled source samples to alleviate the need for data in the target domain.…”
Section: Introduction
Mentioning confidence: 99%
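To make the transfer-learning idea in this excerpt concrete, here is a minimal sketch (the model, sizes, and class count are assumptions for illustration, not the methods of [14,15]): an encoder pretrained on an abundantly labeled source domain is frozen, and only a small classifier head is fitted on the few labeled target samples.

```python
import torch
import torch.nn as nn

# Assume `encoder` was already trained on plentiful source-domain data.
encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))
for p in encoder.parameters():
    p.requires_grad = False                  # reuse source knowledge as-is

head = nn.Linear(32, 5)                      # 5 target fault classes (assumed)
opt = torch.optim.Adam(head.parameters(), lr=1e-3)

# A handful of labeled target-domain samples suffices to fit the head.
x_target = torch.randn(20, 128)
y_target = torch.randint(0, 5, (20,))
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(head(encoder(x_target)), y_target)
    loss.backward()
    opt.step()
```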