Negative samples selecting strategy for graph contrastive learning (2022)
DOI: 10.1016/j.ins.2022.09.024

Cited by 16 publications (2 citation statements); references 22 publications.
“…During data augmentation, GraphCL [ 37 ] incorporates a comprehensive set of random augmentation strategies covering both topological structure and node features. By contrast, the contrastive learning model with a negative-sample selection strategy [ 38 ] treats all nodes except the positive sample as negatives, selecting nodes whose labels differ from that of the center node. It also employs GCN [ 39 ], SGC [ 40 ], and APPNP [ 41 ] as shared graph neural network encoders.…”
Section: Related Work
confidence: 99%
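The label-based negative selection described in this statement can be sketched in a few lines. This is an illustrative reconstruction, not code from the cited paper; the function name `select_negatives` and the toy label array are assumptions.

```python
import numpy as np

def select_negatives(labels: np.ndarray, center: int) -> np.ndarray:
    """Return indices of all nodes whose label differs from the center node's.

    Sketch of the strategy in [38]: every node except the positive sample
    whose label differs from the center node becomes a negative sample.
    """
    mask = labels != labels[center]   # True where the label disagrees
    return np.flatnonzero(mask)       # indices of the negative samples

# Toy example: 5 nodes with class labels 0, 0, 1, 2, 1
labels = np.array([0, 0, 1, 2, 1])
negatives = select_negatives(labels, center=0)  # → indices [2, 3, 4]
```

In practice such a rule presumes (pseudo-)labels are available for the unlabeled setting, e.g. from clustering, which is why the selection is paired with a shared GNN encoder such as GCN, SGC, or APPNP.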
“…GraphCL [ 37 ], during the data augmentation process, incorporates a comprehensive set of random augmentation strategies, considering both topological structure and node features. On the other hand, the contrastive learning model with a negative sample sampling strategy [ 38 ] effectively converted all nodes except the positive sample into negative samples by selecting nodes with labels different from the center node. In addition, it utilized GCN [ 39 ], SGC [ 40 ], and APPNP [ 41 ] as shared graph neural network models.…”
Section: Related Workmentioning
confidence: 99%
“…For rolling bearings, obtaining fully labeled data for single and compound faults is difficult and costly, so learning from the unlabeled data produced in experiments or production has become a new topic in bearing-rotor system fault diagnosis [ 19 ]. GCL is a self-supervised learning algorithm for graph data that trains a graph encoder on a large amount of unlabeled graph data to obtain feature representation vectors of the graph [ 20 ]. Its general process resembles traditional contrastive learning, with the added advantages of data augmentation on graph signals and enhanced contrast hierarchies [ 21 ].…”
Section: Introduction
confidence: 99%
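The general contrastive process this statement refers to pulls an anchor embedding toward an augmented positive view while pushing it away from negatives. A minimal InfoNCE-style loss is a common choice in GCL and is sketched below under that assumption; it is not taken from the cited works.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.5):
    """InfoNCE-style contrastive loss on L2-normalized embeddings.

    Pulls the anchor toward its positive (augmented) view and pushes it
    away from the negative samples; tau is the temperature.
    """
    def unit(v):
        return v / np.linalg.norm(v)

    a = unit(np.asarray(anchor, dtype=float))
    pos_sim = np.exp(np.dot(a, unit(np.asarray(positive, dtype=float))) / tau)
    neg_sims = sum(
        np.exp(np.dot(a, unit(np.asarray(n, dtype=float))) / tau)
        for n in negatives
    )
    # Loss is low when anchor-positive similarity dominates the negatives.
    return -np.log(pos_sim / (pos_sim + neg_sims))

# An aligned positive and an orthogonal negative give a small loss:
loss = info_nce([1.0, 0.0], [1.0, 0.0], [[0.0, 1.0]])
```

The same loss applies whether the embeddings come from GCN, SGC, or APPNP encoders; only the encoder producing `anchor`/`positive`/`negatives` changes.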