2022
DOI: 10.48550/arxiv.2212.01141
Preprint

MHCCL: Masked Hierarchical Cluster-wise Contrastive Learning for Multivariate Time Series

Cited by 1 publication (2 citation statements)
References 0 publications
“…To evaluate performance, we adopt two metrics, i.e., Accuracy (Accu.) and Macro-averaged F1-Score (MF1) (Eldele et al 2021;Meng et al 2022). Besides, to reduce the effect of random initialization, we conduct ten times for all experiments and take the average results for comparisons.…”
Section: Results (mentioning, confidence: 99%)
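The evaluation protocol quoted above (Accuracy and Macro-averaged F1, averaged over ten runs to reduce the effect of random initialization) can be illustrated with the minimal sketch below. It assumes scikit-learn metrics and a hypothetical `train_and_predict` routine standing in for the actual training pipeline; it is not the authors' code.

```python
# Minimal sketch of the reported evaluation protocol: Accuracy (Accu.) and
# Macro-averaged F1 (MF1), averaged over ten runs with different random seeds.
# `train_and_predict` is a hypothetical placeholder for the real pipeline.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def evaluate_over_runs(train_and_predict, X_train, y_train, X_test, y_test, n_runs=10):
    accs, mf1s = [], []
    for seed in range(n_runs):
        # each run re-initializes and re-trains the model with a different seed
        y_pred = train_and_predict(X_train, y_train, X_test, seed=seed)
        accs.append(accuracy_score(y_test, y_pred))
        mf1s.append(f1_score(y_test, y_pred, average="macro"))
    # report the average over all runs for the comparison tables
    return float(np.mean(accs)), float(np.mean(mf1s))
```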
“…We compare our method with SOTA methods, including TNC (Tonekaboni, Eytan, and Goldenberg 2021), TS-TCC (Eldele et al 2021), TS2Vec (Yue et al 2022), MHCCL (Meng et al 2022), CaSS (Chen et al 2022), and TAGCN (Zhang et al 2023d). All methods are reimplemented based on their original settings except for the encoders, which are replaced by the same encoder as ours for fair comparisons.…”
Section: Comparisons with State-of-the-Arts (mentioning, confidence: 99%)
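The fair-comparison setup described in this statement, where every baseline framework (TNC, TS-TCC, TS2Vec, MHCCL, CaSS, TAGCN) is trained on top of an identical backbone encoder, could look roughly like the sketch below. `SharedEncoder` and `ContrastiveFramework` are hypothetical placeholders, not the reimplementations used in the cited work.

```python
# Rough sketch of the fair-comparison idea: each self-supervised method is
# built on the same freshly initialized backbone, so performance differences
# reflect the learning objective rather than the encoder architecture.
# All class names here are hypothetical placeholders.
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Common 1D-CNN backbone given to every compared method."""
    def __init__(self, in_channels, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=8, padding=4),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over the time dimension
            nn.Flatten(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, x):              # x: (batch, channels, length)
        return self.net(x)

class ContrastiveFramework(nn.Module):
    """Generic stand-in for a baseline SSL framework wrapping the encoder."""
    def __init__(self, encoder, feat_dim=128, proj_dim=64):
        super().__init__()
        self.encoder = encoder
        self.projector = nn.Linear(feat_dim, proj_dim)

    def forward(self, x):
        return self.projector(self.encoder(x))

def build_baselines(in_channels):
    # every method receives an identical, independently initialized backbone
    names = ["TNC", "TS-TCC", "TS2Vec", "MHCCL", "CaSS", "TAGCN"]
    return {name: ContrastiveFramework(SharedEncoder(in_channels)) for name in names}
```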