ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp43922.2022.9747461
Improving Anomaly Detection with a Self-Supervised Task Based on Generative Adversarial Network

Cited by 3 publications (3 citation statements)
References 11 publications
“…Although existing multi-task models take sentiment information into account, they still obtain lower performance on SemEval-16. In terms of F_Mac, the SMTL-HW model surpasses the JOINT [10], MTIN [78], AT-JSS-Lex [11], and MT-LRM-BERT [51] models by 12.3%, 7.6%, 7.1%, and 5%, respectively. These results highlight the effectiveness of the main components incorporated in our SMTL-HW model, namely the sequential architecture and task weighting.…”
Section: Comparisons With Previous Studies
confidence: 92%
“…MTIN [78] is a Multi-Task Interaction Network model that simultaneously learns stance and sentiment with a word-level task interaction and task-related graphs.…”
Section: Comparisons With Previous Studies
confidence: 99%
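Since the quoted description of MTIN is brief, here is a minimal, hypothetical sketch of what "simultaneously learns stance and sentiment" means in a generic multi-task setup: a shared text encoder feeds two task-specific heads trained on a joint loss. The model name, dimensions, and toy data below are illustrative assumptions; MTIN's word-level task interaction and task-related graphs are not reproduced here.

```python
import torch
import torch.nn as nn

# Minimal multi-task sketch (not MTIN itself): a shared encoder with
# separate stance and sentiment heads, trained on the sum of both losses.
# MTIN's word-level interaction and task-related graphs are omitted.

class JointStanceSentiment(nn.Module):
    def __init__(self, vocab_size=10000, dim=128, n_stance=3, n_sentiment=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)  # shared encoder
        self.stance_head = nn.Linear(dim, n_stance)        # task head 1
        self.sentiment_head = nn.Linear(dim, n_sentiment)  # task head 2

    def forward(self, tokens):
        h, _ = self.encoder(self.embed(tokens))
        pooled = h.mean(dim=1)  # mean-pool hidden states over the sequence
        return self.stance_head(pooled), self.sentiment_head(pooled)

model = JointStanceSentiment()
tokens = torch.randint(0, 10000, (4, 20))  # batch of 4 toy token sequences
stance_logits, sentiment_logits = model(tokens)
targets = torch.zeros(4, dtype=torch.long)  # dummy labels for illustration
loss = (nn.functional.cross_entropy(stance_logits, targets)
        + nn.functional.cross_entropy(sentiment_logits, targets))
loss.backward()  # one joint update step over both tasks
```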
“…Creating supervision signals from the data itself is an essential insight in self-supervised learning. Inspired by self-supervised research in image anomaly detection [8], some self-supervised methods have also been designed for time series data. They assign class labels to different augmentation operations (e.g., adding noise, reversing, scaling, and smoothing) [54], neural transformations [38], contiguous and separate time segments [9], or different time resolutions [18].…”
Section: Related Work
confidence: 99%
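As a concrete illustration of the augmentation-as-label idea in the quote above, here is a minimal, hypothetical sketch: each augmentation operation (adding noise, reversing, scaling, smoothing) defines its own pseudo-class, and unlabeled time-series windows are expanded into (augmented window, transformation id) pairs on which a classifier can then be trained. All names (make_pretext_dataset, AUGMENTATIONS) and parameter choices are assumptions, not taken from the cited papers.

```python
import numpy as np

# Hypothetical sketch of the augmentation-as-label pretext task: each
# transformation defines a pseudo-class, and a downstream model would be
# trained to predict which transformation produced a given window.

def add_noise(x, sigma=0.1):
    return x + np.random.normal(0.0, sigma, size=x.shape)

def reverse(x):
    return x[::-1].copy()

def scale(x, factor=1.5):
    return x * factor

def smooth(x, k=5):
    kernel = np.ones(k) / k
    return np.convolve(x, kernel, mode="same")

# The augmentation list fixes the label space: class i <-> AUGMENTATIONS[i].
AUGMENTATIONS = [add_noise, reverse, scale, smooth]

def make_pretext_dataset(windows):
    """Turn unlabeled 1-D windows into (augmented window, class id) pairs."""
    xs, ys = [], []
    for w in windows:
        for label, aug in enumerate(AUGMENTATIONS):
            xs.append(aug(w))
            ys.append(label)
    return np.stack(xs), np.array(ys)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    windows = rng.standard_normal((8, 128))  # 8 unlabeled windows, length 128
    X, y = make_pretext_dataset(windows)
    print(X.shape, y.shape)  # (32, 128) (32,): 4 pseudo-classes per window
```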