2020 2nd International Conference on Applied Machine Learning (ICAML)
DOI: 10.1109/icaml51583.2020.00036

Time Series Anomaly Detection Based on Graph Convolutional Networks

Cited by 154 publications (234 citation statements)
References 19 publications
“…An L-layer GCN has been proven to essentially simulate an L-order polynomial filter with fixed coefficients, which limits the expressive power of the model and leads to over-smoothing (Wu et al., 2019). As a result, GCN achieves its best performance in many graph-based problems via shallow architectures, which are incapable of extracting information from high-order neighbors.…”
Section: GCN With Initial Residual and Identity Mapping
confidence: 99%
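
A minimal sketch (not the cited paper's code) of the claim in this excerpt: with the nonlinearities removed, as in SGC (Wu et al., 2019), L stacked GCN propagation steps collapse to a single fixed L-order filter S^L, and as L grows the node representations become nearly parallel, which is the over-smoothing effect described above. The toy graph and feature matrix are illustrative assumptions.

import numpy as np

def normalized_adjacency(A):
    # Symmetric normalization with self-loops: S = D^{-1/2} (A + I) D^{-1/2}.
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * np.outer(d_inv_sqrt, d_inv_sqrt)

def mean_pairwise_cosine(H):
    # Average cosine similarity between node representations; values near
    # 1.0 mean all nodes have collapsed onto almost the same direction.
    Hn = H / np.linalg.norm(H, axis=1, keepdims=True)
    C = Hn @ Hn.T
    n = C.shape[0]
    return (C.sum() - n) / (n * (n - 1))

A = np.array([[0, 1, 0, 0],   # toy 4-node path graph (an assumption)
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(4, 3))
S = normalized_adjacency(A)

for L in (1, 4, 16, 64):
    H = np.linalg.matrix_power(S, L) @ X  # fixed L-order polynomial filter
    print(L, round(mean_pairwise_cosine(H), 4))  # similarity rises toward 1.0

Running the loop shows the similarity climbing toward 1.0 with depth, which is why shallow architectures tend to work best for plain GCN.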
“…Avenues for future improvement divert focus from a deep architecture that chains 1-hop convolutional layers to an alternative strategy that aggregates outputs from multiple shallow networks whose convolutions encode richer, multi-hop diffusion operators [28,29]. To sustain performance across different depth limits, jumping knowledge concatenation was necessary. Model variants that do not incorporate layer aggregation show decreased performance for depth-limit-two networks, which may result from over-smoothing, whereby node hidden states converge to an almost uniform distribution and local neighborhood information is lost.…”
Section: T H I
confidence: 99%
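
A minimal sketch (illustrative, not the cited model) of the jumping-knowledge concatenation this excerpt refers to: instead of using only the deepest layer's output, the hidden state after each propagation depth is kept and concatenated, so the final node embedding mixes 1-hop through K-hop neighborhood views rather than only the deepest, possibly over-smoothed one. The random weights W stand in for trained layers, and the graph is a toy assumption.

import numpy as np

rng = np.random.default_rng(1)

# Toy normalized adjacency S (4-node path graph with self-loops) and features.
A_hat = np.array([[1, 1, 0, 0],
                  [1, 1, 1, 0],
                  [0, 1, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
d = A_hat.sum(axis=1)
S = A_hat / np.sqrt(np.outer(d, d))  # D^{-1/2} A_hat D^{-1/2}
X = rng.normal(size=(4, 3))

K, hidden = 3, 5
H, layer_outputs = X, []
for k in range(K):
    W = rng.normal(size=(H.shape[1], hidden))  # stand-in for a learned weight
    H = np.maximum(S @ H @ W, 0.0)             # one GCN layer: ReLU(S H W)
    layer_outputs.append(H)                    # "jump" each depth to the end

# Jumping-knowledge aggregation by concatenation: each node's final
# representation stacks its 1-hop, 2-hop, and 3-hop views side by side.
Z = np.concatenate(layer_outputs, axis=1)
print(Z.shape)  # (4, K * hidden)

Because every depth contributes directly to Z, local neighborhood information survives even when the deepest hidden states have begun to over-smooth.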
“…The remaining graph kernel methods commonly used in graph classification are employed in the following comparison on the graph classification task. Aside from unsupervised approaches, we also use five popular semi-supervised methods for comparison, including Planetoid [6], Chebyshev [44], GCN [20], SGC [40], and GAT [36].…”
Section: Baselines
confidence: 99%