Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence 2019
DOI: 10.24963/ijcai.2019/548

Spatio-Temporal Attentive RNN for Node Classification in Temporal Attributed Graphs

Abstract: Node classification in graph-structured data aims to classify nodes when labels are available for only a subset of them. This problem has attracted considerable research effort in recent years. In real-world applications, both graph topology and node attributes evolve over time. Existing techniques, however, mainly focus on static graphs and lack the capability to simultaneously learn both temporal and spatial/structural features. Node classification in temporal attributed graphs is challenging for two …
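To make the abstract's idea concrete, below is a minimal, hypothetical sketch of a spatio-temporal attentive model for node classification: a per-snapshot structural encoder (here a simplified one-hop aggregation) feeds a temporal RNN (here an LSTM) with attention over time steps. This is not the paper's exact architecture; all layer sizes, names, and the aggregation scheme are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatioTemporalNodeClassifier(nn.Module):
    # Illustrative sketch, not the paper's model: spatial encoding per
    # snapshot, an LSTM over time, and attention over the T hidden states.
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.spatial = nn.Linear(in_dim, hid_dim)   # per-snapshot structural encoder
        self.rnn = nn.LSTM(hid_dim, hid_dim, batch_first=True)
        self.attn = nn.Linear(hid_dim, 1)           # attention scores over time steps
        self.out = nn.Linear(hid_dim, n_classes)

    def forward(self, x_seq, adj_seq):
        # x_seq:   (T, N, in_dim)  node attributes per snapshot
        # adj_seq: (T, N, N)       row-normalized adjacency per snapshot
        spatial_seq = []
        for x, adj in zip(x_seq, adj_seq):
            # one-hop neighborhood aggregation (simplified GCN-style layer)
            spatial_seq.append(F.relu(self.spatial(adj @ x)))  # (N, hid_dim)
        h_seq = torch.stack(spatial_seq, dim=1)      # (N, T, hid_dim)
        rnn_out, _ = self.rnn(h_seq)                 # (N, T, hid_dim)
        # attention-weighted sum over the T hidden states
        alpha = torch.softmax(self.attn(rnn_out), dim=1)  # (N, T, 1)
        z = (alpha * rnn_out).sum(dim=1)             # (N, hid_dim)
        return self.out(z)                           # (N, n_classes) logits

# Usage: 4 snapshots of a 10-node graph with 8-dim attributes, 3 classes.
T, N, D = 4, 10, 8
model = SpatioTemporalNodeClassifier(D, 16, 3)
x_seq = torch.randn(T, N, D)
adj_seq = torch.softmax(torch.randn(T, N, N), dim=-1)  # stand-in normalized adjacency
print(model(x_seq, adj_seq).shape)  # torch.Size([10, 3])
```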

Cited by 64 publications (37 citation statements)
References 0 publications
“…The LSTM network is a powerful approach to learn the long-term dependency of sequential data (Xu et al. 2019b; …). The calculation process of the LSTM unit (applied to each time step) is described in Eqs.…”
Section: The Basic LSTM Network
confidence: 99%
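The equation reference in this excerpt is truncated. For context, the standard per-time-step LSTM cell update that such passages refer to is commonly written as follows (a textbook formulation, not copied from the cited paper):

```latex
\begin{aligned}
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) && \text{(input gate)} \\
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) && \text{(forget gate)} \\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) && \text{(output gate)} \\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) && \text{(candidate cell state)} \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
```

where x_t is the input at step t, h_t the hidden state, c_t the cell state, σ the logistic sigmoid, and ⊙ elementwise multiplication.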
“…While there have been a few attempts at applying factorization techniques (Zhou et al. 2014; Stamile et al. 2017; Kidzinski and Hastie 2018) and deep representation learning techniques (Xu et al. 2019a; 2019b), their primary focus is to improve the predictive accuracy. These techniques do not explicitly account for the complex correlation structure in the data or distinguish between random effects and fixed effects.…”
Section: Related Work
confidence: 99%
“…Decoder: To capture the sequential dependency in the correct words, the Gated Recurrent Unit (GRU), which has achieved great performance in many sequence learning tasks (Xu et al. 2019b; Andermatt, Pezold, and Cattin 2016; Xu et al. 2019a), is used as the decoder. To be specific, in the decoding process, the initial hidden state h_0 of the GRU is initialized with the noised word representation ŵ.…”
Section: Learning Character-level Dependencies
confidence: 99%
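This excerpt describes a GRU decoder whose initial hidden state is set to a noised word representation ŵ. Below is a minimal, hypothetical PyTorch sketch of that setup; the vocabulary size, dimensions, and variable names (w_hat, embed, proj) are illustrative assumptions, not the cited paper's configuration.

```python
import torch
import torch.nn as nn

vocab_size, emb_dim, hid_dim = 64, 32, 32
embed = nn.Embedding(vocab_size, emb_dim)      # character embeddings
gru = nn.GRU(emb_dim, hid_dim, batch_first=True)
proj = nn.Linear(hid_dim, vocab_size)          # per-step character predictions

w_hat = torch.randn(1, hid_dim)                # noised word representation (assumed given)
h0 = w_hat.unsqueeze(0)                        # (num_layers=1, batch=1, hid_dim)

chars = torch.tensor([[2, 5, 7]])              # character ids of the target word
out, _ = gru(embed(chars), h0)                 # decode conditioned on w_hat via h0
logits = proj(out)
print(logits.shape)                            # torch.Size([1, 3, 64])
```

The key design point the excerpt makes is the conditioning mechanism: rather than feeding ŵ as an input at every step, it seeds the decoder's hidden state, so the character-level GRU unrolls from the word-level representation.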