Proceedings of the ACM Web Conference 2022
DOI: 10.1145/3485447.3512164
TREND: TempoRal Event and Node Dynamics for Graph Representation Learning

Cited by 41 publications (21 citation statements); References 25 publications
“…(2) TREND. In [88], the formation of each edge ((v_i, v_j), t) is defined as an event with unique properties specified by the states of its end nodes v_i and v_j at time step t. That is, events may occur for different reasons at different time steps.…”
Section: Discussion
confidence: 99%
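The temporal point process view quoted above is often instantiated as a Hawkes process, in which past interactions between a node pair raise the instantaneous rate of a new edge event before the effect decays. A minimal sketch of such a conditional intensity follows; the function name and the parameters `mu` (base rate), `alpha` (excitation), and `beta` (decay) are illustrative assumptions, not the paper's notation:

```python
import math

def hawkes_intensity(t, history, mu=0.1, alpha=0.5, beta=1.0):
    """Conditional intensity of an edge event at time t, given the
    past event times `history` for the same node pair: a constant
    base rate plus exponentially decaying excitation from each
    earlier event."""
    return mu + sum(alpha * math.exp(-beta * (t - ti))
                    for ti in history if ti < t)

# Two recent interactions raise the event rate at t = 3.0
# well above the base rate mu = 0.1.
print(hawkes_intensity(3.0, [1.0, 2.5]))
```

With an empty history the intensity reduces to the base rate, which is what makes the excitation term the "event" signal.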
“…On the one hand, the encoders and loss functions of some methods cannot capture the variation of weighted topology. For instance, most approaches based on the unevenly-spaced edge-sequence description rely on stochastic processes over unweighted graph topology (e.g., temporal random walks [61,87] and temporal point processes [52,88,89]), which make no hypothesis about dynamic edge weights. On the other hand, the decoders of some approaches [48,49,74,81] are designed only for unweighted graphs, treating TLP as a binary edge classification task.…”
Section: Advanced Topics and Future Directions
confidence: 99%
“…To address this deficiency, another line of work aims to model the temporal process of continuously evolving events. DyRep [25] and its variants [26,27] employ a temporal Hawkes process to model the temporal properties of networks. CTDNE [28] extends the standard random walk to a time-respecting random walk, FiGTNE [29] proposes a time-reinforced random walk to embed fine-grained networks, and CAW-N [9] leverages causal anonymous walks to capture motifs in temporal graphs.…”
Section: Related Work
confidence: 99%
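The time-respecting walk attributed to CTDNE above can be sketched as follows: each hop must traverse an edge whose timestamp is strictly later than the edge taken on the previous hop. The function name and the adjacency format (`node -> list of (neighbor, timestamp)`) are illustrative assumptions, not CTDNE's actual API:

```python
import random

def temporal_walk(edges, start, length, seed=0):
    """Time-respecting random walk: every hop uses an edge whose
    timestamp exceeds that of the previous hop, so the walk can only
    move forward in time."""
    rng = random.Random(seed)
    walk, t = [start], float("-inf")
    for _ in range(length - 1):
        # Candidate hops: neighbors reachable via a later edge.
        nexts = [(v, ts) for v, ts in edges.get(walk[-1], []) if ts > t]
        if not nexts:
            break  # no time-respecting continuation exists
        v, t = rng.choice(nexts)
        walk.append(v)
    return walk

edges = {"a": [("b", 1), ("c", 2)], "b": [("c", 3)], "c": [("a", 4)]}
print(temporal_walk(edges, "a", 4))
```

Because timestamps must strictly increase, a walk can terminate early, which is itself a signal these embedding methods exploit.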
“…Vanilla NP infers the distribution of the link prediction function P(z_T | G_{≤T}) from historical context data and fixes it for future link prediction. However, in dynamic graphs, links arrive at different frequencies [5,13,36]. It is therefore inappropriate to use a distribution that is static over time, even though we can incorporate newly arriving links into the context data and update the distribution by using Eq.…”
Section: Sequential ODE Aggregator
confidence: 99%
“…SNP [32] adopts an RNN aggregator to incorporate sequential information, but it still fails to account for the derivative of the underlying distribution and uses a distribution that is static through time. For instance, as shown in Figure 2b, links in dynamic graphs arrive irregularly [5,13,36]. Moreover, an important event can cause a large number of links to occur within a short time, shown by the spikes in Figure 2b.…”
Section: Introduction
confidence: 99%
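As a toy illustration of why irregular arrivals call for a state that evolves between events, the sketch below decays a latent state over each uneven inter-event gap using the closed-form solution of a linear ODE, then applies a discrete update when the event arrives. This is a deliberate simplification, not the paper's ODE aggregator; all names and constants are hypothetical:

```python
import math

def ode_like_update(z, dt, x, decay=0.5, lr=0.8):
    """Toy sequential aggregator for irregular events: between events
    the latent state z drifts toward 0 at rate `decay` (closed-form
    solution of dz/dt = -decay * z), then the observation x is mixed
    in as a discrete jump. dt is the gap since the previous event."""
    z = z * math.exp(-decay * dt)   # continuous-time drift over the gap
    return (1 - lr) * z + lr * x    # discrete update at the event

z = 0.0
# (gap since previous event, observation): two closely spaced events,
# then a long quiet stretch before a contradicting one.
events = [(0.5, 1.0), (0.6, 1.0), (5.0, -1.0)]
for dt, x in events:
    z = ode_like_update(z, dt, x)
print(z)
```

The long 5.0 gap lets the earlier evidence decay almost entirely, so the final observation dominates; a fixed-step RNN with no notion of dt would treat all three gaps identically.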