Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence (IJCAI 2022)
DOI: 10.24963/ijcai.2022/438

DyGRAIN: An Incremental Learning Framework for Dynamic Graphs

Abstract: Knowledge distillation aims to transfer information by minimizing the cross-entropy between the probabilistic outputs of the teacher and student networks. In this work, we propose an alternative distillation objective that maximizes a scoring rule, which quantitatively measures the agreement of a distribution with a reference distribution. We demonstrate that a proper and homogeneous scoring rule exhibits more preferable properties for distillation than the original cross-entropy-based approach. To …
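The abstract describes swapping the usual cross-entropy distillation loss for the negative of a proper scoring rule that measures the student's agreement with the teacher. Below is a minimal PyTorch sketch of that idea, using a Brier-style quadratic score as one concrete proper scoring rule; the function names, the temperature parameter `tau`, and the choice of the quadratic score are illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F


def distill_ce(student_logits, teacher_logits, tau=2.0):
    # Standard distillation: cross-entropy between temperature-softened
    # teacher and student distributions (lower is better).
    p_t = F.softmax(teacher_logits / tau, dim=-1)
    log_q_s = F.log_softmax(student_logits / tau, dim=-1)
    return -(p_t * log_q_s).sum(dim=-1).mean()


def distill_quadratic_score(student_logits, teacher_logits, tau=2.0):
    # Scoring-rule distillation (illustrative): the quadratic (Brier-style)
    # score S(q, p) = 2<p, q> - ||q||^2 is a proper scoring rule, so its
    # expectation is maximized exactly when the student distribution q
    # matches the teacher distribution p. Training minimizes -S(q, p).
    p_t = F.softmax(teacher_logits / tau, dim=-1)
    q_s = F.softmax(student_logits / tau, dim=-1)
    score = 2.0 * (p_t * q_s).sum(dim=-1) - (q_s ** 2).sum(dim=-1)
    return -score.mean()


# Toy usage: random logits stand in for teacher/student network outputs.
teacher_logits = torch.randn(8, 10)
student_logits = torch.randn(8, 10, requires_grad=True)
loss = distill_quadratic_score(student_logits, teacher_logits)
loss.backward()
```

Propriety is what makes the objective a faithful agreement measure: unlike an arbitrary similarity score, a proper scoring rule is uniquely optimized by matching the reference distribution, which mirrors the preferable properties the abstract claims over the cross-entropy baseline.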

Cited by 4 publications (3 citation statements) · References 0 publications

Citation statements
“…MSCGL (Cai et al. 2022) is designed for multimodal graphs with neural architecture search. DyGRAIN (Kim, Yun, and Kang 2022) explores the adaptation of receptive fields while distilling knowledge. Architectural methods modify the neural architecture of the graph model itself, such as FGN (Wang et al. 2022).…”
Section: Continual Graph Learning · Citation type: mentioning · Confidence: 99%
“…In general, it aims at gradually learning new knowledge across sequentially arriving tasks without catastrophically forgetting the old ones. Centered around combating forgetting, a series of methods (Kim, Yun, and Kang 2022; Galke et al. 2021) have been proposed recently. Despite the success of prior works, continual graph learning still faces tremendous challenges.…”
Section: Introduction · Citation type: mentioning · Confidence: 99%