Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining 2023
DOI: 10.1145/3539597.3570480

Learning to Distill Graph Neural Networks

Abstract: Continual learning (CL) aims to learn new tasks without forgetting previous tasks. However, existing CL methods require a large amount of raw data, which is often unavailable due to copyright considerations and privacy risks. Instead, stakeholders usually release pre-trained machine learning models as a service (MLaaS), which users can access via APIs. This paper considers two practical-yet-novel CL settings: data-efficient CL (DECL-APIs) and data-free CL (DFCL-APIs), which achieve CL from a stream of APIs wit…
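The training loop implied by these settings can be sketched in a few lines: a student model is fit purely from the probability outputs of a black-box API, with no access to raw training data or teacher weights. The sketch below is a hedged illustration of that black-box distillation idea, not the paper's method; the teacher stand-in, both architectures, and the synthetic-query strategy (plain Gaussian noise) are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in for a pre-trained model served behind an API (MLaaS): only its
# output probabilities are observable, never its weights or training data.
teacher = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10)).eval()

def query_api(x: torch.Tensor) -> torch.Tensor:
    """Black-box call: returns class probabilities only (hypothetical API)."""
    with torch.no_grad():
        return teacher(x).softmax(dim=-1)

# Student trained without any raw data: inputs are synthesized, soft labels
# come from the API — the data-free flavor of the setting, in spirit.
student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(200):
    x = torch.randn(128, 32)              # synthetic queries (no real data)
    p_teacher = query_api(x)              # soft labels from the black-box API
    log_p_student = F.log_softmax(student(x), dim=-1)
    loss = F.kl_div(log_p_student, p_teacher, reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
```

A real data-free pipeline would replace the Gaussian queries with a learned generator or other input-synthesis scheme; random noise is used here only to keep the sketch self-contained.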

Cited by 4 publications (1 citation statement) · References 44 publications
“…In addition, UGT [9] focuses on the compression of model parameters and finds a well-performing subnetwork without any training of the model. A more recent work [30] proposes DGLT, which prunes the GNN parameters via incremental regularization [29] and hierarchical sparsification of the input graph, enabling the search for graph lottery tickets (GLTs) from a dual perspective. In addition to the general GLT, GEBT [34] proves that the early-bird ticket, a winning ticket that can be extracted in the early stages of training [33], also exists in GNNs.…”
Section: Related Work (mentioning)
confidence: 99%
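To make the "dual perspective" concrete, here is a minimal, hedged sketch of joint weight-and-graph mask learning for a one-layer GCN: soft masks over both the weight matrix and the adjacency are trained under an L1 sparsity penalty and then thresholded into a binary ticket. This illustrates the general GLT idea only; the architecture, random graph, thresholds, and hyperparameters are assumptions, not the implementation of [9], [30], or [34].

```python
import torch
import torch.nn.functional as F

# Toy one-layer GCN: logits = (A * m_a) @ X @ (W * m_w). Learnable soft
# masks m_w (weights) and m_a (edges) are trained with an L1 penalty, so
# pruning acts on the model and the input graph at the same time — the
# dual search described in the excerpt above.
n, d, c = 100, 16, 4
A = (torch.rand(n, n) < 0.1).float()        # stand-in adjacency matrix
X = torch.randn(n, d)                        # node features
y = torch.randint(0, c, (n,))                # node labels

W = torch.randn(d, c, requires_grad=True)
m_w = torch.ones(d, c, requires_grad=True)   # soft mask over weights
m_a = torch.ones(n, n, requires_grad=True)   # soft mask over edges

opt = torch.optim.Adam([W, m_w, m_a], lr=1e-2)
lam = 1e-3  # sparsity strength; incremental regularization [29] would ramp this up

for step in range(200):
    logits = (A * m_a) @ X @ (W * m_w)
    loss = F.cross_entropy(logits, y)
    loss = loss + lam * (m_w.abs().sum() + m_a.abs().sum())
    opt.zero_grad()
    loss.backward()
    opt.step()

# Threshold the trained soft masks to extract the binary "ticket".
ticket_w = (m_w.detach().abs() > 0.5).float()
ticket_a = (m_a.detach().abs() > 0.5).float()
```

A full pipeline would interleave mask training with weight rewinding and repeat the prune-retrain cycle; an early-bird variant in the spirit of [33, 34] would instead stop as soon as the mask stabilizes, extracting the ticket early in training.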