Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 2021
DOI: 10.1145/3447548.3467450

Adaptive Transfer Learning on Graph Neural Networks

Abstract: Graph neural networks (GNNs) are widely used to learn powerful representations of graph-structured data. Recent work demonstrates that transferring knowledge from self-supervised tasks to downstream tasks can further improve graph representations. However, there is an inherent gap between self-supervised tasks and downstream tasks in terms of optimization objective and training data. Conventional pre-training methods may not be effective enough at knowledge transfer since they do not make any adaptation for d…
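The abstract describes adapting a pre-trained GNN to a downstream task by combining the target objective with auxiliary self-supervised objectives during fine-tuning. The following is a minimal, illustrative PyTorch sketch of that general idea, not the authors' implementation; the encoder, heads, and loss choices are placeholder assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveFineTuner(nn.Module):
    """Combine a downstream loss with adaptively weighted auxiliary losses."""

    def __init__(self, encoder: nn.Module, target_head: nn.Module,
                 aux_heads: nn.ModuleList):
        super().__init__()
        self.encoder = encoder          # pre-trained GNN encoder (placeholder)
        self.target_head = target_head  # downstream prediction head
        self.aux_heads = aux_heads      # heads for auxiliary self-supervised tasks
        # One learnable logit per auxiliary task; softmax gives adaptive weights.
        self.aux_logits = nn.Parameter(torch.zeros(len(aux_heads)))

    def forward(self, graph, target_labels, aux_labels):
        h = self.encoder(graph)                       # shared graph representation
        loss = F.cross_entropy(self.target_head(h), target_labels)
        weights = torch.softmax(self.aux_logits, dim=0)
        for w, head, y in zip(weights, self.aux_heads, aux_labels):
            loss = loss + w * F.cross_entropy(head(h), y)
        return loss

In practice the auxiliary weights would typically be driven by a separate signal (for example, some measure of similarity between each auxiliary task and the target task) rather than by minimizing the combined loss directly, since the latter can collapse onto the easiest auxiliary task.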

Cited by 31 publications (13 citation statements). References 19 publications.
“…Analogously, MPG [Li et al., 2021b] learns to compare two half-graphs (each decomposed from a graph sample) and discriminate whether they come from the same source. Despite the progress in molecular graph pre-training, few efforts have been devoted to fine-tuning except for a recent work [Han et al., 2021] that adaptively selects and combines various auxiliary tasks with the target task in the fine-tuning stage to improve performance, which is impractical because auxiliary tasks are often unavailable during fine-tuning.…”
Section: Related Work (mentioning; confidence: 99%)
“…Fine-tuning is a dominant technique for adapting the learned knowledge to various downstream tasks, but it is confronted with the issue of catastrophic forgetting [52], which means that the MPMs often forget their learned knowledge during fine-tuning. To alleviate this issue, Han et al. [30] adaptively select and combine various pre-training tasks along with the target tasks in the fine-tuning stage to achieve better adaptation. This strategy preserves sufficient knowledge captured by self-supervised pre-training tasks and improves the effectiveness of transfer learning on molecular pre-training.…”
Section: Towards Better Knowledge Transfer (mentioning; confidence: 99%)
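One way to realise the adaptive combination described above, shown purely as an illustration and not as the exact procedure of Han et al. [30], is to weight each auxiliary loss by how well its gradient on the shared encoder agrees with the target-task gradient, so that conflicting tasks are suppressed and useful pre-trained knowledge is less likely to be forgotten. The function below is a hedged PyTorch sketch; `shared_params` and the loss tensors are assumed inputs.

import torch

def similarity_weighted_loss(target_loss, aux_losses, shared_params):
    """Weight each auxiliary loss by the cosine similarity between its
    gradient and the target-task gradient on the shared parameters.
    Assumes every loss depends on all of `shared_params`."""
    params = [p for p in shared_params if p.requires_grad]
    g_target = torch.autograd.grad(target_loss, params, retain_graph=True)
    g_target = torch.cat([g.flatten() for g in g_target])

    total = target_loss
    for aux_loss in aux_losses:
        g_aux = torch.autograd.grad(aux_loss, params, retain_graph=True)
        g_aux = torch.cat([g.flatten() for g in g_aux])
        sim = torch.cosine_similarity(g_target, g_aux, dim=0)
        # Auxiliary tasks whose gradients oppose the target task get zero weight.
        total = total + torch.clamp(sim, min=0.0) * aux_loss
    return total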
“…Our Progressive Network configuration facilitates the search for the layers from which the network should be retrained. In credit risk, where data labels are scarce, approaches such as transferring learned knowledge from self-supervised tasks to downstream tasks can improve the performance of the network (Han et al., 2021).…”
Section: Related Studies (mentioning; confidence: 99%)