Meta-learning aims to acquire common knowledge from a large number of similar tasks and then adapt to unseen tasks within a few gradient updates. Existing graph meta-learning algorithms show appealing performance in a variety of domains such as node classification and link prediction. These methods find a single common initialization for all tasks and ignore the diversity of task distributions, which can be insufficient for multi-modal tasks. Recent approaches adopt a modulation network to generate task-specific parameters and thereby achieve multiple initializations, which shows excellent performance for multi-modal image classification. However, unlike image classification, designing an effective modulation network for graph-structured data remains challenging. In this paper, we propose a Multi-Initialization Graph Meta-Learning (MI-GML) network for graph node classification, consisting mainly of local and global modulation networks and a meta learner. The modulation networks exploit local and global graph structure information to extract task-specific modulation parameters. The meta learner is then modulated by the corresponding parameters to produce task-specific representations for node classification. Experimental results on three graph-structured datasets demonstrate the effectiveness of MI-GML on few-shot node classification tasks.
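The abstract describes modulating a shared meta learner with task-specific parameters but gives no implementation details. Below is a minimal, hypothetical PyTorch sketch of one common way such modulation is realized (FiLM-style scale-and-shift applied to the hidden features of a message-passing encoder); all class names, dimensions, and the modulation form are assumptions for illustration, not the authors' code.

```python
# Hypothetical sketch: task-specific modulation of a shared node encoder.
import torch
import torch.nn as nn


class ModulationNetwork(nn.Module):
    """Maps a task embedding (e.g., pooled support-set features) to
    scale/shift parameters that modulate the shared meta learner."""

    def __init__(self, task_dim, hidden_dim):
        super().__init__()
        self.to_scale = nn.Linear(task_dim, hidden_dim)
        self.to_shift = nn.Linear(task_dim, hidden_dim)

    def forward(self, task_embedding):
        return self.to_scale(task_embedding), self.to_shift(task_embedding)


class ModulatedNodeEncoder(nn.Module):
    """Shared (meta-learned) node encoder whose hidden features are
    rescaled and shifted by the task-specific modulation parameters."""

    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.proj = nn.Linear(in_dim, hidden_dim)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, x, adj, scale, shift):
        # One round of mean-aggregation message passing over a dense adjacency.
        h = self.proj(adj @ x / adj.sum(dim=-1, keepdim=True).clamp(min=1))
        h = torch.relu(scale * h + shift)  # task-specific FiLM-style modulation
        return self.classifier(h)


# Toy usage: 10 nodes with 16-dim features, a 3-way task, 32-dim task embedding.
x, adj = torch.randn(10, 16), torch.eye(10)
task_embedding = torch.randn(1, 32)
mod_net = ModulationNetwork(task_dim=32, hidden_dim=64)
encoder = ModulatedNodeEncoder(in_dim=16, hidden_dim=64, num_classes=3)
scale, shift = mod_net(task_embedding)
logits = encoder(x, adj, scale, shift)  # [10, 3] per-node class scores
```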
Representation learning on dynamic graphs is a challenging problem because the graph topology and node features vary over time. This requires the model to effectively capture both graph topology information and temporal information. Most existing works are built on recurrent neural networks (RNNs), which are used to extract temporal information from dynamic graphs, and thus they inherit the drawbacks of RNNs. In this paper, we propose Learning to Evolve on Dynamic Graphs (LEDG), a novel algorithm that jointly learns graph information and time information. Specifically, our approach uses gradient-based meta-learning to learn updating strategies that generalize better than RNNs across snapshots. It is model-agnostic and can therefore train any message-passing graph neural network (GNN) on dynamic graphs. To enhance representation power, we disentangle the embeddings into time embeddings and graph-intrinsic embeddings. We conduct experiments on various datasets and downstream tasks, and the results validate the effectiveness of our method.
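To make "gradient-based meta-learning to learn updating strategies on snapshots" concrete, here is a minimal, hypothetical MAML-style sketch in PyTorch: a shared GNN is adapted on one snapshot in an inner loop, and the adapted weights are evaluated on the next snapshot in the outer loop. The GNN class, loss choice, and snapshot format are illustrative assumptions, not LEDG's actual design (which additionally disentangles time and graph-intrinsic embeddings).

```python
# Hypothetical sketch: MAML-style adaptation across graph snapshots.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GNN(nn.Module):
    """Placeholder one-layer message-passing encoder; any GNN could be used."""

    def __init__(self, in_dim, num_classes):
        super().__init__()
        self.lin = nn.Linear(in_dim, num_classes)

    def forward(self, x, adj, params=None):
        # `params` lets us run the forward pass with adapted ("fast") weights.
        w, b = params if params is not None else (self.lin.weight, self.lin.bias)
        return (adj @ x) @ w.t() + b


def meta_train_step(model, snapshots, outer_opt, inner_lr=0.01):
    """Treat each snapshot as a task: adapt on snapshot t (inner loop) and
    evaluate the adapted weights on snapshot t+1 (outer loop).
    `snapshots` is a list of (features, adjacency, integer node labels)."""
    outer_loss = 0.0
    for (x_t, adj_t, y_t), (x_n, adj_n, y_n) in zip(snapshots, snapshots[1:]):
        # Inner loop: one gradient step on the current snapshot.
        loss_t = F.cross_entropy(model(x_t, adj_t), y_t)
        grads = torch.autograd.grad(loss_t, model.parameters(), create_graph=True)
        fast = [p - inner_lr * g for p, g in zip(model.parameters(), grads)]
        # Outer loop: the adapted weights should also fit the next snapshot.
        outer_loss = outer_loss + F.cross_entropy(model(x_n, adj_n, params=fast), y_n)
    outer_opt.zero_grad()
    outer_loss.backward()
    outer_opt.step()


# Toy usage: three snapshots of a 10-node graph with 8-dim features, 3 classes.
snapshots = [(torch.randn(10, 8), torch.eye(10), torch.randint(0, 3, (10,)))
             for _ in range(3)]
model = GNN(in_dim=8, num_classes=3)
meta_train_step(model, snapshots, torch.optim.Adam(model.parameters(), lr=1e-3))
```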
Few-shot learning aims to generalize to novel classes and has achieved great success in image and text classification tasks. Inspired by this success, few-shot node classification in homogeneous graphs has attracted much attention, but few works have studied this problem in Heterogeneous Information Networks (HINs). We consider few-shot learning in HINs and study a pioneering problem, HIN Few-Shot Node Classification (HIN-FSNC), which aims to generalize from node types with sufficient labeled samples to unseen node types with only a few labeled samples. However, existing HIN datasets contain just one labeled node type, so they cannot support a setting with unseen node types. To facilitate the investigation of HIN-FSNC, we propose a large-scale academic HIN dataset called HINFShot. It contains 1,235,031 nodes of four node types (author, paper, venue, institution), and all nodes, regardless of type, are divided into 80 classes. Finally, we conduct extensive experiments on HINFShot, and the results indicate a significant challenge in identifying novel classes of unseen node types under HIN-FSNC.
CCS Concepts: • Information systems → Data mining; • Computing methodologies → Supervised learning by classification.
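To make the few-shot node-classification setting concrete, below is a small, hypothetical sketch of how an N-way K-shot episode could be sampled from labeled nodes; the function name, arguments, and data layout are illustrative assumptions and not part of the HINFShot release.

```python
# Hypothetical sketch: sampling an N-way K-shot node-classification episode.
import random
from collections import defaultdict


def sample_episode(node_labels, n_way=5, k_shot=3, q_query=5, seed=None):
    """node_labels: dict mapping node id -> class label.
    Returns support and query sets as lists of (node_id, episode_class) pairs."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for node, label in node_labels.items():
        by_class[label].append(node)
    # Only classes with enough labeled nodes can form an episode.
    eligible = [c for c, nodes in by_class.items() if len(nodes) >= k_shot + q_query]
    classes = rng.sample(eligible, n_way)
    support, query = [], []
    for episode_class, c in enumerate(classes):
        nodes = rng.sample(by_class[c], k_shot + q_query)
        support += [(n, episode_class) for n in nodes[:k_shot]]
        query += [(n, episode_class) for n in nodes[k_shot:]]
    return support, query


# Toy usage: 200 labeled nodes spread over 10 classes.
labels = {node: node % 10 for node in range(200)}
support, query = sample_episode(labels, n_way=5, k_shot=3, q_query=5, seed=0)
```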