Proceedings of the 2021 International Conference on Multimedia Retrieval
DOI: 10.1145/3460426.3463604
Multi-Initialization Graph Meta-Learning for Node Classification

Abstract: Meta-learning aims to acquire common knowledge from a large number of similar tasks and then adapt to unseen tasks within a few gradient updates. Existing graph meta-learning algorithms show appealing performance in a variety of domains such as node classification and link prediction. These methods find a single common initialization for all tasks and ignore the diversity of task distributions, which might be insufficient for multi-modal tasks. Recent approaches adopt a modulation network to generate task-spec…
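The "few gradient updates" adaptation the abstract refers to can be illustrated with a minimal MAML-style inner loop. This is a generic sketch, not the paper's actual method: the toy linear-regression task, the zero initialization, and the step sizes below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_and_grad(w, X, y):
    """Squared-error loss for a linear model y ~ X @ w, and its gradient."""
    err = X @ w - y
    return 0.5 * np.mean(err ** 2), X.T @ err / len(y)

def adapt(w_init, X, y, inner_lr=0.1, steps=3):
    """Adapt a shared initialization to one task with a few gradient updates."""
    w = w_init.copy()
    for _ in range(steps):
        _, g = loss_and_grad(w, X, y)
        w -= inner_lr * g
    return w

# A toy "task": recover a hidden weight vector from noisy observations.
w_true = np.array([1.0, -2.0])
X = rng.normal(size=(32, 2))
y = X @ w_true + 0.01 * rng.normal(size=32)

w0 = np.zeros(2)            # shared initialization (meta-learned in practice)
w_adapted = adapt(w0, X, y)
```

In full meta-learning, `w0` would itself be optimized across many tasks so that this short inner loop works well on all of them; the paper's point is that a single such `w0` can be a poor compromise when the task distribution is multi-modal.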

Cited by 4 publications (3 citation statements). References 21 publications.
“…However, the all-pair attention incurs O(N²) complexity and becomes a computation bottleneck that limits most Transformers to handling only small-sized graphs (with up to hundreds of nodes). For larger graphs, recent efforts have resorted to strategies such as sampling a small (relative to N) subset of nodes for attention computation [53] or using ego-graph features as input tokens [54]. However, these strategies sacrifice the expressivity needed to capture all-pair interactions among arbitrary nodes.…”
Section: Preliminary and Related Work
confidence: 99%
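The O(N²) cost this statement refers to comes from scoring every node pair. A minimal self-attention sketch (the dimensions and random weights here are illustrative, not from any cited model) makes the quadratic score matrix explicit:

```python
import numpy as np

rng = np.random.default_rng(0)

def all_pair_attention(H, Wq, Wk, Wv):
    """Vanilla all-pair attention over N node features H (shape N x d).

    The score matrix Q @ K.T has shape (N, N) -- this quadratic term is
    the bottleneck that limits standard Transformers to small graphs.
    """
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])       # (N, N): quadratic in N
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)            # row-wise softmax
    return A @ V

N, d = 200, 16
H = rng.normal(size=(N, d))
Wq, Wk, Wv = (0.1 * rng.normal(size=(d, d)) for _ in range(3))
out = all_pair_attention(H, Wq, Wk, Wv)
```

Sampling-based strategies like [53] replace the full (N, N) score matrix with scores against a fixed-size subset of nodes, trading the all-pair expressivity discussed above for linear cost in N.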
“…Meta-GNN (Zhou et al 2019a) follows the episodic training paradigm and uses GNNs as the meta-learner for few-shot node classification. Some works focus on augmenting representation to enhance receptive fields for efficient information propagation (Ding et al 2022; Zhao, Wang, and Xiang 2021; Huang and Zitnik 2020; Liu et al 2019, 2021; Lan et al 2020; Liu et al 2020). GFL (Yao et al 2020) draws support from auxiliary graphs to improve generalization ability on the target graph.…”
confidence: 99%
“…RALE and CNL (Liu et al 2021) aim to capture potential long-ranged dependencies to learn node embeddings. Some works extract subgraphs to augment the representation (Zhao, Wang, and Xiang 2021; Huang and Zitnik 2020; Ma et al 2020; Zhang et al 2020). As discussed, richer graph structures, including subgraphs and long-ranged dependencies, can be considered useful knowledge for meta-learning.…”
confidence: 99%