2021
DOI: 10.48550/arxiv.2112.10006
Preprint

Zero-shot and Few-shot Learning with Knowledge Graphs: A Comprehensive Survey

Abstract: Machine learning methods, especially deep neural networks, have achieved great success, but many of them often rely on a large number of labeled samples for training. In real-world applications, we often need to address sample shortage due to, e.g., dynamic contexts with emerging prediction targets and costly sample annotation. Therefore, low-resource learning, which aims to learn robust prediction models with insufficient resources (especially training samples), is now being widely investigated. Among all the low-resource…

Cited by 2 publications (3 citation statements)
References: 152 publications (469 reference statements)
“…Currently, there is little knowledge how the ML-based models can be transferred to remote areas where little or no OSM data is available. Such a task of generalizing a new ML model from only a few samples can be formulated as a few-shot learning task, which has recently received growing research interest in the ML community (Chen et al, 2021;Wang, Yao, et al, 2020). For geographical applications, few-shot learning techniques have been seldom applied.…”
Section: Introduction (mentioning)
confidence: 99%
“…To be specific, OntoPrompt can obtain 25.6% F1 with 1% data, in comparison to 5.2% in MQAEE and 3.4% in TEXT2EVENT. Although the performance of OntoPrompt on the full sample is slightly weaker than that of JMEE, which relies on external data augmentation, we believe that OntoPrompt can effectively identify triggers and arguments with less data dependence. [Footnote 4: We only report the performance of event argument classification due to page limit.]…”
Section: Results (mentioning)
confidence: 99%
“…To address the few-shot issue, on the one hand, researchers apply the meta-learning strategy to endow the new model the ability to optimize rapidly with the existing training knowledge or leverage transfer learning to alleviate the challenge of data-hungry [4,66]. Benefiting from the self-supervised pre-training on the large corpus, the pre-train-fine-tune paradigm has become the de facto standard for natural language processing (NLP).…”
Section: Introduction (mentioning)
confidence: 99%