2021
DOI: 10.48550/arxiv.2110.08454
Preprint

Good Examples Make A Faster Learner: Simple Demonstration-based Learning for Low-resource NER

Abstract: Recent advances in prompt-based learning have shown impressive results on few-shot text classification tasks by using cloze-style language prompts. There have been attempts at prompt-based learning for NER which use manually designed templates to predict entity types. However, these two-step methods may suffer from error propagation (from entity span detection), need to prompt for all possible text spans, which is costly, and neglect the interdependency between the labels predicted for different spans in a sentence. …

Cited by 2 publications (3 citation statements)
References 19 publications
“…Schick et al. [28] proposed PET, an approach that defines pairs of cloze-question patterns and verbalizers to leverage the knowledge contained within pre-trained language models for downstream tasks. It has achieved outstanding performance across a wide range of NLP tasks [29][30][31], especially information extraction tasks [3,[32][33][34][35]. Cui et al. [32] proposed a template-based method for NER, treating NER as a language model ranking problem in a sequence-to-sequence framework, where the original sentence and a statement template filled with a candidate named entity span are regarded as the source sequence and the target sequence, respectively.…”
Section: Prompt Tuning
confidence: 99%
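
The pattern/verbalizer idea in the excerpt above can be made concrete with a small sketch. This is not Schick et al.'s implementation: the task, pattern string, label set, and verbalizer words below are invented purely for illustration, assuming access to a masked language model that returns a probability for each candidate word at the [MASK] position.

```python
# Illustrative sketch only: a toy PET-style pattern/verbalizer pair for a
# binary sentiment task, showing the cloze-pattern-plus-verbalizer idea.
# The pattern text and verbalizer words are made up for illustration and
# are not taken from Schick et al.'s code.

def pattern(text):
    # Cloze-style pattern: wrap the input and leave a masked slot for the LM.
    return f"{text} It was [MASK]."

# Verbalizer: map each task label to a word the LM can predict at [MASK].
VERBALIZER = {
    "positive": "great",
    "negative": "terrible",
}

def label_scores(mask_word_probs):
    # Score each label by the LM's probability of its verbalizer word at the
    # [MASK] position; the highest-scoring label is the prediction.
    return {label: mask_word_probs.get(word, 0.0)
            for label, word in VERBALIZER.items()}

if __name__ == "__main__":
    print(pattern("The plot was predictable and dull."))
    # A hypothetical LM distribution over candidate words at [MASK]:
    print(label_scores({"great": 0.02, "terrible": 0.81}))
```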
“…Cui et al. [32] proposed a template-based method for NER, treating NER as a language model ranking problem in a sequence-to-sequence framework, where the original sentence and a statement template filled with a candidate named entity span are regarded as the source sequence and the target sequence, respectively. Lee et al. [33] proposed demonstration-based learning, a simple yet effective way to incorporate automatically constructed auxiliary supervision. Instead of reformatting the NER task into a cloze-style template, they augment the original input instances by appending automatically created task demonstrations.…”
Section: Prompt Tuning
confidence: 99%
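
As a rough illustration of the input augmentation described in the excerpt above, the sketch below appends an automatically built demonstration to the original sentence. The demonstration template and the [SEP] separator are assumptions made for illustration, not the exact format used by Lee et al. [33].

```python
# Minimal sketch of demonstration-based input augmentation for NER: the
# original sentence is kept unchanged and an automatically constructed
# demonstration is appended after a separator. The demonstration template
# and the [SEP] separator are illustrative assumptions.

def build_demonstration(example_sentence, entities):
    # Turn a labeled example into a short natural-language demonstration,
    # e.g. "... Barack Obama is a PERSON. Hawaii is a LOCATION."
    parts = [f"{span} is a {label}." for span, label in entities]
    return example_sentence + " " + " ".join(parts)

def augment_input(input_sentence, demo_sentence, demo_entities):
    # The tagger still predicts labels only for tokens of input_sentence;
    # the appended demonstration serves as auxiliary supervision/context.
    demonstration = build_demonstration(demo_sentence, demo_entities)
    return f"{input_sentence} [SEP] {demonstration}"

if __name__ == "__main__":
    demo_sentence = "Barack Obama was born in Hawaii."
    demo_entities = [("Barack Obama", "PERSON"), ("Hawaii", "LOCATION")]
    print(augment_input("Marie Curie worked in Paris.",
                        demo_sentence, demo_entities))
```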
“…This paradigm requires no parameter updates and can achieve excellent results with just a few examples from downstream tasks. Since the effect of ICL is strongly related to the choice of demonstration examples, recent studies have explored several effective example-selection methods, e.g., similarity-based retrieval (Liu et al., 2021; Rubin et al., 2021), selection based on validation-set scores (Lee et al., 2021), and gradient-based methods (Wang et al., 2023b). These results indicate that reasonable example selection can improve the performance of LLMs.…”
Section: In-context Learning
confidence: 99%
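
To make the similarity-based selection idea concrete, here is a dependency-free toy sketch that ranks a pool of labeled examples by bag-of-words cosine similarity to the test input and keeps the top k as in-context demonstrations. Practical retrieval methods such as those cited above typically use dense sentence embeddings rather than word counts; the scoring function here is a deliberate simplification.

```python
# Toy sketch of similarity-based demonstration selection for in-context
# learning: rank a pool of labeled examples by similarity to the test input
# and keep the top-k as demonstrations. Bag-of-words cosine similarity is
# used here only to keep the sketch dependency-free.

from collections import Counter
import math

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counters.
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def select_demonstrations(test_input, pool, k=2):
    # pool: list of (example_text, label) pairs; return the k most similar.
    query = Counter(test_input.lower().split())
    scored = sorted(pool,
                    key=lambda ex: cosine(query, Counter(ex[0].lower().split())),
                    reverse=True)
    return scored[:k]

if __name__ == "__main__":
    pool = [("The movie was fantastic.", "positive"),
            ("Terrible service and cold food.", "negative"),
            ("An enjoyable, well-acted film.", "positive")]
    print(select_demonstrations("A fantastic, well-acted movie.", pool, k=2))
```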