2021
DOI: 10.48550/arxiv.2109.00720
Preprint

LightNER: A Lightweight Tuning Paradigm for Low-resource NER via Pluggable Prompting

Abstract: Most existing NER methods rely on extensive labeled data for model training and therefore struggle in low-resource scenarios with limited training data. Recently, prompt-tuning methods for pre-trained language models have achieved remarkable performance in few-shot learning by exploiting prompts as task guidance to reduce the gap between pre-training and downstream tuning. Inspired by prompt learning, we propose a novel lightweight generative framework with prompt-guided attention for low-resource NER (Light…
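The abstract's core recipe is to keep the pre-trained backbone fixed and let a small set of learnable prompt vectors steer attention toward the NER task. As a rough illustration of prompt-guided attention in general, not the paper's actual architecture, the PyTorch sketch below mixes trainable prompt vectors into frozen encoder states; the class name, prompt count, and residual fusion are all assumptions made for this example.

```python
import torch
import torch.nn as nn

class PromptGuidedAttention(nn.Module):
    """Hypothetical sketch: learnable prompt vectors are the only new
    parameters; tokens attend to them and fold their content back in,
    leaving the frozen backbone untouched ("pluggable" in spirit)."""

    def __init__(self, hidden_size: int, num_prompts: int = 10):
        super().__init__()
        # Trainable prompt embeddings (assumed size; not from the paper).
        self.prompts = nn.Parameter(torch.randn(num_prompts, hidden_size) * 0.02)
        self.scale = hidden_size ** -0.5

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden) from a frozen encoder.
        scores = hidden_states @ self.prompts.T * self.scale   # (B, T, P)
        attn = torch.softmax(scores, dim=-1)
        prompt_context = attn @ self.prompts                   # (B, T, H)
        return hidden_states + prompt_context                  # residual fusion

layer = PromptGuidedAttention(hidden_size=768)
x = torch.randn(2, 16, 768)        # stand-in for encoder hidden states
print(layer(x).shape)              # torch.Size([2, 16, 768])
```

Only `layer.prompts` would be optimized, which is what makes such a module lightweight to tune and cheap to swap between tasks.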

Cited by 5 publications (5 citation statements)
References 38 publications
“…Prompting's advantages and effectiveness in a wide range of NLP applications have been verified in recent literature, including text classification (Hu et al., 2021; Min et al., 2021), entity typing, few-shot learning (Xu et al., 2021a; Zhao et al., 2021), relation extraction (Han et al., 2021b; Sainz et al., 2021), knowledge probing (Zhong et al., 2021), named entity recognition (Chen et al., 2021b), machine translation (Tan et al., 2021; Wang et al., 2021b), and dialogue systems (Wang et al., 2021a).…”
Section: Related Work
confidence: 95%
“…In contrast, additive attention scores between word queries and image features were used to weight and compute the visual attention features from VGG-19 visual features (Simonyan and Zisserman, 2014). Chen S. et al. (2020) and Chen X. et al. (2021) extracted visual information into captions and proposed a softer method of image-text combination that improves the fusion of features from different modalities.…”
Section: Named Entity Recognition
confidence: 99%
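The additive attention mentioned in this statement (Bahdanau-style scoring between word queries and image-region features) can be made concrete with a minimal sketch. This is an assumed, generic formulation rather than the cited authors' exact model; class and dimension names are illustrative, and the region features stand in for VGG-19 outputs.

```python
import torch
import torch.nn as nn

class AdditiveVisualAttention(nn.Module):
    """Additive (Bahdanau-style) attention: each word query scores every
    visual region, and the softmax-weighted sum of regions gives a
    per-word visual context vector."""

    def __init__(self, word_dim: int, img_dim: int, attn_dim: int):
        super().__init__()
        self.w_q = nn.Linear(word_dim, attn_dim)
        self.w_k = nn.Linear(img_dim, attn_dim)
        self.v = nn.Linear(attn_dim, 1)

    def forward(self, words: torch.Tensor, regions: torch.Tensor) -> torch.Tensor:
        # words: (B, T, word_dim); regions: (B, R, img_dim)
        q = self.w_q(words).unsqueeze(2)                 # (B, T, 1, A)
        k = self.w_k(regions).unsqueeze(1)               # (B, 1, R, A)
        scores = self.v(torch.tanh(q + k)).squeeze(-1)   # (B, T, R)
        weights = torch.softmax(scores, dim=-1)
        return weights @ regions                         # (B, T, img_dim)

attn = AdditiveVisualAttention(word_dim=128, img_dim=512, attn_dim=64)
words = torch.randn(2, 12, 128)    # word representations
regions = torch.randn(2, 49, 512)  # e.g. a flattened 7x7 VGG-19 feature map
print(attn(words, regions).shape)  # torch.Size([2, 12, 512])
```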
“…Prompt-tuning for pre-trained language models is a rapidly emerging field in natural language processing [40, 46, 71] and has attracted a great deal of attention. Originating from GPT-3, prompt-tuning has been applied to a variety of tasks, including relation extraction [20], event extraction [21, 59], named entity recognition [5, 7], entity typing [13], and so on. To alleviate labor-intensive prompt engineering, [43] propose AUTOPROMPT, which searches for prompts with a gradient-based method to select label words and templates.…”
Section: Prompt-tuning
confidence: 99%
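As a concrete, hedged companion to this survey of prompt-tuning (soft prompts here, rather than AUTOPROMPT's discrete gradient-guided search), the sketch below prepends trainable prompt embeddings to a frozen backbone so that only the prompt parameters are updated. The class and parameter names are assumptions for illustration.

```python
import torch
import torch.nn as nn

class SoftPromptModel(nn.Module):
    """Minimal soft prompt-tuning sketch: trainable prompt embeddings are
    prepended to the token embeddings of a frozen model, and only the
    prompt receives gradient updates."""

    def __init__(self, embed: nn.Embedding, encoder: nn.Module, prompt_len: int = 8):
        super().__init__()
        self.embed, self.encoder = embed, encoder
        for p in self.parameters():      # freezes embed + encoder only;
            p.requires_grad = False      # soft_prompt is registered after this
        self.soft_prompt = nn.Parameter(
            torch.randn(prompt_len, embed.embedding_dim) * 0.02
        )

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        tok = self.embed(input_ids)                                    # (B, T, H)
        prompt = self.soft_prompt.unsqueeze(0).expand(tok.size(0), -1, -1)
        return self.encoder(torch.cat([prompt, tok], dim=1))           # (B, P+T, H)

embed = nn.Embedding(1000, 64)
enc_layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
model = SoftPromptModel(embed, encoder)
optimizer = torch.optim.Adam([model.soft_prompt], lr=1e-3)  # prompt-only training
print(model(torch.randint(0, 1000, (2, 10))).shape)         # torch.Size([2, 18, 64])
```

Freezing everything but a few prompt vectors is what makes this paradigm "lightweight": the backbone is shared across tasks, and only a tiny per-task parameter file needs to be stored.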