2021
DOI: 10.48550/arxiv.2108.02035
Preprint

Knowledgeable Prompt-tuning: Incorporating Knowledge into Prompt Verbalizer for Text Classification

Abstract: Tuning pre-trained language models (PLMs) with task-specific prompts has been a promising approach for text classification. In particular, previous studies suggest that prompt-tuning has remarkable superiority in the low-data scenario over generic fine-tuning methods with extra classifiers. The core idea of prompt-tuning is to insert text pieces, i.e., a template, into the input and transform a classification problem into a masked language modeling problem, where a crucial step is to construct a projection, i.e., …
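
A minimal sketch of this template-plus-verbalizer idea, assuming a HuggingFace masked language model; the model name (bert-base-uncased), the template text, and the label words below are illustrative assumptions, not the paper's actual configuration:

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Assumed, illustrative choices -- not the paper's setup.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

text = "The team clinched the championship in the final minute."
prompt = f"{text} This article is about [MASK]."  # template inserted into the input

# Verbalizer: project label words in the vocabulary onto class labels.
verbalizer = {"SPORTS": "sports", "POLITICS": "politics", "SCIENCE": "science"}

inputs = tokenizer(prompt, return_tensors="pt")
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]  # vocabulary scores at the [MASK] slot

# Score each class by the logit of its label word at the masked position.
scores = {label: logits[0, tokenizer.convert_tokens_to_ids(word)].item()
          for label, word in verbalizer.items()}
print(max(scores, key=scores.get))  # e.g. SPORTS
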

Cited by 36 publications (42 citation statements)
References 19 publications
“…Prompting's advantage and effectiveness in a wide range of NLP applications have been verified in recent literature, including text classification (Hu et al, 2021; Min et al, 2021), entity typing, few-shot learning (Xu et al, 2021a; Zhao et al, 2021), relation extraction (Han et al, 2021b; Sainz et al, 2021), knowledge probing (Zhong et al, 2021), named entity recognition (Chen et al, 2021b), machine translation (Tan et al, 2021; Wang et al, 2021b) and dialogue systems (Wang et al, 2021a).…”
Section: Related Work (mentioning)
confidence: 95%
“…[20] propose an approach called PTR, which leverages logic rules to construct prompts with sub-prompts for many-class text classification. [22] propose an approach to incorporate an external knowledge graph into the verbalizer with calibration. [6] propose a knowledge-aware prompt-tuning approach that injects knowledge into prompt template design and answer construction.…”
Section: Prompt-tuning (mentioning)
confidence: 99%
“…Although the expected answer may be in the form of tokens, spans, or sentences in token-level prompt-learning, the predicted answer is always generated token by token. Tokens are usually mapped to the whole vocabulary or to a set of candidate words (Petroni et al 2019; Cui et al 2021; Han et al 2021; Adolphs, Dhuliawala, and Hofmann 2021; Hu et al 2021). Taking the PET model (Schick and Schütze 2021b,a) as an example, the sentiment classification input/label pair is reformulated to "x: [CLS] The Italian team won the European Cup.…”
Section: Token-level and Sentence-level (mentioning)
confidence: 99%