2021
DOI: 10.48550/arxiv.2110.04525
Preprint
Generating Disentangled Arguments with Prompts: A Simple Event Extraction Framework that Works

Cited by 1 publication
(1 citation statement)
References 0 publications
“…It reformulates various NLP tasks as cloze-style questions, and by doing so, the knowledge stored in PLMs can be fully exploited, enabling PLMs to achieve impressive performance in few-shot and zero-shot settings. Along this research line, various types of prompts have been explored, including discrete and continuous prompts (Gao et al., 2021a; Shin et al., 2020; Cui et al., 2021; Si et al., 2021; Li and Liang, 2021; Schick and Schütze, 2021b). In this work, we exploit prompts of different downstream tasks to assign various virtual semantic prototypes to each instance.…”
Section: Prompt-based Learning
confidence: 99%
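The citation statement above describes reformulating NLP tasks as cloze-style questions. The sketch below illustrates the general idea for text classification: a template inserts the input into a sentence with a [MASK] slot, and a verbalizer maps candidate filler words to task labels. The template, verbalizer, and keyword scorer here are hypothetical; a real system would score fillers with a pretrained masked language model (e.g. BERT), for which the toy keyword-overlap function stands in.

```python
# Minimal sketch of cloze-style prompting for classification.
# The template wraps the input around a [MASK] slot; the verbalizer
# maps candidate filler words to task labels (both hypothetical).

TEMPLATE = "{text} This is [MASK] news."

# Verbalizer: filler word at the [MASK] position -> task label.
VERBALIZER = {"sports": "SPORTS", "business": "BUSINESS"}

# Toy stand-in for PLM token probabilities: each filler word is
# associated with a keyword set, and a filler scores higher when
# the prompt shares more words with that set.
KEYWORDS = {
    "sports": {"game", "team", "coach", "season"},
    "business": {"market", "stock", "profit", "company"},
}

def score_filler(prompt: str, filler: str) -> int:
    """Toy score: keyword overlap between the prompt and the filler's set."""
    tokens = set(prompt.lower().replace(".", "").split())
    return len(tokens & KEYWORDS[filler])

def classify(text: str) -> str:
    """Build the cloze prompt and pick the best-scoring filler's label."""
    prompt = TEMPLATE.format(text=text)
    best = max(VERBALIZER, key=lambda w: score_filler(prompt, w))
    return VERBALIZER[best]

print(classify("The team won the game this season."))   # SPORTS
print(classify("The company reported record profit."))  # BUSINESS
```

In practice the scoring step would query the PLM for the probability of each verbalizer word at the [MASK] position, which is what lets prompt-based methods exploit pretrained knowledge in few-shot and zero-shot settings.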